Forum ID: H1kjdOYlx
[{"section_index": "0", "section_name": "MODULAR MULTITASK REINFORCEMENT LEARNING WITH POLICY SKETCHES", "section_text": "Jacob Andreas, Dan Klein, and Sergey Levine\njda,klein, svlevine}@eecs.berkeley.edu\nWe describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate each task with a sequence of named subtasks. providing high-level structural relationships among tasks, but not providing the detailed guidance required by previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motiva- tions). Our approach associates every subtask with its own modular subpolicy. and jointly optimizes over full task-specific policies by tying parameters across. shared subpolicies. This optimization is accomplished via a simple decoupled. actor-critic training objective that facilitates learning common behaviors from dissimilar reward functions. We evaluate the effectiveness of our approach on a maze navigation game and a 2-D Minecraft-inspired crafting game. Both games. feature extremely sparse rewards that can be obtained only after completing a number of high-level subgoals (e.g. escaping from a sequence of locked rooms or collecting and combining various ingredients in the proper order). Experiments illustrate two main advantages of our approach. First, we outperform standard. baselines that learn task-specific or shared monolithic policies. Second, our method naturally induces a library of primitive behaviors that can be recombined to rapidly acquire policies for new tasks.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "This paper describes a framework for learning composable deep subpolicies in a multitask set- ting, guided only by abstract policy sketches We are interested in problems like the ones shown in Figure 1] with collections of tasks that involve sparse rewards and long-term plan- ning, but which share structure in the form of common subgoals or reusable high-level ac- tions. Our work aims to develop models that can learn efficiently from these sparse rewards and rapidly adapt to new tasks, by exploiting this shared structure and translating success on one task into progress on others. Our approach ultimately induces a library of high-level ac tions directly from symbolic annotations like the ones marked Kj and K, in the figure\nThis approach builds on a significant body of. research in reinforcement learning that focuses on hierarchical representations of behavior. In. these approaches, a high-level controller learns. a policy over high-level actions-known var-. iously as options. (Sutton et al. 1999, skills"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "t1: make planks I1 t2: make sticks. H2 b1: get wood K1 T1 b1: get wood K2 T1 b2: use workbench Tt2 b3: use toolshed Tt3 T3 Tt2 T1 Tt1\nFigure 1: Composing policies from subpolicies. Here. we have simplified versions of two tasks (make planks. and make sticks, each associated with its own policy. (IIj and II, respectively). These policies share an ini- tial high-level action b1: both require the agent to gei. wood before taking it to an appropriate crafting station. By enforcing that the agent initially follows the same subpolicy 1 in both tasks, we can learn a reusable rep. resentation of their shared structure..\nKonidaris & Barto 2007), or primitives (Hauser et al.[[2008)which are themselves implemente as policies over low-level actions in the environment. 
While one line of research (e.g. Daniel et al. (2012)) investigates learning hierarchical policies without any supervision, such hierarchies are empirically difficult to learn directly from unconstrained interaction (Hengst, 2002). The bulk of existing work instead relies on additional information (in the form of intermediate rewards, subtask completion signals, or intrinsic motivations) to guide the learner toward useful high-level actions. While effective, these approaches depend on state representations simple or structured enough that suitable reward signals can be effectively engineered by hand.
Here we focus on multitask learning of hierarchical policies from a weaker form of supervision: at training time, each task (τ1 and τ2 in Figure 1) is annotated with a sketch (K1 and K2) consisting of a sequence of high-level action symbols (b1, b2 and b3), with no information about how these actions should be implemented. Our approach associates each such high-level action with its own low-level subpolicy, and jointly optimizes over concatenated task-specific policies by tying parameters across shared subpolicies. Our thesis is that even the minimal information about high-level policy structure contained in a sketch provides enough of a learning signal to induce general, reusable subpolicies. Crucially, sketches are totally ungrounded in the representation of the world: they require no intervention in a simulator or environment model.
The present work may be viewed as an extension of recent approaches for learning compositional deep architectures from structured program descriptors (Andreas et al., 2016; Reed & de Freitas, 2015). Here we focus on learning in interactive environments with reinforcement training signals. This extension presents a variety of technical challenges. Concretely, our contributions are: (1) a general paradigm for multitask, hierarchical, deep reinforcement learning guided by abstract sketches of task-specific policies, and (2) a concrete agent architecture for learning in this paradigm, featuring a modular model structure and a multitask actor-critic training objective.
We evaluate our approach on two families of tasks: a maze navigation game (Figure 3a), in which the agent must navigate through a sequence of locked doors to reach a target room; and a 2-D Minecraft-inspired crafting game (Figure 3b), in which the agent must acquire particular resources by finding raw ingredients, combining them together in the proper order, and in some cases building intermediate tools that enable the agent to alter the environment itself. In both games, the agent receives a reward only after the final goal is accomplished. For the most challenging tasks, involving sequences of four or five high-level actions, a task-specific agent initially following a random policy essentially never discovers the reward signal.
We evaluate a modular agent architecture trained with guidance from policy sketches under several different data conditions: (1) when learning the full collection of tasks jointly via reinforcement, (2) in a zero-shot setting where a policy sketch is available for the held-out task, and (3) in an adaptation setting, where sketches are hidden and the agent must learn a policy over high-level actions. In all cases, our approach substantially outperforms standard policy optimization baselines."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "The agent representation we describe in this paper belongs to the broader family of hierarchical reinforcement learners described in the literature. As detailed in Section 3, our subpolicies may be viewed as a relaxation of the options framework first described by Sutton et al. (1999).
A large body of work describes techniques for learning options and related abstract actions, in both single- and multitask settings. Most techniques for learning the implementation of options rely on intermediate supervisory signals, e.g. to encourage exploration (Kearns & Singh, 2002) or completion of pre-defined subtasks (Kulkarni et al., 2016). An alternative family of approaches employs post-hoc analysis of already-learned policies to extract reusable sub-components (Stolle & Precup, 2002; Konidaris et al., 2011). Techniques for learning options with less guidance than the present work include Bacon & Precup (2015) and Vezhnevets et al. (2016); other general hierarchical policy learners include Daniel et al. (2012), Bakker & Schmidhuber (2004) and Menache et al. (2002).
Once a library of high-level actions exists, agents are faced with the problem of learning high-level (typically semi-Markov) policies that invoke appropriate high-level actions in sequence (Precup, 2000). The learning problem we describe in this paper is in some sense the direct dual to the problem of learning these high-level policies. There, the agent begins with an inventory of complex primitives and must learn to model their behavior and select among them; here we begin knowing the names of appropriate high-level actions but nothing about how they are implemented, and must infer implementations (but not, initially, high-level plans) from context. We expect that our approach could be coupled with a generic learner of options policies to provide a general mechanism for hierarchical RL; we leave this for future work.
Our approach is also inspired by a number of recent efforts toward compositional reasoning and interaction with structured deep models. Such models have been previously used for tasks involving question answering (Iyyer et al., 2014; Andreas et al., 2016) and relational reasoning (Socher et al., 2012), and more recently for multi-task, multi-robot transfer problems (Devin et al., 2016). In this work, as in existing approaches employing dynamically assembled modular networks, task-specific training signals are propagated through a collection of composed discrete structures with tied weights. Here the composed structures specify time-varying policies rather than feedforward computations, and their parameters must be learned via interaction rather than direct supervision. Another closely related family of models includes neural programmers (Neelakantan et al., 2015) and programmer-interpreters (Reed & de Freitas, 2015), which generate discrete computational structures but require supervision in the form of output actions or full execution traces.
A closely related line of work is the Hierarchical Abstract Machines (HAM) framework introduced by Parr & Russell (1998). Like our approach, HAMs begin with a representation of a high-level policy as an automaton (or a more general computer program; Andre & Russell, 2001) and use reinforcement learning to fill in low-level details. Variations on this architecture have considered a number of control constructs beyond the scope of the current paper (e.g. concurrency and recursion; Marthi et al., 2004). However, because these approaches attempt to learn a single representation of
the Q function for all subtasks and contexts, they require extremely strong formal assumptions about the form of the reward function and state representation (Andre & Russell, 2002) that the present work avoids by decoupling the policy representation from the value function.
Our approach also bears some resemblance to the instruction following literature in natural language processing. Existing work on instruction following falls into two broad categories: approaches that require highly structured (typically logical) action and world representations (Chen & Mooney, 2011; Artzi & Zettlemoyer, 2013; Andreas & Klein, 2015; Tellex et al., 2011), and approaches that require detailed supervision of action sequences or dense reward signals essentially equivalent to full action traces (Branavan et al., 2009; Vogel & Jurafsky, 2010; Mei et al., 2016). By contrast, the framework we describe here involves no formal or logical language for describing plans, and no supervised action sequences. Additionally, the modular model described in this paper naturally supports adaptation to tasks where no sketches are available, while all existing instruction following models learn a joint policy over instructions and actions, and are unable to function in the absence of instructions.
We consider a multitask reinforcement learning problem arising from a family of infinite-horizon discounted Markov decision processes in a shared environment. This environment is specified by a tuple $(S, A, P, \gamma)$, with $S$ a set of states, $A$ a set of low-level actions, $P : S \times A \times S \to \mathbb{R}$ a transition probability distribution, and $\gamma$ a discount factor. Each task $\tau \in T$ is then specified by a pair $(R_\tau, \rho_\tau)$, with $R_\tau : S \to \mathbb{R}$ a task-specific reward function and $\rho_\tau : S \to \mathbb{R}$ an initial distribution over states. For a fixed sequence $\{(s_i, a_i)\}$ of states and actions obtained from a rollout of a given policy, we will denote the empirical return starting in state $s_i$ as $q_i := \sum_{j \ge i} \gamma^{j-i} R_\tau(s_j)$. In addition to the components of a standard multitask RL problem, we assume that tasks are annotated with sketches $K_\tau$, each consisting of a sequence $(b_{\tau 1}, b_{\tau 2}, \ldots)$ of high-level symbolic labels drawn from a fixed vocabulary $B$. Our model associates each of these symbols with a randomly initialized modular subpolicy. By sharing each subpolicy across all tasks annotated with the corresponding symbol, our approach naturally learns the shared abstraction for the corresponding subtask, without requiring any information about the grounding of that task to be explicitly specified by annotation."}, {"section_index": "4", "section_name": "3.1 MODEL", "section_text": "We exploit the structural information provided by sketches by constructing for each symbol b a corresponding subpolicy π_b. At each timestep, a subpolicy may select either a low-level action a ∈ A or a special STOP action. We denote the augmented action space A+ := A ∪ {STOP}. While this framework is agnostic to the implementation of subpolicies, we are especially interested in the case where subpolicies are specified by deep networks. As shown in Figure 2, the experiments in this paper represent each π_b as a neural network whose input is a representation of the current state, and whose output is a distribution over A+. While all action spaces in our experiments are discrete, it is straightforward to instead allow this last layer to parameterize a mixed distribution over an underlying continuous action space and the STOP action.
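To make this concrete, a single subpolicy can be sketched as below. The PyTorch realization, the reserved STOP index, and the example symbol names are illustrative assumptions rather than the authors' released implementation; the 128-unit hidden layer follows the configuration reported in the experiments section.

import torch
import torch.nn as nn
from torch.distributions import Categorical

STOP = 0  # assumption: index 0 of the augmented action space A+ is reserved for STOP

class SubPolicy(nn.Module):
    # One modular subpolicy pi_b: state features -> distribution over A+ = A u {STOP}.
    def __init__(self, state_dim, num_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions + 1),  # one extra logit for STOP
        )

    def forward(self, state):
        return Categorical(logits=self.net(state))

# one subpolicy per symbol in the vocabulary B, shared across all tasks using that symbol
subpolicies = {b: SubPolicy(state_dim=32, num_actions=5) for b in ["get wood", "use workbench"]}
action = subpolicies["get wood"](torch.zeros(32)).sample()  # may be STOP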
These subpolicies may be viewed as options of the kind described by Sutton et al. (1999), with the key distinction that they have no initiation semantics, but are instead invokable everywhere, and have no explicit representation as a function from an initial state to a distribution over final states (instead implicitly using the STOP action to terminate).
Given a sketch, a task-specific policy Π_τ is formed by concatenating its associated subpolicies in sequence. In particular, the high-level policy maintains a subpolicy index i (initially 0), and executes actions from π_{b_i} until the STOP symbol is emitted, at which point control is passed to π_{b_{i+1}}. We may thus think of Π_τ as inducing a Markov chain over the state space S × B, with transitions given by:
$$(s, b_i) \to \begin{cases} (s', b_i) & \text{with probability } \sum_{a \in A} \pi_{b_i}(a \mid s)\, P(s' \mid s, a) \\ (s, b_{i+1}) & \text{with probability } \pi_{b_i}(\mathrm{STOP} \mid s) \end{cases}$$
Note that Π_τ is semi-Markov with respect to the projection of the augmented state space S × B onto the underlying state space S. We denote the complete family of task-specific policies Π := ∪_τ {Π_τ}, and let each π_b be an arbitrary function of the current environment state parameterized by some weight vector θ_b. The learning problem is to optimize over all θ_b to maximize the sum of expected discounted rewards $J(\Pi) := \sum_\tau J(\Pi_\tau) := \sum_\tau \mathbb{E}_{s_i \sim \Pi_\tau} \big[ \sum_i \gamma^i R_\tau(s_i) \big]$ across all tasks τ ∈ T.
Here that optimization is accomplished via a simple decoupled actor-critic method. In a standard policy gradient approach, with a single policy π with parameters θ, we compute gradient steps of the form (Williams, 1992):
$$\nabla_\theta J(\pi) = \sum_i \big(\nabla_\theta \log \pi(a_i \mid s_i)\big)\big(q_i - c(s_i)\big), \qquad (1)$$
where the baseline or "critic" c can be chosen independently of the future without introducing bias into the gradient. Recalling our previous definition of q_i as the empirical return starting from s_i, this form of the gradient corresponds to a generalized advantage estimator (Schulman et al., 2015) with λ = 1. Here c achieves close to the optimal variance (Greensmith et al., 2004) when it is set exactly equal to the state-value function $V_\pi(s_i) = \mathbb{E}_\pi q_i$ for the target policy π starting in state s_i.
Figure 2: Model overview. Each subpolicy π_b is uniquely associated with a symbol b and implemented as a neural network that maps from a state s_i to a distribution over A+; an action a_i is chosen by sampling from this distribution. Whenever the STOP action is sampled, control advances to the next subpolicy in the sketch.
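The control flow summarized in Figure 2 is easy to state in code. Below is a minimal rollout sketch, assuming the SubPolicy modules and STOP index from the sketch above and a Gym-style environment with reset/step; all of these interfaces are assumptions.

import torch

def rollout(env, subpolicies, sketch, max_steps=100):
    # Execute the task policy Pi_tau formed by concatenating subpolicies along the sketch.
    # Control starts in the first subpolicy and advances whenever STOP is sampled;
    # the rollout ends when the sketch is exhausted or the environment terminates.
    state, trace, i = env.reset(), [], 0
    for _ in range(max_steps):
        if i >= len(sketch):
            break  # all subtasks finished
        b = sketch[i]
        a = subpolicies[b](torch.as_tensor(state, dtype=torch.float32)).sample().item()
        if a == STOP:
            i += 1  # pass control to subpolicy b_{i+1}
            continue
        state, reward, done, _ = env.step(a - 1)  # shift past the reserved STOP index
        trace.append((state, a, b, reward))
        if done:
            break
    return trace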
The situation becomes slightly more complicated when generalizing to modular policies built by sequencing subpolicies. In this case, we will have one subpolicy per symbol but one critic per task. This is because subpolicies π_b might participate in a number of composed policies Π_τ, each associated with its own reward function R_τ. Thus individual subpolicies are not uniquely identified with value functions, and the aforementioned subpolicy-specific state-value estimator is no longer well-defined. We extend the actor-critic method to incorporate the decoupling of policies from value functions by allowing the critic to vary per-sample (that is, per-task-and-timestep) depending on the reward function with which the sample is associated. Noting that $\nabla_{\theta_b} J(\Pi) = \sum_{\tau : b \in K_\tau} \nabla_{\theta_b} J(\Pi_\tau)$, i.e. the expected reward across all tasks in which π_b participates, we have:
$$\nabla_{\theta_b} J(\Pi) = \sum_\tau \nabla_{\theta_b} J(\Pi_\tau) = \sum_\tau \sum_i \big(\nabla_{\theta_b} \log \pi_b(a_{\tau i} \mid s_{\tau i})\big)\big(q_i - c_\tau(s_{\tau i})\big), \qquad (2)$$
where each state-action pair $(s_{\tau i}, a_{\tau i})$ was selected by the subpolicy π_b in the context of the task τ.
Now minimization of the gradient variance requires that each c_τ actually depend on the task identity. (This follows immediately by applying the corresponding argument in Greensmith et al. (2004) individually to each term in the sum over τ in Equation 2.) Because the value function is itself unknown, an approximation must be estimated from data. Here we allow these c_τ to be implemented with an arbitrary function approximator parameterized by a vector η_τ. This is trained to minimize a squared error criterion, with gradients given by
$$\sum_i \big(\nabla_{\eta_\tau} c_\tau(s_i)\big)\big(q_i - c_\tau(s_i)\big). \qquad (3)$$
Alternative forms of the advantage estimator (e.g. the TD residual $R_\tau(s_i) + \gamma V_\tau(s_{i+1}) - V_\tau(s_i)$, or any other member of the GAE family) can be easily substituted by simply maintaining one such estimator per task. Experiments (Section 4.3) show that conditioning on both the state and the task identity results in noticeable performance improvements, suggesting that the variance reduction provided by this objective is important for efficient joint learning of modular policies.
Algorithm 1 DO-STEP(Π, curriculum)
1: 𝒟 ← ∅
2: while |𝒟| < D do
3:   τ ∼ curriculum(·)   ▷ sample task τ from curriculum (Section 3.3)
4:   d = {(s_i, a_i, b_i = K_{τ,i}, q_i, τ), ...} ∼ Π_τ   ▷ do rollout
5:   𝒟 ← 𝒟 ∪ d
6: for b ∈ B, τ ∈ T do
7:   d = {(s_i, a_i, b′, q_i, τ′) ∈ 𝒟 : b′ = b, τ′ = τ}
8:   θ_b ← θ_b + (α/|d|) Σ_d (∇ log π_b(a_i|s_i))(q_i − c_τ(s_i))   ▷ update policy
9:   η_τ ← η_τ + (β/|d|) Σ_d (∇ c_τ(s_i))(q_i − c_τ(s_i))   ▷ update critic
The complete procedure for computing a single gradient step is given in Algorithm 1. (The outer training loop over these steps, which is driven by a curriculum learning procedure, is described in the following section and specified in Algorithm 2.) This is an on-policy algorithm. In each step, the agent samples tasks from a task distribution provided by a curriculum (described in the following subsection). The current family of policies Π is used to perform rollouts in each sampled task, accumulating the resulting tuples of (states, low-level actions, high-level symbols, rewards, and task identities) into a dataset 𝒟. Once 𝒟 reaches a maximum size D, it is used to compute gradients w.r.t. both policy and critic parameters, and the parameter vectors are updated accordingly. The step sizes α and β in Algorithm 1 can be chosen adaptively using any first-order method.
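A minimal sketch of the update in lines 6-9 of Algorithm 1: samples are grouped by their (symbol, task) origin, each subpolicy is updated with the advantage computed from the critic of the task the sample came from (Equation 2), and each critic is regressed toward the empirical returns (Equation 3). The tensor plumbing and optimizer handling are assumptions.

import torch

def do_step(batch, subpolicies, critics, pi_opt, c_opt):
    # batch: tuples (state, action, symbol b, empirical return q, task tau), as in Algorithm 1.
    # critics[tau] is the per-task baseline c_tau; subpolicies[b] is shared across tasks.
    pi_loss, c_loss = 0.0, 0.0
    for s, a, b, q, tau in batch:
        s = torch.as_tensor(s, dtype=torch.float32)
        adv = q - critics[tau](s)                        # q_i - c_tau(s_i)
        logp = subpolicies[b](s).log_prob(torch.as_tensor(a))
        pi_loss = pi_loss - logp * adv.detach()          # policy gradient term, Eq. 2
        c_loss = c_loss + 0.5 * adv.pow(2).sum()         # squared error for the critic, Eq. 3
    pi_opt.zero_grad(); c_opt.zero_grad()
    (pi_loss + c_loss).backward()
    pi_opt.step(); c_opt.step()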
For complex tasks, like the one depicted in Figure 3b, it is difficult for the agent to discover any states with positive reward until many subpolicy behaviors have already been learned. It is thus a better use of the learner's time to focus on "easy" tasks, where many rollouts will result in high reward from which appropriate subpolicy behavior can be inferred. But there is a fundamental tradeoff involved here: if the learner spends too much time on easy tasks before being made aware of the existence of harder ones, it may overfit and learn subpolicies that no longer generalize or exhibit the desired structural properties.
To avoid both of these problems, we use a curriculum learning scheme (Bengio et al., 2009) that allows the model to smoothly scale up from easy tasks to more difficult ones while avoiding overfitting. Initially the model is presented with tasks associated with short sketches. Once average reward on all these tasks reaches a certain threshold, the length limit is incremented. We assume that rewards across tasks are normalized with maximum achievable reward 0 < q_i < 1. Let Êr_τ denote the empirical estimate of the expected reward for the current policy on task τ. Then at each timestep, tasks are sampled in proportion to 1 − Êr_τ, which by assumption must be positive. Experiments show that both components of this curriculum learning scheme improve the rate at which the model converges to a good policy (Section 4.3).
The complete curriculum-based training procedure is specified in Algorithm 2. Initially, the maximum sketch length ℓ_max is set to one, and the curriculum is initialized to sample length-1 tasks uniformly. (Neither of the environments we consider in this paper features any length-1 tasks; in this case, observe that Algorithm 2 will simply advance to length-2 tasks without any parameter updates.) For each setting of ℓ_max, the algorithm uses the current collection of task policies Π to compute and apply the gradient step described in Algorithm 1. The rollouts obtained from the call to DO-STEP can also be used to compute reward estimates Êr_τ; these estimates determine a new task distribution for the curriculum. The inner loop is repeated until the minimum empirical reward across the current tasks exceeds the threshold r_good, at which point ℓ_max is incremented and the process is repeated over a (now-expanded) collection of tasks.
Algorithm 2 TRAIN-POLICIES()
1: Π = INIT()   ▷ initialize subpolicies randomly
2: ℓ_max ← 1
3: loop
4:   r_min ← −∞
5:   T′ = {τ ∈ T : |K_τ| ≤ ℓ_max}
6:   curriculum(·) = Unif(T′)   ▷ initialize ℓ_max-step curriculum uniformly
7:   while r_min < r_good do
8:     DO-STEP(Π, curriculum)   ▷ update parameters (Algorithm 1)
9:     Z = Σ_{τ∈T′} [1 − Êr_τ]
10:    curriculum(τ) = 1[τ ∈ T′] (1 − Êr_τ)/Z, ∀τ ∈ T
11:    r_min ← min_{τ∈T′} Êr_τ
12:  ℓ_max ← ℓ_max + 1
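The curriculum logic of Algorithm 2 can be sketched compactly: tasks whose sketches are no longer than ℓ_max are sampled in proportion to 1 − Êr_τ, and ℓ_max grows once the worst active task clears r_good. The reward-estimation interface is an assumption.

import random

def train_policies(sketches, est_reward, do_step, r_good=0.8, max_len=5):
    # sketches: {task: [symbol, ...]}; est_reward(tau) returns the running estimate of
    # expected reward on task tau, normalized to [0, 1] as assumed in the text.
    l_max = 1
    while l_max <= max_len:
        active = [t for t in sketches if len(sketches[t]) <= l_max]
        while active and min(est_reward(t) for t in active) < r_good:
            # harder tasks (low estimated reward) are sampled more often
            weights = [1.0 - est_reward(t) for t in active]
            tau = random.choices(active, weights=weights, k=1)[0]
            do_step(tau)  # collect rollouts on tau and apply Algorithm 1
        l_max += 1  # current tasks solved well enough; admit longer sketches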
As described in the introduction, we evaluate the performance of our approach in two environments: a maze navigation game and a crafting game. Both games involve nontrivial low-level control: agents must learn to avoid obstacles and interact with various kinds of objects. But the environments also feature hierarchical structure: rewards are accessible only after the agent has completed two to five high-level actions in the appropriate sequence.
In all our experiments, we implement each subpolicy as a multilayer perceptron with ReLU nonlinearities and a hidden layer with 128 hidden units, and each critic as a linear function of the current state. Each subpolicy network receives as input a set of features describing the current state of the environment, and outputs a distribution over actions. The agent acts at every timestep by sampling from this distribution. The gradient steps given in lines 8 and 9 of Algorithm 1 are implemented using RMSProp (Tieleman, 2012) with a step size of 0.001 and gradient clipping to a unit norm. We take the batch size parameter D in Algorithm 1 to be 2000, and set γ = 0.9 in both environments. For curriculum learning, the improvement threshold r_good is set to 0.8.
The maze environment (Figure 3a) corresponds closely to the "light world" described by Konidaris & Barto (2007). The agent is placed in a discrete world consisting of a series of rooms, some of which are connected by doors. Some doors require that the agent first pick up a key to open them. For our experiments, each task corresponds to a goal room (always at the same position relative to the agent's starting position) that the agent must reach by navigating through a sequence of intermediate rooms. The agent has one sensor on each side of its body, which reports the distance to keys, closed doors, and open doors in the corresponding direction. Sketches specify a particular sequence of directions for the agent to traverse between rooms to reach the goal. Mazes are sampled with random sizes and random decisions about whether to connect rooms with open doors, locked doors, or no doors. The sketch always corresponds to a viable traversal from the start to the goal position, but other (possibly shorter) traversals may also exist.
The crafting environment (Figure 3b) is inspired by the popular game Minecraft, but is implemented in a 2-D grid world. The agent may interact with some objects in the world by facing them and executing a special INTERACT action. Interacting with raw materials initially scattered around the environment causes them to be added to an inventory. Interacting with different crafting stations causes objects in the agent's inventory to be combined or transformed into other objects. Each task in this game corresponds to some crafted object the agent must produce; the most complicated goals require the agent to also craft intermediate ingredients, and in some cases build tools (like a pickaxe and a bridge) to reach ingredients located in initially inaccessible regions of the environment.
Figure 3: Example tasks from the environments used in this paper. (a) In the maze environment, the agent must reach a goal position by traversing right (1), down (2) and down again (3) through a sequence of rooms, some of which may have locked doors. (b) In the crafting environment, an agent seeking to pick up the gold nugget in the top corner must first collect wood (1) and iron (2), use a workbench to turn them into a bridge (3), and use the bridge to cross the water (4).
A complete listing of tasks and sketches is given in Appendix A."}, {"section_index": "5", "section_name": "4.2 MULTITASK LEARNING", "section_text": "The primary experimental question in this paper is whether the extra structure provided by policy sketches alone is enough to enable fast learning of coupled policies across tasks. To evaluate this, we compare our modular approach to two policy gradient baselines (one that learns an independent policy for each task and one that learns a joint policy across all tasks) as well as a critic-only Q reader baseline. For the independent model, task-specific policies are represented by networks with the same structure as the modular subpolicies. The joint model conditions both on these environment features and on a feature vector encoding the complete sketch. The Q reader forms the same joint state and action space described in Section 3.1, and learns a single feedforward network to map from both environment states and representations of action symbols onto Q values. This baseline can be viewed either as a chain-structured hierarchical abstract machine with a learned state abstractor
(Andre & Russell, 2002), or as a standard instruction following baseline from the natural language processing literature (Vogel & Jurafsky, 2010).
Figure 4: Comparing modular learning from sketches with standard RL baselines. Modular is the approach described in this paper, while Independent learns a separate policy for each task, Joint learns a shared policy that conditions on the task identity, and Q reader learns a single network to map from states and action symbols to Q values. Performance for the best iteration of the (off-policy) Q reader is plotted. (a) Performance of the three models in the maze environment. (b) Performance in the crafting environment. (c) Individual task performance for the modular model in the crafting domain. Colors correspond to task length. It can be seen that the sharp steps in the learning curve correspond to increases of ℓ_max in the curriculum. The modular approach is eventually able to achieve high reward on all tasks, while the baseline models perform considerably worse on average.
Figure 5: Ablation experiments. (a) The critic: lines labeled "task" include a baseline that varies with the task identity, while lines labeled "state" include a baseline that varies with the state identity. Estimating a baseline that depends on both the representation of the current state and the identity of the current task is better than either alone or a constant baseline. (b) The curriculum: lines labeled "length" use a curriculum with iteratively increasing lengths, while lines labeled "weight" sample tasks in inverse proportion to their current reward. Adjusting the sampling distribution based on both task length and current performance improves convergence.
Learning curves for baselines and the modular model are shown in Figure 4. It can be seen that in both the maze domain and the crafting domain, our approach substantially outperforms the baselines: it induces policies with substantially higher average reward and converges more quickly than the policy gradient baselines. It can further be seen in Figure 4c that after policies have been learned on simple tasks, the model is able to rapidly adapt to more complex ones, even when the longer tasks involve high-level actions not required for any of the short tasks (Appendix A).
Having demonstrated the overall effectiveness of our approach, our remaining experiments explore (1) the importance of various components of the training procedure, and (2) the learned models' ability to generalize or adapt to held-out tasks. For compactness, we restrict our consideration to
the crafting domain, which features a larger and more diverse range of tasks and high-level actions.
In addition to the overall modular parameter-tying structure induced by our sketches, the key components of our training procedure are the decoupled critic and the curriculum. Our next experiments investigate the extent to which these are necessary for good performance.
To evaluate the critic, we consider three ablations: (1) removing the dependence of the model on the environment state, in which case the baseline is a single scalar per task; (2) removing the dependence of the model on the task, in which case the baseline is a conventional generalized advantage estimator; and (3) removing both, in which case the baseline is a single scalar, as in a vanilla policy gradient approach. Results are shown in Figure 5a. Introducing both state and task dependence into the baseline leads to faster convergence of the model: the approach with a constant baseline achieves less than half the overall performance of the full critic after 3 million episodes. Introducing task and state dependence independently improves this performance; combining them gives the best result.
We also investigate two aspects of our curriculum learning scheme: starting with short examples and moving to long ones, and sampling tasks in inverse proportion to their accumulated reward. Experiments are shown in Figure 5b. We again see that both components are essential for good performance. Sampling uniformly across all tasks of the target length results in slow convergence.
In our final experiments, we consider the model's ability to generalize to new tasks unseen at training time. We consider two evaluation conditions: a zero-shot setting, in which the model is provided a sketch for the new task and must immediately achieve good performance, and an adaptation setting, in which no sketch is provided and the model must learn the form of a suitable sketch by interacting with the new task.
We hold out two length-four tasks from the full inventory used in Section 4.2 and train on the remaining tasks. For zero-shot experiments, we simply form the concatenated policy described by the sketches of the held-out tasks, and repeatedly execute this policy (without learning) in order to obtain an estimate of its effectiveness. For adaptation experiments, we consider ordinary reinforcement learning over B rather than A, implementing the high-level learner with the same agent architecture as described in Section 3.1. Note that the Independent baseline cannot be applied to the zero-shot evaluation, while the joint baseline cannot be applied to the adaptation evaluation (because it depends on pre-specified sketch features). Results are shown in Table 1. The held-out tasks are sufficiently challenging that the baselines achieve negligible reward, while the modular model does comparatively well.
Table 1: Model performance under various evaluation conditions. MT is the multitask training condition described in Section 4.2, while 0-S and Ad. are respectively the zero-shot and adaptation experiments described in Section 4.4."}, {"section_index": "6", "section_name": "5 CONCLUSIONS", "section_text": "We have described an approach for multitask learning of neural network policies guided by symbolic policy sketches. By associating each symbol appearing in a sketch with a modular neural subpolicy, we have shown that it is possible to build agents that share behavior across tasks in order to achieve success in tasks with sparse and delayed rewards. This process induces an inventory of reusable and interpretable subpolicies which can be employed for zero-shot generalization when further sketches are available, and hierarchical reinforcement learning when they are not. Our work suggests that these sketches, which are easy to produce and require no grounding in the environment, provide an effective scaffold for learning hierarchical policies from minimal supervision. We have released our"}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "JA is supported by a Facebook Graduate Fellowship and a Huawei / Berkeley AI fellowship."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "David Andre and Stuart Russell. Programmable reinforcement learning agents. In Advances in Neural Information Processing Systems, 2001.
Jacob Andreas and Dan Klein.
Alignment-based compositional semantics for instruction following. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics, 2016.
Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the International Conference on Machine Learning, pp. 41-48. ACM, 2009.
S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. Reinforcement learning for mapping instructions to actions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 82-90. Association for Computational Linguistics, 2009.
David L. Chen and Raymond J. Mooney. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, volume 2, pp. 1-2, 2011.
Christian Daniel, Gerhard Neumann, and Jan Peters. Hierarchical relative entropy policy search. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 273-281, 2012.
Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088, 2016.
Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471-1530, 2004.
Kris Hauser, Timothy Bretl, Kensuke Harada, and Jean-Claude Latombe. Using motion primitives in probabilistic sample-based planning for humanoid robots. In Algorithmic Foundation of Robotics, pp. 507-522. Springer, 2008.
Bernhard Hengst. Discovering hierarchy in reinforcement learning with HEXQ. In ICML, volume 2, 2002.
Doina Precup. Temporal abstraction in reinforcement learning. PhD thesis, 2000.
George Konidaris and Andrew G. Barto. Building portable options: Skill transfer in reinforcement learning. In IJCAI, volume 7, pp. 895-900, 2007.
Ishai Menache, Shie Mannor, and Nahum Shimkin. Q-cut: dynamic discovery of sub-goals in reinforcement learning. In European Conference on Machine Learning, pp. 295-306. Springer, 2002.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, 1999.
Ronald J. Williams.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the National Conference on Artificial Intelligence, 2011."}, {"section_index": "9", "section_name": "A TASKS AND SKETCHES", "section_text": "The complete list of tasks, sketches, and symbols is given below. Tasks marked with an asterisk* are held out for the generalization experiments described in Section 4.4, but included in the multitask training experiments in Sections 4.2 and 4.3.
Goal: Sketch
Maze environment
goal1: left left
goal2: left down
goal3: right down
goal4: up left
goal5: up right
goal6: up right up
goal7: down right up left
goal8: left down
goal9: right down down
goal10: left up right
Crafting environment
make plank: get wood, use toolshed
make stick: get wood, use workbench
make cloth: get grass, use factory
make rope: get grass, use toolshed
make bridge: get iron, get wood, use factory
make bed*: get wood, use toolshed, get grass, use workbench
make axe*: get wood, use workbench, get iron, use toolshed
make shears: get wood, use workbench, get iron, use workbench
get gold: get iron, get wood, use factory, use bridge
get gem: get wood, use workbench, get iron, use toolshed, use axe"}]
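Since sketches are nothing more than symbol sequences, the crafting-environment listing above can be represented directly as data; for example:

# Crafting-environment sketches from Appendix A (tasks marked * are held out).
SKETCHES = {
    "make plank":  ["get wood", "use toolshed"],
    "make stick":  ["get wood", "use workbench"],
    "make cloth":  ["get grass", "use factory"],
    "make rope":   ["get grass", "use toolshed"],
    "make bridge": ["get iron", "get wood", "use factory"],
    "make bed*":   ["get wood", "use toolshed", "get grass", "use workbench"],
    "make axe*":   ["get wood", "use workbench", "get iron", "use toolshed"],
    "make shears": ["get wood", "use workbench", "get iron", "use workbench"],
    "get gold":    ["get iron", "get wood", "use factory", "use bridge"],
    "get gem":     ["get wood", "use workbench", "get iron", "use toolshed", "use axe"],
}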
Forum ID: r1w7Jdqxl
[{"section_index": "0", "section_name": "COLLABORATIVE E DEEP EMBEDDING VIA DUAL NETWORKS", "section_text": "Yilei Xiong & Dahua Lin\nDepartment of Information Engineering The Chinese University of Hong Kong\n[niu.haoying,cheng.iiefeng,li.zhenguo}@huawei.com"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Despite the long history of research on recommender systems, current approaches. still face a number of challenges in practice, e.g. the difficulties in handling new. items, the high diversity of user interests, and the noisiness and sparsity of ob-. servations. Many of such difficulties stem from the lack of expressive power to. capture the complex relations between items and users. This paper presents a. new method to tackle this problem, called Collaborative Deep Embedding. In. this method, a pair of dual networks, one for encoding items and the other for. users, are jointly trained in a collaborative fashion. Particularly, both networks. produce embeddings at multiple aligned levels, which, when combined together. can accurately predict the matching between items and users. Compared to existing. methods, the proposed one not only provides greater expressive power to capture. complex matching relations, but also generalizes better to unseen items or users On multiple real-world datasets, this method outperforms the state of the art.."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "What do consumers really want? - this is a question to which everyone wishes to have an answer. Over the past decade, the unprecedented growth of web services and online commercial platforms. such as Amazon, Netflix, and Spotify, gives rise to a vast amount of business data, which contain. valuable information about the customers. However, \"data don't speak for themselves\". To accurately. predict what the customers want, one needs not only the data, but also an effective means to extract useful messages therefrom.\nThere has been extensive study on recommender systems. Existing methods roughly fall into twc categories, namely content-based filtering (Pazzani & Billsus|2007) and collaborative filtering (Mnil & Salakhutdinov2008] Hu et al.]2008] [Yu et al.]2009]. The former focuses on extracting relevant features from the content, while the latter attempts to exploit the common interest among groups o1 users. In recent efforts, hybrid methods (Agarwal & Chen2009] [Van den Oord et al.[[2013) tha combine both aspects have also been developed.\nWhereas remarkable progress has been made on this topic, the state of the art remains far fror satisfactory. The key challenges lie in several aspects. First, there is a large semantic gap between th true cause of a matching and what we observe from the data. For example, what usually attracts a boo consumer is the implied emotion that one has to feel between the lines instead of the occurrence of certain words. It is difficult for classical techniques to extract such deep meanings from th observations. Second, the cold-start issue, namely making predictions for unseen items or user has not been well addressed. Many collaborative filtering methods rely on the factorization of th matching matrix. Such methods implicitly assume that all the users and items are known in advanc and thus are difficult to be applied in real-world applications, especially online services.\nThe success of deep learning brings new inspiration to this task. In a number of areas, including image classification (Krizhevsky et al. 
2012), speech recognition (Hinton et al., 2012), and natural language understanding (Socher et al., 2011), deep learning techniques have substantially pushed forward the state of the art. The power of deep networks in capturing complex variations and bridging semantic gaps has been repeatedly shown in previous study. However, deep models were primarily used for classification or regression, e.g. translating images to sentences. How deep networks can be used to model cross-domain relations remains an open question.
On a number of real-world tasks, the proposed method yields significant improvement over the current state of the art. It is worth stressing that whereas our focus is on the matching between items and users, Collaborative Deep Embedding is a generic methodology, which can be readily extended to model other kinds of cross-domain relations."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Existing methods for recommendation roughly fall into two categories: content-based methods (Pazzani & Billsus, 2007) and collaborative filtering (CF) (Mnih & Salakhutdinov, 2008; Hu et al., 2008; Yu et al., 2009). Specifically, content-based methods rely primarily on feature representations of the content, in which recommendations are often made based on feature similarity (Slaney et al., 2008). Following this, there are also attempts to incorporate additional information, such as meta-data of users, to further improve the performance (McFee et al., 2012). Instead, collaborative filtering exploits the interaction between users and items. A common approach to CF is to derive latent factors of both users and items through matrix factorization, and measure the degree of matching by their inner products. Previous study (Ricci et al., 2011) showed that CF methods tend to have higher recommendation accuracy than content-based methods, as they directly target the recommendation task. However, practical use of CF is often limited by the cold-start problem: it is difficult to recommend items without a sufficient amount of use history. Issues like this motivated hybrid methods (Agarwal & Chen, 2009; Van den Oord et al., 2013) that combine both aspects of information, which have showed encouraging improvement. Our exploration is also along this line.
Despite the progress on both families of methods, the practical performance of the state of the art still leaves a lot to be desired. This, to a large extent, is due to the lack of capability of capturing complex variations in interaction patterns. Recently, deep learning (Bengio, 2009) emerges as an important technique in machine learning. In a number of successful stories (Krizhevsky et al., 2012; Hinton et al., 2012; Socher et al., 2011), deep models have demonstrated remarkable representation power in capturing complex patterns. This power has been exploited by some recent work for recommendation. Van den Oord et al. (2013) applies deep learning for music recommendation. It uses the latent item vector learned by CF as ground truth to train a deep network for extracting content features, obtaining considerable performance gain. However, the latent vectors for known users and items are not improved. Wang & Wang (2014) proposed an extension to this method, which concatenates both the CF features and the deep features, resulting in slight improvement.
In this work, we aim to explore deep neural networks for learning the matching relations across two domains, with our focus placed on the matching between items and users.
Specifically, we propose a new framework called Collaborative Deep Embedding, which comprises a pair of dual networks: one for encoding items and the other for users. Each network contains multiple embedding layers that are aligned with their dual counterparts of the other network. Predictions can then be made by coupling these embeddings. Note that unlike a conventional network, the dual networks are trained on two streams of data. In this paper, we devise an algorithm that can jointly train both networks using dual mini-batches. Compared to previous methods, this method not only narrows the semantic gap through a deep modeling architecture, but also provides a natural way to generalize: new items and new users can be encoded by the trained networks, just like those present in the training stage.
Wang & Blei (2011) showed that CF and topic modeling, when combined, can benefit each other. Inspired by this, Wang et al. (2015) proposed Collaborative Deep Learning (CDL), which incorporates CF and deep feature learning with a combined objective function. This work represents the latest advances in recommendation methods. Yet, its performance is still limited by several issues, e.g. the difficulties in balancing diversified objectives and the lack of effective methods for user encoding. An important aspect that distinguishes our work from CDL and other previous methods is that it encodes both items and users through a pair of deep networks that are jointly trained, which substantially enhances the representation power on both sides. Moreover, the objective function of our learning framework directly targets the recommendation accuracy, which also leads to better performance.
At the heart of a recommender system is a matching model, namely, a model that can predict whether a given item matches the interest of a given user. Generally, this can be formalized as below. Suppose there are m users and n items, respectively indexed by i and j. Items are usually associated with inherent features, e.g. the descriptions or contents. Here, we use x_j to denote the observed features of the j-th item. However, inherent information for users is generally very limited and often irrelevant. Hence, in most cases, users are primarily characterized by their history, i.e. the items they have purchased or rated. Specifically, the user history can be partly captured by a matching matrix R ∈ {0, 1}^{m×n}, where R(i, j) = 1 indicates that the i-th user purchased the j-th item and gave a positive rating. Note that R is often an incomplete reflection of the user interest: it is not uncommon that a user does not purchase or rate an item that he/she likes."}, {"section_index": "4", "section_name": "3.1 DUAL EMBEDDING", "section_text": "To motivate our approach, we begin with a brief revisit of collaborative filtering (CF), which is widely adopted in practical recommender systems. The basic idea of CF is to derive vector representations for both users and items by factorizing the matching matrix R. A representative formulation in this family is the Weighted Matrix Factorization (WMF) (Hu et al., 2008), which adopts an objective function as below:
$$\sum_i \sum_j c_{ij} (R_{ij} - u_i^\top v_j)^2 + \lambda_u \sum_i \|u_i\|^2 + \lambda_v \sum_j \|v_j\|^2 \qquad (1)$$
Here, u_i and v_j denote the vector representations of the i-th user and the j-th item, c_{ij} the confidence coefficient of an observed entry, and λ_u, λ_v the regularization coefficients. Underlying such methods lies a common assumption, namely, all users and items must be known a priori.
As a result, they will face fundamental difficulties when handling new items and new users.
Encoding Networks. In this work, we aim to move beyond this limitation by exploring an alternative approach. Instead of pursuing the embeddings of a given set of items and users, our approach jointly learns a pair of encoding networks, respectively for items and users. Compared to CF, the key advantage of this approach is that it is generalizable by nature. When new items or new users come, their vector embeddings can be readily derived using the learned encoders.
Generally, the items can be encoded based on their own inherent features, using, for example, an auto-encoder. The key question here, however, is how to encode users, which, as mentioned, have no inherent features. Again, we revisit conventional CF methods such as WMF and find that in these methods, the user representations can be expressed as:
$$u_i = \arg\min_u \sum_j c_{ij} \| R_{ij} - u^\top v_j \|^2 + \lambda_u \|u\|^2 = (V C_i V^\top + \lambda_u I)^{-1} V C_i R_i \qquad (2)$$
The analysis above reveals that u_i is a linear transform of r_i, as u_i = W_u r_i, where the transform matrix W_u depends on the item embeddings V. This motivates our idea of user encoding, that is, to use a deep neural network instead of the linear transform above, as
$$u_i = g(r_i; W_u) \qquad (3)$$
where g denotes a nonlinear transform based on a deep network with parameters W_u. As we will show in our experiments, by drawing on the expressive power of deep neural networks, the proposed way of user encoding can substantially improve the prediction accuracy.
Overall Formulation. By coupling an item-network denoted by f(x_j; W_v) and a user-network g as introduced above, we can predict the matching of any given pair of user and item based on the inner product of their embeddings, as ⟨f(x_j; W_v), g(r_i; W_u)⟩. The inputs to these networks include x_j, the inherent feature of the given item, and r_i, the history of the given user on a set of reference items. With both encoding networks, we formulate the learning objective as follows:
$$\min_{W_u, W_v} \sum_i \sum_j c_{ij} \| R_{ij} - \langle f(x_j; W_v), g(r_i; W_u) \rangle \|^2 \qquad (4)$$
Here, X = [x_1, ..., x_n] denotes the input features of all reference items. This formulation differs from previous ones in two key aspects: (1) Both users and items are encoded using deep neural networks. The learning objective above encourages the cooperation of both networks such that the coupling of both sides yields the highest accuracy. Hence, the user-network parameters W_u depend on the item embeddings V, and likewise for the item-network. (2) The learning task is to estimate the parameters of the encoding networks. Once the encoding networks are learned, they encode users and items in a uniform way, no matter whether they are seen during training. In other words, new users and new items are no longer second-class citizens: they are encoded in exactly the same way as those in the training set.
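A minimal sketch of the coupled objective in Equation 4, with both encoders as small multilayer perceptrons; the layer sizes and the PyTorch realization are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Shared template for the item-network f and the user-network g (basic design).
    def __init__(self, in_dim, emb_dim=50, hidden=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, emb_dim))

    def forward(self, x):
        return self.net(x)

def matching_loss(f, g, X, Rows, R, C):
    # Weighted squared loss of Eq. 4: sum_ij c_ij (R_ij - <f(x_j), g(r_i)>)^2.
    # X: item features (n x d); Rows: user history rows r_i (m x n);
    # R: matching matrix (m x n); C: confidence weights (m x n).
    V = f(X)          # item embeddings, n x k
    U = g(Rows)       # user embeddings, m x k
    pred = U @ V.t()  # predicted matching scores, m x n
    return (C * (R - pred).pow(2)).sum()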
Comparison with CDL. The Collaborative Deep Learning (CDL) recently proposed by Wang et al. (2015) was another attempt to tackle the cold-start issue. This method leverages the item features by aligning the item encoder with the embeddings resulting from matrix factorization. In particular, the objective function is given as follows:
$$\sum_{ij} c_{ij} (R_{ij} - u_i^\top v_j)^2 + \lambda_v \sum_j \|v_j - f_e(\tilde{x}_j, \theta)\|^2 + \lambda_n \sum_j \|x_j - f_r(\tilde{x}_j, \theta)\|^2 + \lambda_u \sum_i \|u_i\|^2 + r(\theta) \qquad (5)$$
Here, a Stacked Denoising Autoencoder (SDAE) (Vincent et al., 2010) with parameters θ is used to encode the items, based on {x̃_j}, noisy versions of their features. Compared to our formulation, CDL has several limitations: (1) The objective is to balance the SDAE reconstruction error and the matching accuracy, which does not necessarily lead to improved recommendation. Tuning this balance also turns out to be tricky. (2) Only items are encoded, while the representations of the users are still obtained by matrix factorization. As a result, its expressive power in capturing user interest remains limited. (3) There are inconsistencies between known items and new ones: the embedding of known items results from a tradeoff between the matching accuracy and the fidelity to SDAE features, while the embedding of new items is purely based on SDAE encoding.
[Figure: (a) Basic Design, (b) Multi-level Design, (c) Multi-level Branching Design]
Our model consists of two networks, namely the item-network f and the user-network g. We went through a progressive procedure in designing their architectures, obtaining three different designs, from basic design, through multi-level design, to multi-level branching design. Each new design was motivated by the observation of certain limitations in the previous version.
The basic design, as shown in Figure 1(a), adopts the multilayer perceptron as the basic architecture, using tanh as the nonlinear activation function between layers.[1] The top layer of the item-network produces a vector f(x_j; W_v) for each item, while that of the user-network produces a dual vector g(r_i; W_u) for each user. During training, the loss layer takes their inner products and compares them with the ground truth R(i, j).
Each layer in these networks generates a vector representation. We observe that representations from different layers are complementary. Representations from lower layers tend to be closer to the inputs and preserve more information, while those from higher layers focus on deeper semantics. The representations from these levels have their respective values, as different users tend to focus on different aspects of an item. Following this intuition, we reach a multi-level design, as shown in Figure 1(b). In this design, dot products between dual embeddings at corresponding levels are aggregated to produce the final prediction.
There is an issue with the multi-level design: the output of each intermediate layer actually plays two roles. On one hand, it is the input to the next layer for further abstraction; on the other hand, it also serves as a facet to be matched with the other side. These two roles require different properties of the representations. Particularly, for the former role, the representation needs to preserve more information for higher-level abstraction; while for the latter, those parts related to the current level of matching need to be emphasized. To address this issue, we design a multi-level branching architecture, as shown in Figure 1(c). In this design, a matching branch is introduced to transform the representation at each level to a form that is more suitable for matching. This can also be considered as learning an alternative metric to measure the matchness between the embeddings.
[1] The choice of tanh as the activation function is based on empirical comparison.
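One side of the multi-level branching design can be sketched as follows: the main path abstracts the input level by level, a branch at each level produces the embedding used for matching, and the prediction is the sum of level-wise dot products between the dual embeddings. The (200, 200, 50) dimensions follow the configuration reported in Section 5.2; the branch shapes are assumptions.

import torch
import torch.nn as nn

class BranchingNet(nn.Module):
    # Main abstraction path plus a matching branch per level, so intermediate
    # layers are not forced to serve both abstraction and matching directly.
    def __init__(self, in_dim, dims=(200, 200, 50)):
        super().__init__()
        self.layers, self.branches = nn.ModuleList(), nn.ModuleList()
        prev = in_dim
        for d in dims:
            self.layers.append(nn.Linear(prev, d))
            self.branches.append(nn.Linear(d, d))  # matching branch at this level
            prev = d

    def forward(self, x):
        embs = []
        for layer, branch in zip(self.layers, self.branches):
            x = torch.tanh(layer(x))
            embs.append(branch(x))  # embedding used for level-wise matching
        return embs

def predict(item_net, user_net, x_item, r_user):
    # sum of dot products between dual embeddings at aligned levels
    return sum((ei * eu).sum(-1) for ei, eu in zip(item_net(x_item), user_net(r_user)))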
As we will show in our experiments, this design can considerably improve the prediction accuracy."}, {"section_index": "5", "section_name": "TRAINING WITH DUAL MINI-BATCHES", "section_text": "A distinctive aspect of our training algorithm is the use of dual mini-batches. Specifically, in each iteration, B_v items and B_u users are selected. In addition to the item features and user histories, the corresponding part of the matching matrix R will also be loaded and fed to the network. Here, the two batch sizes B_v and B_u can be different, and they should be chosen according to the sparsity of the matching matrix R, such that each dual mini-batch covers both positive and zero ratings.

The entire training procedure consists of two stages: pre-training and optimization. In the pre-training stage, we initialize the item-network with unsupervised training (Vincent et al., 2010) and the user-network randomly. The unsupervised training of the item-network allows it to capture the feature statistics. Then both networks are jointly refined in a layer-by-layer fashion. In particular, we first tune the one-level networks, taking the dot products of their outputs as the predictions. Subsequently, we stack the second layers on top and refine them in a similar way. Empirically, we found that this layer-wise refinement scheme provides better initialization. In the optimization stage, we adopt the SGD algorithm with momentum and use the dual mini-batch scheme presented above (see the sketch below). In this stage, the training is conducted in epochs. Each epoch, through multiple iterations, traverses the whole matching matrix R without repetition. The order of choosing mini-batches is arbitrary and is shuffled at the beginning of each epoch. Additional tricks such as dropout and batch normalization are employed to further improve the performance.

¹ The choice of tanh as the activation function is based on empirical comparison.

During the backward pass, the loss layer that compares the predictions with the ground-truth matchings produces two sets of gradients, for the items and for the users respectively. These gradients are then back-propagated along the respective networks. Note that when the multi-level designs (both with and without branching) are used, each intermediate layer receives gradients from two sources: those from the upper layers and those from the dual network (via the dot-product layer). Hence, the training of one network impacts that of the other."}
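The dual mini-batch scheme just described can be sketched as follows. This is a minimal NumPy illustration; the concrete batch sizes and the dense layout of the rating matrix are assumptions for clarity (a sparse representation would be used at the real scale of the data).

```python
import numpy as np

def dual_minibatches(R, batch_items, batch_users, rng):
    """Yield (item_ids, user_ids, R_sub) covering R once per epoch.

    R           : (num_users, num_items) matching matrix (dense for clarity)
    batch_items : B_v, number of items per dual mini-batch
    batch_users : B_u, number of users per dual mini-batch
    """
    num_users, num_items = R.shape
    users = rng.permutation(num_users)   # shuffled at the start of each epoch
    items = rng.permutation(num_items)
    for u0 in range(0, num_users, batch_users):
        for i0 in range(0, num_items, batch_items):
            u = users[u0:u0 + batch_users]
            i = items[i0:i0 + batch_items]
            # The sub-matrix of ratings for this user/item block.
            yield i, u, R[np.ix_(u, i)]

# Example: traverse a toy 100 x 80 matrix once.
rng = np.random.default_rng(0)
R = (rng.random((100, 80)) < 0.01).astype(np.float32)
for item_ids, user_ids, R_sub in dual_minibatches(R, 16, 32, rng):
    pass  # feed item_features[item_ids], user_histories[user_ids], R_sub
```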
{"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "1. CiteULike, constructed by Wang & Blei (2011), provides a list of researchers and the papers in which they are interested. Each paper comes with a text document that comprises both the title and the abstract. In total, it contains 5,551 researchers (as users) and 16,980 papers (as items) with 0.22% density. The task is to predict the papers that a researcher would like. 2. MovieLens+Posters is constructed based on the MovieLens 20M Dataset (Harper & Konstan, 2016), which provides about 20M user ratings on movies. For each movie, we collect a movie poster from TMDb and extract a visual feature therefrom using a convolutional neural network (Szegedy et al., 2016) as the item feature. Removing all those movies without posters and the users with fewer than 10 ratings, we obtain a dataset that contains 76,531 users and 14,101 items with 0.24% density. In this dataset, all ratings of 5 are considered as positive matchings. 3. Ciao is organized by Tang et al. (2012) from a product review site, where each product comes with a series of reviews. The reviews for each product are concatenated to serve as the item content. We removed the items with fewer than 5 rating users and the users with fewer than 10 ratings. This results in a dataset with 4,663 users and 12,083 items with 0.25% density. All ratings of 4.0 or above (the rating ranges from 0 to 5.0) are regarded as positive matchings."}, {"section_index": "7", "section_name": "5.1 EVALUATION", "section_text": "The performance of a recommender system can be assessed from different perspectives. In this paper, we follow Wang & Blei (2011) and perform the evaluation from the retrieval perspective. Specifically, a fraction of the rating entries are omitted in the training phase, and the algorithms being tested are used to predict those entries. As pointed out by Wang & Blei (2011), since the ratings are implicit feedback (Hu et al., 2008), i.e. some positive matchings are not reflected in the ratings, recall is more suitable than precision for measuring the performance. In particular, we use Recall@M averaged over all users as the performance metric. Here, for a certain user, Recall@M is defined as follows:

Recall@M = (the number of items the user likes in the top M recommendations) / (the total number of items the user likes).

Following Wang & Blei (2011), we consider two tasks, in-matrix prediction and out-matrix prediction. Specifically, we divide all items into two disjoint parts, known and unknown, by the ratio of 9 to 1. The in-matrix prediction task only considers known items. For this task, all rating entries are split into three disjoint sets, training, validation and testing, by the ratio 3:1:1. It is ensured that all items in the validation and testing sets have appeared in the training stage (just that part of their ratings were omitted). The out-matrix prediction task is to make predictions for the items that are completely unseen in the training phase. This task tests the generalization performance and the capability of handling the cold-start issue."}, {"section_index": "8", "section_name": "5.2 COMPARISON WITH OTHER METHODS", "section_text": "We compared our method, which we refer to as DualNet, with two representative methods from previous work: (1) Weighted Matrix Factorization (WMF) (Hu et al., 2008), a representative method for collaborative filtering (CF), and (2) Collaborative Deep Learning (CDL) (Wang et al., 2015), a hybrid method that combines deep encoding of the items and CF, which represents the latest advances in recommendation techniques.

On each dataset, we chose the design parameters for each method via grid search. The parameter combinations that attain the best performance on the validation set are used. For our DualNet method, we adopt a three-level branching configuration, where the embedding dimensions of each network, from bottom to top, are set to 200, 200, 50. For WMF, the latent dimension is set to 300 on CiteULike and 450 on the other datasets. For CDL, the best performance is attained when the structure of the SDAE is configured to be (2000, 1000, 300), with dropout ratio 0.1. Other design parameters of CDL are set as a = 1.0, b = 0.01, \lambda_u = 1, \lambda_v = 10, \lambda_n = 1000, \lambda_w = 0.0005.
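Before turning to the results, the Recall@M metric defined in Section 5.1 can be computed as in the following sketch. The ranking by predicted score and the exclusion of items already seen in training are bookkeeping details the text does not spell out, so they are assumptions here.

```python
import numpy as np

def recall_at_m(scores, liked, train_mask, M):
    """Average Recall@M over users.

    scores     : (num_users, num_items) predicted matching scores
    liked      : (num_users, num_items) boolean, held-out positive matchings
    train_mask : (num_users, num_items) boolean, entries seen in training
    """
    recalls = []
    s = np.where(train_mask, -np.inf, scores)  # never recommend training items
    for u in range(scores.shape[0]):
        n_liked = liked[u].sum()
        if n_liked == 0:
            continue
        top_m = np.argsort(-s[u])[:M]
        recalls.append(liked[u, top_m].sum() / n_liked)
    return float(np.mean(recalls))
```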
Note that on CiteULike, there are two ways to split the data. One is the scheme in (Wang et al., 2015), and the other is the scheme in (Wang & Blei, 2011), which is the one presented in the previous section. Note that in the former scheme, a fixed number of ratings from each user are selected for training. This may result in some testing items being missed in the training set. To provide a complete comparison with prior work, we use both schemes in our experiments, which are respectively denoted as CiteULike1 and CiteULike2.

Table 1: Comparison of performance on three datasets. The performance is measured with the metric Recall@M. We report the results where M is set to 50, 100, and 200.

              CiteULike1                    CiteULike2
          50        100       200       50        100       200
WMF       22.14%    32.58%    43.65%    40.45%    50.28%    59.95%
CDL       25.02%    36.57%    48.32%    39.49%    52.02%    64.41%
DualNet   30.41%    41.71%    52.24%    41.26%    53.80%    65.21%

              MovieLens                     Ciao
          50        100       200       50        100       200
WMF       37.14%    48.81%    60.25%    14.46%    19.66%    26.22%
CDL       38.11%    49.73%    61.00%    17.90%    24.55%    32.53%
DualNet   44.95%    59.15%    72.56%    17.94%    24.58%    32.52%

Table 1 compares the performance of WMF, CDL, and DualNet on all three datasets (four data splitting settings). From the results, we observed: (1) Our proposed DualNet method outperforms both WMF and CDL on all datasets. On certain datasets, the performance gains are substantial. For example, on MovieLens, we obtained average recalls of 44.95%, 59.15%, and 72.56% respectively when M = 50, 100, 200. Compared with what CDL achieves (38.11%, 49.73%, and 61.00%), the relative gains are around 18%. On the other datasets, the gains are also considerable. (2) The performance gains vary significantly across different datasets, as they are closely related to the relevance of the item features. In particular, when the item features are pertinent to the user interest, we may see remarkable improvement when those features are incorporated; otherwise, the performance gains are relatively smaller."}, {"section_index": "9", "section_name": "5.3 DETAILED STUDY", "section_text": "We conducted additional experiments on CiteULike to further study the proposed algorithm. In this study, we investigate the performance of out-matrix prediction, the impact of various modeling choices, e.g. multi-level branching, as well as the influence of training tactics.

Out-matrix prediction. As mentioned, the out-matrix prediction task examines an algorithm's capability of handling new items, i.e. those unseen in the training stage. For this task, we compared CDL and DualNet on the CiteULike dataset. WMF is not included here as it is not able to handle new items. Table 2 shows the results. It can be clearly seen that DualNet outperforms CDL by a notable margin. For example, Recall@50 increases from 32.18% to 47.51%; the relative gain is 47.6%, a very remarkable improvement. The strong generalization performance demonstrated here is, to a large extent, ascribed to our basic formulation, where the encoding networks uniformly encode both known and new items.

Table 2: Comparison for out-matrix predictions on CiteULike.

          Recall@50   Recall@100   Recall@200
CDL       32.18%      43.90%       56.36%
DualNet   47.51%      56.59%       66.36%

Multi-level branching. We compared the three different designs presented in Section 3: the basic design, the multi-level design, and the multi-level branching design. From the results shown in Table 3, we observe limited improvement of the multi-level design over the basic one. More significant performance gains are observed when the branching design is introduced. This shows that the branches contribute a lot to the overall performance.

Table 3: Comparison of different network architecture designs on CiteULike.

                        Recall@10   Recall@50   Recall@100
basic                   15.86%      38.86%      51.03%
multi-level             16.89%      39.92%      51.26%
multi-level branching   17.43%      40.31%      51.78%
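The out-matrix setting studied above relies on the fact that a new item can be embedded purely from its content features by the item-network and then matched against every user. A minimal sketch, reusing the dual-network interfaces assumed in the earlier sketch:

```python
import torch

def score_new_items(item_net, user_net, new_item_features, user_histories):
    """Out-matrix (cold-start) scoring for items unseen during training.
    Returns a (num_users, num_new_items) score matrix."""
    with torch.no_grad():
        item_embs = item_net(new_item_features)  # per-level embeddings
        user_embs = user_net(user_histories)
        scores = 0.0
        for e_v, e_u in zip(item_embs, user_embs):
            scores = scores + e_u @ e_v.T        # sum of per-level dot products
    return scores
```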
Noise injection. Sometimes we noticed overfitting during training, i.e. the validation performance gets worse while the training loss is still decreasing. To tackle this issue, we inject noise into the inputs, i.e. we set a fraction of the input entries to zero. Generally, we observed that noise injection has little effect on Recall@M for in-matrix predictions when M < 30. However, it can considerably increase the recall for large M values or for out-matrix predictions. In particular, on CiteULike, it increases in-matrix Recall@300 from 67.3% to 71.2%, and out-matrix Recall@50 from 38.6% to 47.5%.

Unsuccessful tactics. Finally, we report some tactics that we tried and found not to work. (1) Replacing the weighted Euclidean loss with a logistic loss leads to substantial degradation of the performance (sometimes by up to 20%). Also, when using the logistic loss, we observed severe overfitting. Rendle et al. (2009) proposed Bayesian Personalized Ranking (BPR), which directly targets ranking. We tested this on CiteULike with parameters tuned to obtain the optimal performance. Our experimental results showed that its performance is similar to that of WMF. In particular, Recall@50, 100, 200 for BPR are respectively 39.11%, 49.16%, 59.96%, while those for WMF are 40.45%, 50.25%, 59.95%.

(2) Motivated by the observation that positive ratings are sparse, we tried a scheme that ignores a fraction of the dual mini-batches that correspond to all-zero ratings, with the aim of speeding up the training. Whereas this reduces the time needed to run an epoch, it takes significantly more epochs to reach the same level of performance. As a result, the overall runtime is even longer.

This paper presented a new method for predicting the interactions between users and items, called Collaborative Deep Embedding. This method uses dual networks to encode users and items respectively. The user-network and item-network are trained jointly, in a collaborative manner, based on two streams of data. We obtained considerable performance gains over the state of the art consistently on three large datasets. The proposed method also demonstrated superior generalization performance (on out-matrix predictions). This improvement, from our perspective, is ascribed to three important reasons: (1) the expressive power of deep models for capturing the rich variations in user interests, (2) the collaborative training process that encourages closely coupled embeddings, and (3) an objective function that directly targets the prediction accuracy.

We consider this work a significant step that brings the power of deep models to relational modeling. However, the space of deep relational modeling remains wide open; many questions remain to be answered. In the future, we plan to investigate more sophisticated network architectures and extend the proposed methodology to applications that involve more than two domains.

Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.

Andriy Mnih and Ruslan R Salakhutdinov. Probabilistic matrix factorization. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis (eds.), Advances in Neural Information Processing Systems 20, pp. 1257-1264. Curran Associates, Inc., 2008. URL http://papers.nips.cc/paper/3208-probabilistic-matrix-factorization.pdf

Michael J Pazzani and Daniel Billsus. Content-based recommendation systems. In The Adaptive Web, pp. 325-341.
Springer, 2007.\nSteffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. In Proceedings of the twenty-fifth conference on. uncertainty in artificial intelligence, pp. 452-461. AUAI Press, 2009.\nFrancesco Ricci, Lior Rokach, and Bracha Shapira. Introduction to recommender systems handbook Springer, 2011.\nMalcolm Slaney, Kilian Weinberger, and William White. Learning a metric for music similarity. Ir International Symposium on Music Information Retrieval (SMIR). 2008\nRichard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th international conference on machine learning (1CML-11), pp. 129-136, 2011.\nChristian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016\nJ. Tang, H. Gao, and H. Liu. mTrust: Discerning multi-faceted trust in a connected world. In Proceedings of the fifth ACM international conference on Web search and data mining, pp. 93-102 ACM, 2012.\nAaron Van den Oord. Sander Dieleman, and Benjamin Schrauwen. Deep content-based musi recommendation. In Advances in Neural Information Processing Systems, pp. 2643-2651, 2013.\nPascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371-3408, 2010..\nHao Wang, Naiyan Wang, and Dit- Yan Yeung. Collaborative deep learning for recommender systems In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1235-1244. ACM, 2015.\nGeoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012"}]
Hk8rlUqge
[{"section_index": "0", "section_name": "ABSTRACT", "section_text": "We investigate deep generative models that can exchange multiple modalities bi. directionally, e.g., generating images from corresponding texts and vice versa. Re. cently, some studies handle multiple modalities on deep generative models, such. as variational autoencoders (VAEs). However, these models typically assume tha1. modalities are forced to have a conditioned relation, i.e., we can only generate. modalities in one direction. To achieve our objective, we should extract a join. representation that captures high-level concepts among all modalities and througl. which we can exchange them bi-directionally. As described herein, we propose a. joint multimodal variational autoencoder (JM VAE), in which all modalities are in dependently conditioned on joint representation. In other words, it models a join distribution of modalities. Furthermore, to be able to generate missing modal. ities from the remaining modalities properly, we develop an additional method. JMVAE-kl, that is trained by reducing the divergence between JMVAE's encode. and prepared networks of respective modalities. Our experiments show that oui. proposed method can obtain appropriate joint representation from multiple modal-. ities and that it can generate and reconstruct them more properly than conventiona. VAEs. We further demonstrate that JMVAE can generate multiple modalities bi-. directionally."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In our world, information is represented through various modalities. While images are represented by pixel information, these can also be described with text or tag information. People often exchange such information bi-directionally. For instance, we can not only imagine what \"a young female with a smile who does not wear glasses\"' looks like, but also add this caption to a corresponding photo- graph. To do so, it is important to extract a joint representation that captures high-level concepts among all modalities. Then we can bi-directionally generate modalities through the joint repre- sentations. However, each modality typically has a different kind of dimension and structure, e.g., images (real-valued and dense) and texts (discrete and sparse). Therefore, the relations between each modality and the joint representations might become high nonlinearity. To discover such relations. deep neural network architectures have been used widely for multimodal learning (Ngiam et al. 2011; Srivastava & Salakhutdinov, 2012). The common approach with these models to learn joint representations is to share the top of hidden layers in modality specific networks. Among them generative approaches using deep Boltzmann machines (DBMs) (Srivastava & Salakhutdinov,2012 Sohn et alL2014) offer the Ortant advantage that these can. generate modalities bi-directionally\nRecently, variational autoencoders (VAEs) (Kingma & Welling,2013; Rezende et al., 2014) have. been proposed to estimate flexible deep generative models by variational inference methods. These models use back-propagation during training, so that it can be trained on large-scale and high. dimensional dataset compared with DBMs with MCMC training. Some studies have addressed tc. handle such large-scale and high-dimensional modalities on VAEs, but they are forced to model con ditional distribution (Kingma et al.,2014; Sohn et al.,2015;Pandey & Dukkipati,2016). Therefore it can only generate modalities in one direction. 
For example, we cannot obtain generated image. from texts if we train the likelihood of texts given images. To generate modalities bi-directionally"}, {"section_index": "2", "section_name": "JOINT MULTIMODAL LEARNING WITH DEEP GENERA- TIVE MODELS", "section_text": "JMVAE Reconstructimages Input image Male Not Young Eyeglasses Not Smiling Reconstruetimages with changed attributes Average face| Random faces Generate Male:-1 Generate attributes Eyeglasses:-1 images Smiling:1 Attributes\nJMVAE Input image. Reconstructimages Male Not Young Eyeglasses Not Smiling Reconstruct images with changed attributes Average face Random faces Generate Male:-1 Generate attributes Eyeglasses:-1 images Smiling:1 Attributes\nFigure 1: Various images and attributes generated from an input image. We used the CelebA dataset (Liu et al.. 2015) to train and test models in this example. Each yellow box corresponds to different processes. All. processes are estimated from a single generative model: the joint multimodal variational autoencoder (JMVAE) which is our proposed model.\nall modalities should be treated equally under the learned joint representations, which is the same a previous multimodal learning models before VAEs.\nAs described in this paper, we develop a novel multimodal learning model with VAEs, which we call a joint multimodal variational autoencoder (JMVAE). The most significant feature of our model is that all modalities, x and w (e.g., images and texts), are conditioned independently on a latent variable z corresponding to joint representation, i.e., the JMVAE models a joint distribution of all modalities, p(x, w). Therefore, we can extract a high-level representation that contains all informa- tion of modalities. Moreover, since it models a joint distribution, we can draw samples from both p(x|w) and p(w|x). Because, at this time, modalities that we want to generate are usually missing the inferred latent variable becomes incomplete and generated samples might be collapsed in the testing time when missing modalities are high-dimensional and complicated. To prevent this issue, we propose a method of preparing the new encoders for each modality, p(z[x) and p(z|w), and reducing the divergence between the multimodal encoder p(z|x, w), which we call JMVAE-k1. This contributes to more effective bi-directional generation of modalities, e.g., from face images to texts (attributes) and vice versa (see Figure 1).\nThe main contributions of this paper are as follows:"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "The common approach of multimodal learning with deep neural networks is to share the top of hidden layers in modality specific networks. Ngiam et al. (2011) proposed this approach with deep autoencoders (AEs) and found that it can extract better representations than single modality settings. Srivastava & Salakhutdinoy (2012) also took this idea but used deep Boltzmann machines (DBMs) (Salakhutdinov & Hinton, 2o09). DBMs are generative models with undirected connections based on maximum joint likelihood learning of all modalities. Therefore, this model can generate modali- ties bi-directionally. Sohn et al. 
(2014) improved this model to exchange multiple modalities effec- tively, which are based on minimizing the variation of information and JMVAE-kl in ours can be regarded as minimizing it with variational learning on parameterized distributions (see Section 3.3\nWe introduce a joint multimodal variational autoencoder (JMVAE), which is the first study to train joint distribution of modalities with VAEs.. We propose an additional method (JMVAE-kl), which prevents generated samples from being collapsed when some modalities are missing. We experimentally confirm that this method solves this issue. We show qualitatively and quantitatively that JMVAE can extract appropriate joint distri-. bution and that it can generate and reconstruct modalities similarly or more properly than conventional VAEs. We demonstrate that the JMVAE can generate multiple modalities bi-directionally even if. these modalities have completely different kinds of dimensions and structures, e.g., high-. dimentional color face images and low-dimentional binary attributes..\nRecently, VAEs (Kingma & Welling, 2013; Rezende et al., 2014) are used to train such high dimensional modalities.Kingma et al.(2014);Sohn et al.(2015) propose conditional VAEs (CVAEs), which maximize a conditional log-likelihood by variational methods. Many studies are based on CVAEs to train various multiple modalities such as handwriting digits and labels (Kingma et al.], 2014; Sohn et al., 2015), object images and degrees of rotation (Kulkarni et al. 2015), face images and attributes (Larsen et al., 2015; Yan et al.,2015), and natural images anc captions (Mansimov et al.,|2015). The main features of CVAEs are that the relation between modal ities is one-way and a latent variable does not contain the information of a conditioned modality' which are unsuitable for our objective\nPandey & Dukkipati(2016) proposed a conditional multimodal autoencoder (CMMA), which also maximizes the conditional log-likelihood. The difference between CVAEs is that a latent variable is connected directly from a conditional variable, i.e., these variables are not independent. Moreover. this model forces the latent representation from an input to be close to the joint representation from. multiple inputs, which is similar to JMVAE-kl. However, the CMMA still considers that modalities. are generated in fixed direction. This is the most different part from ours.."}, {"section_index": "4", "section_name": "3 METHODS", "section_text": "This section first introduces the algorithm of VAEs briefly and then proposes a novel multimodal. learning model with VAEs, which we call the joint multimodal variational autoencoder (JMVAE).\nGiven observation variables x and corresponding latent variables z, their generating processes are. definable as z ~ p(z) = N(0,I) and x ~ pe(x[z), where 0 is the model parameter of p. The. objective of VAEs is maximization of the marginal distribution p(x) = I pe(x|z)p(z)dx. Because. this distribution is intractable, we instead train the model to maximize the following lower bound o. the marginal distribution Lv AE(x) as.\nlogp(x) -DkL(qo(z[x)|[p(z)) + Eqo(z|x)[logpe(xz)] = LvAE(x)\nTo optimize the lower bound (x) with respect to parameters 0, $, we estimate gradients of Equa tion 1using stochastic gradient variational Bayes (SGVB). If we consider qo(z|x) as Gaussian distribution N(z; , diag(o2)), where = {, o2}, then we can reparameterize z ~ qo(z|x) to z = + O e, where e ~ N(0, I). 
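In code, this reparameterization (z = \mu + \sigma \odot \epsilon with \epsilon ~ N(0, I)) looks as follows; parameterizing the encoder output as a log-variance is an implementation convention we assume here, not something specified in the text.

```python
import torch

def sample_z(mu, log_var):
    """Reparameterize z ~ N(mu, diag(sigma^2)) as z = mu + sigma * eps,
    so that gradients flow through mu and log_var (SGVB)."""
    eps = torch.randn_like(mu)   # eps ~ N(0, I) carries all the randomness
    return mu + torch.exp(0.5 * log_var) * eps
```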
Therefore, we can estimate the gradients of the neg- ative reconstruction term in Equation[ with respect to 0 and as Ve,Eq(z|x)[log pe(x|z)] = EN(e;o,1)[Ve, log pe(z[ + O e)]. Because the gradients of the regularization term are solvable analytically, we can optimize Equation 1 with standard stochastic optimization methods.\nNext, we consider i.i.d. dataset (X, W) = {(x1, w1), ..., (x, wn)}, where two modalities x and w have different kinds of dimensions and structures. Our objective is to generate two modalities bi-directionally. For that reason, we assume that these are conditioned independently on the same latent concept z: joint representation. Therefore, we assume their generating processes as z ~ p(z) and x, w ~ p(x, wz) = pe,(xz)pew (w|z), where 0x and Ow represent the model parameters of each independent p. Figure[2(a) shows a graphical model that represents generative processes. One can see that this models joint distribution of all modalities, p(x, w). Therefore, we designate this model as a joint multimodal variational autoencoder (JMVAE).\nwhere qo(z|x) is an approximate distribution of posterior p(z|x) and is the model parameter of q We designate qo(z|x) as encoder and pe(x z) as decoder. Moreover, in Equation 1] the first term represents a regularization. The second one represents a negative reconstruction error..\nFigure 2: (a) Graphical model of the JMVAE. Gray circles represent observed variables. The white one denotes a latent variable. (b) Two approaches to estimate encoders with a single input, q(z[x) and q(z|w), on the JMVAE: left, make modalities except an input modality missing (JMVAE-zero); right, prepare encoders that have a single input and make them close to the JMVAE encoder (JMVAE-kl).\nConsidering an approximate posterior distribution as qo(z|x, w), we can estimate a lower bound o. the log-likelihood log p(x, w) as follows:.\nPe(x, w, z LJM(x,w) qo(zx, w -DkL(qo(z|x,w)|p(z)) +Eqs(z|x,w)[log Pex(x|z)] + Eqs(z|x,w)[log Pew(w\nWe can apply the SGVB to Equation[3]just as Equation1 so that we can parameterize the encoder and decoder as deterministic deep neural networks and optimize them with respect to their param-. eters, Ox, Ow, and . Because each modality has different feature representation, we should set different networks for each decoder, pox (x|z) and pw(w|z). The type of distribution and corre-. sponding network architecture depends on the representation of each modality, e.g., Gaussian when the representation of modality is continuous, and a Bernoulli when it is a binary value..\nUnlike original VAEs and CVAEs, the JMVAE models joint distribution of all modalities. In this model, modalities are conditioned independently on a joint latent variable. Therefore, we can extract better representation that includes all information of modalities. Moreover, we can estimate both marginal distribution and conditional distribution in bi-directional, so that we can not only obtain images reconstructed themselves but also draw texts from corresponding images and vice versa. Additionally, we can extend JMVAEs to handle more than two modalities such as p(x, w1, w2, ...) in the same learning framework."}, {"section_index": "5", "section_name": "3.3 INFERENCE MISSING MODALITIES", "section_text": "In the JMVAE, we can extract joint latent features by sampling from the encoder qo(z|x, w) a testing time. Our objective is to exchange modalities bi-directionally, e.g., images to texts and vice versa. 
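Before turning to the missing-modality issue, the following sketch summarizes the generative pieces defined so far: the joint encoder q_\phi(z|x, w), the two decoders, and the joint lower bound of Equation 3. The network interfaces, the Gaussian encoder, and the Bernoulli likelihoods for both modalities are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def joint_elbo(enc_xw, dec_x, dec_w, x, w):
    """Monte-Carlo estimate of L_JM(x, w) in Eq. (3): two reconstruction
    terms, one per modality, minus the KL regularization to the prior."""
    mu, log_var = enc_xw(x, w)                               # q(z|x,w)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()    # reparameterized
    log_px = -F.binary_cross_entropy(dec_x(z), x, reduction='none').sum(-1)
    log_pw = -F.binary_cross_entropy(dec_w(z), w, reduction='none').sum(-1)
    # Closed-form KL(q(z|x,w) || N(0, I)), the regularization term.
    kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).sum(-1)
    return (log_px + log_pw - kl).mean()
```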
At test time, the modalities that we want to sample are missing, so the inputs for these modalities are set to zero (the left panel of Figure 2(b)). The same is true of reconstructing a modality only from itself. This is a natural way in discriminative multimodal settings to estimate samples from unimodal information (Ngiam et al., 2011). However, if the missing modalities are high-dimensional and complicated, such as natural images, then the inferred latent variable becomes incomplete and the generated samples might collapse.

We propose a method to solve this issue, which we designate as JMVAE-kl. Moreover, we describe the former way as JMVAE-zero to distinguish it. Suppose that we have encoders with a single input, q_{\phi_x}(z|x) and q_{\phi_w}(z|w), where \phi_x and \phi_w are parameters. We would like to train them by bringing these encoders close to the encoder q_\phi(z|x, w) (the right panel of Figure 2(b)). Therefore, the objective function of JMVAE-kl becomes

L_{JMkl(\alpha)}(x, w) = L_{JM}(x, w) - \alpha [D_{KL}(q_\phi(z|x, w) || q_{\phi_x}(z|x)) + D_{KL}(q_\phi(z|x, w) || q_{\phi_w}(z|w))],   (4)

where \alpha is a factor that regulates the KL divergence terms.

From another viewpoint, maximizing Equation 4 can be regarded as minimizing the variation of information with variational learning on parameterized distributions (proven and derived in Appendix A). The variation of information, a measure of the distance between two variables, is written as -E_{p_D(x,w)}[log p(x|w) + log p(w|x)], where p_D is the data distribution. It is apparent that the variation of information is the sum of two negative conditional log-likelihoods. Therefore, minimizing the variation of information contributes to appropriate bi-directional exchange of modalities. Sohn et al. (2014) also train their model to minimize the VI for the same objective as ours; however, they use DBMs with MCMC training.

This section presents an evaluation of the qualitative and quantitative performance and confirms the functionality of the JMVAE in practice.

MNIST is not a dataset for the multimodal setting. In this work, we use it as a toy problem of multimodal learning: we consider handwriting images and corresponding digit labels as two different modalities. We used 50,000 examples as the training set and the remaining 10,000 as the test set.

CelebA consists of 202,599 color face images and corresponding 40 binary attributes such as male, eyeglasses, and mustache. In this work, we regard them as two modalities. This dataset is challenging because the two modalities have completely different kinds of dimensions and structures. Beforehand, we cropped the images to squares, resized them to 64 x 64, and normalized them. From the dataset, we chose the 191,899 images in which a face is identifiable by OpenCV and used them for our experiment. We used 90% of the dataset as the training set and the remaining 10% as the test set.

For MNIST, we considered images as x \in R^{28x28} and corresponding labels as w \in {0, 1}^{10}. We prepared two networks, each with two dense layers of 512 hidden units using leaky rectifiers, shared the tops of these layers, and mapped them into 64 hidden units.
Moreover, we prepared two networks each with three dense layers of 512 units and set p(x z) as Bernoulli and p(w[z) as categorical distribution whose output layer is softmax.We used warm-up (Bowman et al., 2015; Sonderby et al., 2016), which first forces training only of the term of the negative reconstruction error and then gradually increases the effect of the regularization term to prevent local minima during early training. We increased this term linearly during the first N epochs as with Sonderby et al. (2016). We set Nt = 200 and trained for 500 epochs on MNIST. Moreover, same asBurda et al. (2015); Sonderby et al. (2016), we resampled the binarized training values randomly from MNIST for each epoch to prevent over-fitting.\nFor CelebA, we considered face images as x E R32x32x3 and corresponding attributes as w { -1, 1}40. We prepared two networks with layers (four convolutional and a flattened layers for and two dense layers for w) with ReLU and shared the top of each layers and mapped them into 12 units. For the decoder, we prepared two networks, with a dense and four deconvolutional layers fo x and three dense layers for w, and set Gaussian distribution for decoder of both modalities, wher the variance of Gaussian was fixed to 1 for the decoder of w. In CelebA settings, we combine JMVAE with generative adversarial networks (GANs) (Goodfellow et al.,2014) to generate cleare images. We considered the network of p(x|z) as generator in GAN, then we optimized the GAN los with the lower bound of the JMVAE, which is the same way as a VAE-GAN model (Larsen et al 2015). As presented herein, we describe this model as JMVAE-GAN. We set Nt = 20 and traine for 100 epochs on CelebA.\nTable 1: Evaluation of test log-likelihood. All models are trained and tested on MNIST. a is a coefficient o regularization term in JMVAE-k1 (Equation4): left, marginal log-likelihood; right, conditional log-likelihood\n< logp(x|w) log p(x) multiple single multiple single CVAE -83.80 VAE -86.91 CMMA -86.12 JMVAE-zero -86.89 -86.89 JMVAE-zero -84.64 -4838 JMVAE-kl, = 0.01 -86.89 -86.55 JMVAE-kl, a = 0.01 -84.61 -129.6 JMVAE-kl, = 0.1 -86.86 -86.73 JMVAE-kl, a = 0.1 -84.72 -126.0 JMVAE-kl, q = 1 -89.20 -89.20 JMVAE-kl, a = 1 -86.97 -112.7\nTable 2: Evaluation of log-likelihood. Models are trained and tested on CelebA. We trained JMVAE-kl and set Q = 0.1: left, marginal log-likelihood; right, conditional log-likelihood (with the multiple lower bound)\nlog p(x) < logp(xw multiple single CVAE-GAN -4152 VAE-GAN -4439 CMMA-GAN -4147 JMVAE-GAN -4141 -4144 JMVAE-GAN -4130"}, {"section_index": "6", "section_name": "4.3.1 EVALUATION METHOD", "section_text": "For this experiment, we estimated test log-likelihood to evaluate the performance of model. This. estimate roughly corresponds to negative reconstruction error. Therefore, higher is better. From this performance, we can find that not only whether the JMVAE can generate samples properly but also whether it can obtain joint representation properly. If the log-likelihood of a modality is low,. representation for this modality might be hurt by other modalities. By contrast, if it is the same or higher than model trained on a single modality, then other modalities contribute to obtaining appropriate representation.\nWe estimate the test marginal log-likelihood and test conditional log-likelihood on JMVAE. We compare the test marginal log-likelihood against VAEs (Kingma & Welling.2013: Rezende et al. 
2014) and the test conditional log-likelihood against CVAEs (Kingma et al.,[2014;[Sohn et al.],[2015] and CMMAs (Pandey & Dukkipati.2016). On CelebA, we combine all competitive models with GAN and describe them as VAE-GAN, CVAE-GAN, and CMMA-GAN. For fairness, architectures and parameters of these competitive models were set to be as close as possible to those of JMVAE.\nWe calculate the importance weighted estimator (Burda et al., 2015) from lower bounds at testing time because we would like to estimate the true test log-likelihood from lower bounds. To es timate the test marginal log-likelihood p(w) of the JMVAE, we use two possible lower bounds sampling from qo(z|x, w) or qox (z|x). We describe the former lower bound as the multiple lower bound and the latter one as the single lower bound. When we estimate the test conditional log likelihood log p(x|w), we also use two lower bounds, each of which is estimated by sampling fron qo(z|x, w) (multiple) or qow (z|w) (single) (see AppendixB]for more details). To estimate the single lower bound, we should approximate the single encoder (qox(z|x) or qow (z|w)) by JMVAE-zerc or JMVAE-kl. When the value of log-likelihood with the single lower bound is the same or large than that with the multiple lower bound, the approximation of the single encoder is good. Note tha original VAEs use a single lower bound and that CVAEs and CMMAs use a multiple lower bound."}, {"section_index": "7", "section_name": "4.3.2 MNIST", "section_text": "Our first experiment evaluated the test marginal log-likelihood and compared it with that of the VAE on MNIST dataset. We trained the model with both JMVAE-zero and JMVAE-kl and confirmed these differences. As described in Section4.3.1 we have two possible ways of estimating the marginal log-likelihood of the JMVAE, i.e., multiple and single lower bounds. The left of Table 1shows the test marginal log-likelihoods of the VAE and JMVAE. It is apparent that log-likelihood of the JMVAE-zero is the same or slightly better than that of the VAE. In the case of the log- likelihood of JMVAE-kl, the log-likelihood becomes better as a is small. Especially, JMVAE-kl with a = 0.01 and single lower bound archives the highest log-likelihood in Table 1 If a is 1, however, then the test log-likelihood on JMVAE-k1 becomes much lower. This is because the influence of the regularization term becomes strong as a is large.\nFigure 3: Visualizations of 2-D latent representation. The network architectures are the same as those in Section 4.3] except that the dimension of the top hidden layer is forced into 2. Points with different colors correspond to the digit labels. These were sampled from q(z|x) in the VAE and q(z|x, w) in both the CVAE and JMVAE. We used JMVAE-zero as the JMVAE.\nNext, we evaluated the test conditional log-likelihood and compared it with that of the CVAE anc CMMA conditioned on w. As in the case of the marginal log-likelihood, we can estimate the JMVAE's conditional log-likelihood by both the single and multiple lower bound. The single bounc can be estimated using JMVAE-zero or JMVAE-kl. The right of Table[1|shows the test conditiona log-likelihoods of the JMVAE, CVAE, and CMMA. It is apparent that the CVAE achieves the highes log-likelihood. Even so, in the case of multiple bound, log-likelihoods with both JMVAE-zero anc JMVAE-kl (except = 1) outperform that of the CMMA.\nIt should be noted that the log-likelihood with JMVAE-zero and single bound is significantly low. 
As described in Section 3.3, this is because the modality w is missing as input. By contrast, it is apparent that the log-likelihood with JMVAE-kl is improved significantly over that with JMVAE-zero. This shows that JMVAE-kl solves the issue of missing modalities (we can also find this result in the generated images; see Appendix E). Moreover, we find that this log-likelihood becomes better as \alpha grows, which is opposite to the other results. Therefore, there is a trade-off between whether each modality can be reconstructed properly and whether multiple modalities can be exchanged properly, and this trade-off can be regulated by \alpha."}, {"section_index": "8", "section_name": "4.3.3 CELEBA", "section_text": "In this section, we used the CelebA dataset to evaluate the JMVAE. Table 2 presents the evaluations of marginal and conditional log-likelihood. From this table, it is apparent that the values of both marginal and conditional log-likelihood with JMVAEs are larger than those with the other competitive methods. Moreover, comparison with Table 1 shows that the improvement on CelebA is greater than that on MNIST, which suggests that joint representation with multiple modalities contributes to improvement of the quality of reconstruction and generation in the case in which an input modality is high-dimensional and complicated."}, {"section_index": "9", "section_name": "4.4.1 JOINT REPRESENTATION ON MNIST", "section_text": "In this section, we first evaluate whether the JMVAE can obtain a joint representation that includes the information of both modalities. Figure 3 shows the visualization of latent representations with the VAE, CVAE, and JMVAE on MNIST. It is apparent that the JMVAE obtains a more discriminable latent representation by adding digit label information. Figure 3(b) shows that, in spite of using multimodal information as the JMVAE does, the points in the CVAE are distributed irrespective of the labels, because CVAEs force the latent representation to be independent of the label information, i.e., obtaining a joint representation is not an objective of CVAEs.

Next, we confirm that JMVAE-GAN on CelebA can generate images from attributes. Figure 4(a) portrays generated faces conditioned on various attributes. We find that we can generate an average face for each attribute and various random faces conditioned on a certain set of attributes. Figure 4(b) shows that samples are gathered for each attribute and that the locations of each variation are the same irrespective of attributes. From these results, we find that manifold learning of the joint representation with images and attributes works well.

Figure 4: (a) Generation of average faces and corresponding random faces. We first set all values of the attributes {-1, 1} randomly and designate them as Base. Then, we choose an attribute that we want to set (e.g., Male, Bald, Smiling) and change its value in Base to 2 (or -2 if we want to set 'Not'). Each column corresponds to the same attribute according to the legend. Average faces are generated from p(x|z_mean), where z_mean is the mean of q(z|w). Moreover, we can obtain various images conditioned on the same values of attributes such as x ~ p(x|z), where z = z_mean + \zeta \odot \epsilon, \epsilon ~ N(0, I), and \zeta is a parameter which determines the range of the variance. In this figure, we set \zeta = 0.6. Each row in the random faces has the same \epsilon. (b) PCA visualizations of the latent representation. Colors indicate which attribute each sample is conditioned on.
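The sampling procedure described in the caption of Figure 4 can be sketched as follows; the encoder and decoder interfaces are illustrative assumptions.

```python
import torch

def faces_from_attributes(enc_w, dec_x, w, zeta=0.6, n_samples=8):
    """Generate an average face and random faces conditioned on attributes w.

    enc_w : single-input encoder q(z|w) returning (mean, log_var)
    dec_x : image decoder p(x|z)
    """
    z_mean, _ = enc_w(w)
    average_face = dec_x(z_mean)                      # x ~ p(x | z_mean)
    eps = torch.randn(n_samples, *z_mean.shape[1:])   # same eps reused per row
    random_faces = dec_x(z_mean + zeta * eps)         # z = z_mean + zeta * eps
    return average_face, random_faces
```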
[Figure 5 image: input portraits with their generated attributes (e.g., Male: 0.95, Eyeglasses: -0.99, Young: 0.30, Smiling: -0.97) and reconstructions under varied attributes.]

Figure 5: Portraits of the Mona Lisa (upper) and Mozart (lower), their generated attributes, and reconstructed images conditioned on varied attributes, according to the legend. We cropped and resized them in the same way as CelebA. The procedure is as follows: generate the corresponding attributes w from an unlabeled image x; generate an average face x_mean from the attributes w; select the attributes which we want to vary and change their values; generate the changed average face x'_mean from the changed attributes; and obtain a changed reconstruction image x' by x + x'_mean - x_mean.

Finally, we demonstrate that JMVAE-GAN can generate bi-directionally between faces and attributes. Figure 5 shows that JMVAE-GAN can generate both attributes and changed images conditioned on various attributes from images which had no attribute information. This way of generating an image by varying attributes is similar to that of the CMMA (Pandey & Dukkipati, 2016). However, the CMMA cannot generate attributes from an image, because it only generates images from attributes, in one direction."}, {"section_index": "10", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "In this paper, we introduced a novel multimodal learning model with VAEs, the joint multimodal variational autoencoder (JMVAE). In this model, modalities are conditioned independently on a joint representation, i.e., the model learns a joint distribution of all modalities. We further proposed a method, JMVAE-kl, of reducing the divergence between the JMVAE's encoder and a prepared encoder for each modality, to prevent generated samples from collapsing when modalities are missing. We confirmed that the JMVAE can obtain appropriate joint representations and high log-likelihoods on MNIST.

In future work, we would like to evaluate the multimodal learning performance of JMVAEs using various multimodal datasets, such as ones containing three or more modalities."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015.

Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. arXiv preprint arXiv:1511.02793, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.

Ruslan Salakhutdinov and Geoffrey E Hinton. Deep Boltzmann machines. In AISTATS, volume 1, pp. 3, 2009.
Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, Eric Battenberg, Jack Kelly, Jeffrey De Fauw, Michael Heilman, Diogo Moitinho de Almeida, Brian McFee, Hendrik Weideman, Gábor Takács, Peter de Rivaz, Jon Crall, Gregory Sanders, Kashif Rasul, Cong Liu, Geoffrey French, and Jonas Degrave. Lasagne: First release. August 2015. URL http://dx.doi.org/10.5281/zenodo.27878

Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 689-696, 2011.

The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.

Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. arXiv preprint arXiv:1512.00570, 2015."}, {"section_index": "12", "section_name": "A RELATION BETWEEN THE OBJECTIVE OF JMVAE-KL AND THE VARIATION OF INFORMATION", "section_text": "The variation of information can be expressed as -E_{p_D(x,w)}[log p(x|w) + log p(w|x)], where p_D is the data distribution. In this equation, we specifically examine the sum of the two negative log-likelihoods and do not consider the expectation in this derivation. We can calculate lower bounds on these log-likelihoods as follows:

log p(x|w) + log p(w|x)
  >= E_{q(z|x,w)}[log (p(x|z) p(z|w)) / q(z|x,w)] + E_{q(z|x,w)}[log (p(w|z) p(z|x)) / q(z|x,w)]
  = E_{q(z|x,w)}[log p(x|z)] + E_{q(z|x,w)}[log p(w|z)] - D_{KL}(q(z|x,w) || p(z|x)) - D_{KL}(q(z|x,w) || p(z|w))
  = L_{JM}(x,w) - [D_{KL}(q(z|x,w) || p(z|x)) + D_{KL}(q(z|x,w) || p(z|w))] + D_{KL}(q(z|x,w) || p(z)).

Approximating the intractable posteriors p(z|x) and p(z|w) by the parameterized encoders q(z|x) and q(z|w), we obtain

L_{JM}(x,w) - [D_{KL}(q(z|x,w) || q(z|x)) + D_{KL}(q(z|x,w) || q(z|w))] + D_{KL}(q(z|x,w) || p(z))
  = L_{JMkl(1)}(x,w) + D_{KL}(q(z|x,w) || p(z))
  >= L_{JMkl(1)}(x,w),

which is the JMVAE-kl objective with \alpha = 1.

Two lower bounds used to estimate the test marginal log-likelihood p(x) of the JMVAE are as follows:

L_single(x) = E_{q_{\phi_x}(z|x)}[log (p_{\theta_x}(x|z) p(z)) / q_{\phi_x}(z|x)],   (11)

L_multiple(x) = E_{q_\phi(z|x,w)}[log (p_{\theta_x}(x|z) p(z)) / q_\phi(z|x,w)].   (12)

Figure 6: Comparison of the original images and the images reconstructed by the JMVAE (\alpha = 0.1), using the multiple and the single encoder. We used the (a) MNIST and (b) CelebA datasets.

We can also estimate the test conditional log-likelihood p(x|w) from these two lower bounds as

L_single(x|w) = E_{q_{\phi_w}(z|w)}[log p(x, z|w) / q_{\phi_w}(z|w)] = E_{q_{\phi_w}(z|w)}[log (p_{\theta_x}(x|z) p_{\theta_w}(w|z) p(z)) / q_{\phi_w}(z|w)] - log p(w),

L_multiple(x|w) = E_{q_\phi(z|x,w)}[log (p_{\theta_x}(x|z) p_{\theta_w}(w|z) p(z)) / q_\phi(z|x,w)] - log p(w).

We can obtain a tighter bound on the log-likelihood by k-fold importance weighted sampling. For example, we obtain an importance weighted bound on log p(x) from Equation 11 as follows:

log p(x) >= E_{z_1,...,z_k ~ q_{\phi_x}(z|x)}[log (1/k) \sum_{i=1}^{k} (p_{\theta_x}(x|z_i) p(z_i)) / q_{\phi_x}(z_i|x)].

Strictly speaking, these two lower bounds are not equal. However, if the number of importance samples is extremely large, the difference between the two bounds converges to 0:

lim_{k -> \infty} L^k_multiple = lim_{k -> \infty} L^k_single = log p(x).

Proof. Let the multiple and single k-fold importance weighted lower bounds be L^k_multiple and L^k_single. As importance weighted bounds, both L^k_single and L^k_multiple converge to log p(x) as k -> \infty.
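The importance weighted estimator above can be computed as in the following sketch (kept in the log domain for numerical stability); the encoder/decoder interfaces and the Bernoulli likelihood are illustrative assumptions.

```python
import torch

def iw_log_likelihood(enc, dec_x, x, k=500):
    """k-fold importance weighted estimate of log p(x), Eq. (11) style:
    log (1/k) sum_i p(x|z_i) p(z_i) / q(z_i|x), with z_i ~ q(z|x)."""
    mu, log_var = enc(x)                                  # q(z|x)
    std = (0.5 * log_var).exp()
    z = mu + std * torch.randn(k, *mu.shape)              # k samples per input
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
    log_p_z = torch.distributions.Normal(0., 1.).log_prob(z).sum(-1)
    log_p_x = torch.distributions.Bernoulli(probs=dec_x(z)).log_prob(x).sum(-1)
    log_w = log_p_x + log_p_z - log_q                     # log importance weights
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(k)))
```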
Figure 6 presents a comparison of the original images and the images reconstructed by the JMVAE on both the MNIST and CelebA datasets. It is apparent that the JMVAE can reconstruct the original images properly with either the multiple or the single encoder.

Table 3: Evaluation of test log-likelihood. All models are trained on the MNIST dataset: left, marginal log-likelihood; right, conditional log-likelihood.

[Figure 7 image: digit images generated by JMVAE-zero (upper) and JMVAE-kl with \alpha = 0.1 (lower).]

Figure 7: Image generation from the conditional distribution p(x|w). We used the single encoder q(z|w) for both generations.

Figure 7 presents generated samples of x conditioned on the single input w. It is apparent that the JMVAE with JMVAE-kl generates the conditioned digit images properly, although that with JMVAE-zero cannot generate them. These results also confirm qualitatively that JMVAE-kl can model q_{\phi_w}(z|w) properly compared to JMVAE-zero.

Table 3 shows the joint log-likelihood of the JMVAE on the MNIST dataset with both JMVAE-zero and JMVAE-kl. It is apparent that the test log-likelihood of the two approaches is almost identical (strictly, JMVAE-zero is slightly lower). The test log-likelihood of JMVAE-kl becomes much lower if \alpha is large."}]
B1IzH7cxl
[{"section_index": "0", "section_name": "A NEURAL STOCHASTIC VOLATILITY MODEI", "section_text": "Rui Luo', Xiaojun Xu, Weinan Zhang', Jun Wang\nIn this paper, we show that the recent integration of statistical models with re. current neural networks provides a new way of formulating volatility models that. have been popular in time series analysis and prediction. The model comprises a. pair of complementary stochastic recurrent neural networks: the generative net-. work models the joint distribution of the stochastic volatility process; the inference. network approximates the conditional distribution of the latent variables given the. observable ones. Our focus in this paper is on the formulation of temporal dynam-. ics of volatility over time under a stochastic recurrent neural network framework Our derivations show that some popular volatility models are a special case of. our proposed neural stochastic volatility model. Experiments demonstrate that. the proposed model generates a smoother volatility estimation, and outperforms. standard econometric models GARCH, EGARCH, GJR-GARCH and some other GARCH variants as well as MCMC-based model stochvol and a recent Gaussian. processes based volatility model GPvOL on several metrics about the fitness of. the volatility modelling and the accuracy of the prediction."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The volatility of the price movements reflects the ubiquitous uncertainty within financial markets. It is critical that the level of risk, indicated by volatility, is taken into consideration before investment decisions are made and portfolio are optimised (Hull, 2006); volatility is substantially a key variable in the pricing of derivative securities. Hence, estimating and forecasting volatility is of great im portance in branches of financial studies, including investment, risk management, security valuation and monetary policy making (Poon & Granger, 2003).\nVolatility is measured typically by using the standard deviation of price change in a fixed time in terval, such as a day, a month or a year. The higher the volatility, the riskier the asset. One o the primary challenges in designing volatility models is to identify the existence of latent (stochas tic) variables or processes and to characterise the underlying dependences or interactions betweer variables within a certain time span. A classic approach has been to handcraft the characteris tic features of volatility models by imposing assumptions and constraints, given prior knowledg and observations. Notable examples include autoregressive conditional heteroskedasticity (ARCH model (Engle, 1982) and its generalisation GARCH (Bollerslev, 1986), which makes use of autore gression to capture the properties of time-variant volatility within many time series. Heston (1993 assumed that the volatility follows a Cox-Ingersoll-Ross (CIR) process (Cox et al., 1985) and de rived a closed-form solution for options pricing. While theoretically sound, those approaches requir strong assumptions which might involve complex probability distributions and non-linear dynamic that drive the process, and in practice, one may have to impose less prior knowledge and rectify a solution under the worst-case volatility case (Avellaneda & Paras, 1996).\nIn this paper, we take a fully data driven approach and determine the configurations with as few. exogenous input as possible, or even purely from the historical data. We propose a neural network. 
re-formulation of stochastic volatility by leveraging stochastic models and recurrent neural networks (RNNs). We are inspired by the recent development on variational approaches of stochastic (deep). neural networks (Kingma & Welling, 2013; Rezende et al., 2014) to a recurrent case (Chung et al.,. 2015: Fabius & van Amersfoort, 2014: Bayer & Osendorfer, 2014), and our formulation shows thal existing volatility models such as the GARCH (Bollerslev, 1986) and the Heston model (Heston,. 1993) are the special cases of our neural stochastic volatility formulation. With the hidden latent."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Experiments with synthetic data and real-world financial data are performed, showing that the pro posed model outperforms the widely-used GARCH model on several metrics of the fitness and the. accuracy of time series modelling and prediction: it verifies our model's high flexibility and rich. expressive power.\nA notable volatility method is autoregressive conditional heteroskedasticity (ARCH) model (En gle, 1982): it can accurately capture the properties of time-variant volatility within many types of. time series. Inspired by ARCH model, a large body of diverse work based on stochastic process for volatility modelling has emerged. Bollerslev (1986) generalised ARCH model to the gener. alised autoregressive conditional heteroskedasticity (GARCH) model in a manner analogous to the. extension from autoregressive (AR) model to autoregressive moving average (ARMA) model by. introducing the past conditional variances in the current conditional variance estimation. Engle. & Kroner (1995) presented theoretical results on the formulation and estimation of multivariate. GARCH model within simultaneous equations systems. The extension to multivariate model allows. the covariances to present and depend on the historical information, which are particularly useful in. multivariate financial models. Heston (1993) derived a closed-form solution for option pricing with. stochastic volatility where the volatility process is a CIR process driven by a latent Wiener process. such that the current volatility is no longer a deterministic function even if the historical information. is provided. Notably, empirical evidences have confirmed that volatility models provide accurate. forecasts (Andersen & Bollerslev, 1998) and models such as ARCH and its descendants/variants have become indispensable tools in asset pricing and risk evaluation..\nOn the other hand, deep learning (LeCun et al., 2015; Schmidhuber, 2015) that utilises nonlinear structures known as deep neural networks, powers various applications. It has triumph over pattern recognition challenges, such as image recognition (Krizhevsky et al., 2012; He et al., 2015; van den Oord et al., 2016), speech recognition (Hinton et al., 2012; Graves et al., 2013; Chorowski et al., 2015), machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014; Luong et al., 2015) to name a few.\nTime-dependent neural networks models include RNNs with advanced neuron structure such as long. short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), gated recurrent unit (GRU) (Chc. et al.. 2014). and bidirectional RNN (BRNN) (Schuster & Paliwal. 1997). Recent results show tha RNNs excel for sequence modelling and generation in various applications (Graves, 2013; Gregor. et al., 2015). However, despite its capability as non-linear universal approximator, one of the draw-. 
Adding latent variables and their processes into neural networks would easily make the posterior computationally intractable. Recent work shows that efficient inference can be achieved via variational inference when hidden continuous variables are embedded into the neural network structure (Kingma & Welling, 2013; Rezende et al., 2014). Some early work has started to explore the use of variational inference to make RNNs stochastic (Chung et al., 2015; Bayer & Osendorfer, 2014; Fabius & van Amersfoort, 2014). Bayer & Osendorfer (2014) and Fabius & van Amersfoort (2014) considered hidden variables that are independent between time steps, whereas Fraccaro et al. (2016) utilised a backward-propagating inference network in accordance with the model's Markovian properties. Our work in this paper extends the work of Chung et al. (2015) with a focus on volatility modelling for time series. We assume that the hidden stochastic variables follow a Gaussian autoregressive process, which is then used to model both the variance and the mean. We show that the neural network formulation is a general one, which covers two major financial stochastic volatility models as special cases by defining the specific hidden variables and non-linear transforms.

Stochastic processes are often defined by stochastic differential equations (SDEs); e.g., a (univariate) generalised Wiener process is $\mathrm{d}x_t = \mu \, \mathrm{d}t + \sigma \, \mathrm{d}w_t$, where $\mu$ and $\sigma$ denote the time-invariant rates of drift and standard deviation (square root of variance), while $\mathrm{d}w_t \sim \mathcal{N}(0, \mathrm{d}t)$ is the increment of the standard Wiener process at time $t$. In a small time interval between $t$ and $t + \Delta t$, the change in the variable is $\Delta x_t = \mu \Delta t + \sigma \Delta w_t$. Letting $\Delta t = 1$, we obtain the discrete-time version of the basic volatility model:

$$x_t = x_{t-1} + \mu + \sigma \epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, 1). \quad (1)$$

The time-invariant variance $\sigma$ can be extended to be a function $\sigma_t = \sigma(x_{<t})$ relying on the history of the (observable) underlying stochastic process $\{x_{<t}\}$. The current variance $\sigma_t$ is therefore determined given the history $\{x_{<t}\}$ up to time $t$. An example of such extensions is the univariate GARCH(1,1) model (Bollerslev, 1986):

$$\sigma_t^2 = \alpha_0 + \alpha_1 (x_{t-1} - \mu_{t-1})^2 + \beta_1 \sigma_{t-1}^2, \quad (2)$$

where $x_{t-1}$ is the observation from $\mathcal{N}(\mu_{t-1}, \sigma_{t-1}^2)$ at time $t-1$. Note that the determinism is in a conditional sense, which means that it only holds under the condition that the complete history $\{x_{<t}\}$ is presented, such as in the case of a 1-step-ahead forecast; otherwise the current volatility would still be stochastic, as it is built on the stochastic process $\{x_t\}$. However, for multi-step-ahead forecasts, we usually exploit the relation $\mathbb{E}_{t-1}[(x_t - \mu_t)^2] = \sigma_t^2$ to substitute the corresponding terms and calculate the forecasts with a longer horizon in a recursive fashion; for example, $\hat{\sigma}_{t+1}^2 = \alpha_0 + \alpha_1 \mathbb{E}_{t-1}[(x_t - \mu_t)^2] + \beta_1 \sigma_t^2 = \alpha_0 + (\alpha_1 + \beta_1) \sigma_t^2$. For an $n$-step-ahead forecast, there will be $n$ iterations and the procedure is hence also deterministic.
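A minimal Python sketch of this recursive multi-step GARCH(1,1) variance forecast is given below. The constant trend and the variance seed are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def garch_variance_forecast(x, mu, alpha0, alpha1, beta1, horizon):
    """Recursive n-step-ahead GARCH(1,1) variance forecast.

    x:  1-D array of past observations x_1..x_T
    mu: constant trend estimate (an assumption made for simplicity)
    Returns forecast variances for t = T+1 .. T+horizon.
    """
    # Filter the conditional variance through the observed history,
    # seeding sigma^2 with the sample variance of the residuals.
    resid2 = (x - mu) ** 2
    sigma2 = resid2.mean()
    for r2 in resid2:
        sigma2 = alpha0 + alpha1 * r2 + beta1 * sigma2

    # Beyond one step, E[(x_t - mu_t)^2] = sigma_t^2, so the recursion
    # collapses to sigma^2_{t+1} = alpha0 + (alpha1 + beta1) * sigma^2_t.
    forecasts = []
    for _ in range(horizon):
        forecasts.append(sigma2)
        sigma2 = alpha0 + (alpha1 + beta1) * sigma2
    return np.array(forecasts)

x = np.random.randn(500) * 0.1      # toy data
print(garch_variance_forecast(x, mu=0.0, alpha0=1e-4,
                              alpha1=0.1, beta1=0.85, horizon=5))
```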
Another extension is applicable for $\sigma_t$, from being conditionally deterministic (i.e. deterministic given the complete history $\{x_{<t}\}$) to fully stochastic: $\sigma_t = \sigma(z_{\leq t})$ is driven by another latent stochastic process $\{z_t\}$ instead of the observable process $\{x_t\}$. The Heston (1993) model instantiates a continuous-time stochastic volatility model for univariate processes:

$$\mathrm{d}x_t = (\mu - 0.5 \sigma_t^2) \, \mathrm{d}t + \sigma_t \, \mathrm{d}w_t^{(1)}, \quad (3)$$
$$\mathrm{d}\sigma_t = a \sigma_t \, \mathrm{d}t + b \, \mathrm{d}w_t^{(2)}, \quad (4)$$

where the correlation between $\mathrm{d}w_t^{(1)}$ and $\mathrm{d}w_t^{(2)}$ applies: $\mathbb{E}[\mathrm{d}w_t^{(1)} \mathrm{d}w_t^{(2)}] = \rho \, \mathrm{d}t$. We apply Euler's scheme of quantisation (Stoer & Bulirsch, 2013) to obtain the discrete analogue to the continuous-time Heston model (Eqs. (3) and (4)):

$$x_t = x_{t-1} + \mu - 0.5 \sigma_t^2 + \sigma_t \epsilon_t, \quad \text{where} \quad \sigma_t = (1 + a) \sigma_{t-1} + b z_t. \quad (5)$$

As discussed above, the observable variable $x_t$ follows a Gaussian distribution whose mean and variance depend on the history of the observable process $\{x_t\}$ and the latent process $\{z_t\}$. We presume in addition that the latent process $\{z_t\}$ is an autoregressive model such that $z_t$ is (conditionally) Gaussian distributed. Therefore, we formulate the volatility model in general as:

$$z_t \sim \mathcal{N}(\mu_t^z(z_{<t}), \Sigma_t^z(z_{<t})), \quad (6)$$
$$x_t \sim \mathcal{N}(\mu_t^x(x_{<t}, z_{\leq t}), \Sigma_t^x(x_{<t}, z_{\leq t})), \quad (7)$$

where $\mu_t^z(z_{<t})$ and $\Sigma_t^z(z_{<t})$ denote the autoregressive time-varying mean and variance of the latent variable $z_t$, while $\mu_t^x(x_{<t}, z_{\leq t})$ and $\Sigma_t^x(x_{<t}, z_{\leq t})$ represent the mean and variance of the observable variable $x_t$, which depend not only on the history of the observable process $\{x_{<t}\}$ but also on that of the latent process $\{z_{\leq t}\}$.

These two formulas (Eqs. (6) and (7)) abstract the generalised formulation of volatility models. Together, they represent a broad family of volatility models with latent variables, where the Heston model for stochastic volatility is merely a special case of the family. Furthermore, it will degenerate to deterministic volatility models such as the well-studied GARCH model if we disable the latent process.

In this section, we establish the neural stochastic volatility model (NSVM) for stochastic volatility estimation and forecast."}, {"section_index": "3", "section_name": "4.1 GENERATING OBSERVABLE SEQUENCE", "section_text": "Recall that the latent variable $z_t$ (Eq. (6)) and the observable $x_t$ (Eq. (7)) are described by autoregressive models ($x_t$ has the exogenous input $\{z_{\leq t}\}$). For the distributions of $\{z_t\}$ and $\{x_t\}$, the following factorisation applies:

$$p_\Phi(Z) = \prod_t p_\Phi(z_t | z_{<t}) = \prod_t \mathcal{N}(z_t; \mu_t^z(z_{<t}), \Sigma_t^z(z_{<t})), \quad (8)$$
$$p_\Phi(X | Z) = \prod_t p_\Phi(x_t | x_{<t}, z_{\leq t}) = \prod_t \mathcal{N}(x_t; \mu_t^x(x_{<t}, z_{\leq t}), \Sigma_t^x(x_{<t}, z_{\leq t})), \quad (9)$$

where $X = \{x_t\}$ and $Z = \{z_t\}$ are the sequences of observable and latent variables, respectively, while $\Phi$ represents the parameter set of the model. The full generative model is defined as the joint distribution:

$$p_\Phi(X, Z) = \prod_t p_\Phi(x_t | x_{<t}, z_{\leq t}) \, p_\Phi(z_t | z_{<t}) = \prod_t \mathcal{N}(z_t; \mu_t^z, \Sigma_t^z) \, \mathcal{N}(x_t; \mu_t^x, \Sigma_t^x). \quad (10)$$

It is observed that the means and variances are conditionally deterministic: given the historical information $\{z_{<t}\}$, the current mean $\mu_t^z = \mu^z(z_{<t})$ and variance $\Sigma_t^z = \Sigma^z(z_{<t})$ of $z_t$ are obtained, and hence the distribution $\mathcal{N}(z_t; \mu_t^z, \Sigma_t^z)$ of $z_t$ is specified; after sampling $z_t$ from the specified distribution, we incorporate $\{x_{<t}\}$ and calculate the current mean $\mu_t^x = \mu^x(x_{<t}, z_{\leq t})$ and variance $\Sigma_t^x = \Sigma^x(x_{<t}, z_{\leq t})$ of $x_t$, and determine its distribution $\mathcal{N}(x_t; \mu_t^x, \Sigma_t^x)$. It is natural and convenient to present such a procedure in a recurrent fashion because of its autoregressive nature. As RNNs can essentially approximate arbitrary functions of recurrent form (Hammer, 2000), the means and variances, which may be driven by complex non-linear dynamics, can be efficiently computed using RNNs.

It is always good practice to reparameterise the random variables before we go into the RNN architecture. As the covariance matrix $\Sigma$ is symmetric and positive definite, it can be factorised as $\Sigma = U \Lambda U^\top$, where $\Lambda$ is a full-rank diagonal matrix with positive diagonal elements. Letting $A = U \Lambda^{1/2}$, we have $\Sigma = A A^\top$. Hence we can reparameterise the latent variable $z_t$ (Eq. (6)) and the observable $x_t$ (Eq. (7)):

$$z_t = \mu_t^z + A_t^z \epsilon_t^z, \qquad x_t = \mu_t^x + A_t^x \epsilon_t^x, \quad (11, 12)$$

where $A_t^z (A_t^z)^\top = \Sigma_t^z$, $A_t^x (A_t^x)^\top = \Sigma_t^x$, and $\epsilon_t^z \sim \mathcal{N}(0, I_z)$, $\epsilon_t^x \sim \mathcal{N}(0, I_x)$ are auxiliary variables. Note that the randomness within the variables of interest (e.g. $z_t$) is extracted by the auxiliary variables (e.g. $\epsilon_t^z$), which follow standard distributions. Hence, the reparameterisation guarantees that gradient-based methods can be applied in the learning phase (Kingma & Welling, 2013).
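A short numpy sketch of this reparameterised sampling step follows. The eigendecomposition route to the factor $A$ is one valid choice among several (a Cholesky factor would also satisfy $A A^\top = \Sigma$); it is shown here purely for illustration.

```python
import numpy as np

def sample_gaussian(mu, Sigma, rng):
    """Reparameterised draw from N(mu, Sigma), as in Eqs. (11)-(12).

    Sigma is factorised as A A^T via Sigma = U Lambda U^T with
    A = U Lambda^{1/2}, so the sample is a deterministic function of
    (mu, Sigma) and standard noise epsilon.
    """
    lam, U = np.linalg.eigh(Sigma)            # Sigma = U diag(lam) U^T
    A = U * np.sqrt(np.maximum(lam, 0.0))     # A = U Lambda^{1/2}
    eps = rng.standard_normal(mu.shape)       # auxiliary variable
    return mu + A @ eps, eps

rng = np.random.default_rng(0)
z, eps = sample_gaussian(np.zeros(2),
                         np.array([[1.0, 0.3], [0.3, 0.5]]), rng)
```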
In this paper, the joint generative model is comprised of two pairs of RNN and multilayer perceptron (MLP): RNN_G^z/MLP_G^z for the latent variable and RNN_G^x/MLP_G^x for the observables. We stack these two RNN/MLP pairs together according to the causal dependency between the variables. The joint generative model is implemented as the generative network:

$$h_t^z = \mathrm{RNN}_G^z(h_{t-1}^z, z_{t-1}; \Phi), \quad \{\mu_t^z, A_t^z\} = \mathrm{MLP}_G^z(h_t^z; \Phi), \quad z_t = \mu_t^z + A_t^z \epsilon_t^z, \quad (13\text{--}15)$$
$$h_t^x = \mathrm{RNN}_G^x(h_{t-1}^x, x_{t-1}, z_t; \Phi), \quad \{\mu_t^x, A_t^x\} = \mathrm{MLP}_G^x(h_t^x; \Phi), \quad x_t = \mu_t^x + A_t^x \epsilon_t^x, \quad (16\text{--}18)$$

where $h_t^z$ and $h_t^x$ denote the hidden states of the corresponding RNNs. The MLPs map the hidden states of the RNNs into the means and deviations of the variables of interest. The parameter set $\Phi$ is comprised of the weights of the RNNs and MLPs.

One should notice that when the latent variable $z_t$ is obtained, e.g. by inference (details in the next subsection), the conditional distribution $p_\Phi(X|Z)$ (Eq. (9)) will be involved in generating the observable $x_t$, instead of the joint distribution $p_\Phi(X, Z)$ (Eq. (10)). This is essentially the scenario of predicting future values of the observable variable given its history. We will use the term "generative model" and will not discriminate between the joint generative model and the conditional one, as the intended meaning can be inferred from context."}, {"section_index": "4", "section_name": "4.2 INFERENCING THE LATENT PROCESS", "section_text": "As the generative model involves the latent variable $z_t$, its true values are inaccessible even when we have observed $x_t$. Hence, the marginal likelihood $p_\Phi(X)$ becomes the key that bridges the model and the data. The calculation of the marginal likelihood involves the posterior distribution $p_\Phi(Z|X)$, which is often intractable as complex integrals are involved; we are unable to learn the parameters or to infer the latent variables directly. Therefore, we consider instead a restricted family of tractable distributions $q_\Psi(Z|X)$, referred to as the approximate posterior family, as approximations to the true posterior $p_\Phi(Z|X)$, such that the family is sufficiently rich and flexible to provide good approximations (Bishop, 2006; Kingma & Welling, 2013; Rezende et al., 2014).

We define the inference model in accordance with the approximate posterior family we have presumed, in a similar fashion as (Chung et al., 2015), where the factorised distribution is formulated as follows:

$$q_\Psi(Z|X) = \prod_t q_\Psi(z_t | z_{<t}, x_{<t}) = \prod_t \mathcal{N}(z_t; \tilde{\mu}_t^z(z_{<t}, x_{<t}), \tilde{\Sigma}_t^z(z_{<t}, x_{<t})), \quad (19)$$

where $\tilde{\mu}_t^z(z_{<t}, x_{<t})$ and $\tilde{\Sigma}_t^z(z_{<t}, x_{<t})$ are functions of the historical information $\{z_{<t}\}$ and $\{x_{<t}\}$, representing the approximated mean and variance of the latent variable $z_t$, respectively. Note that $\Psi$ represents the parameter set of the inference model.

The inference model essentially describes an autoregressive model on $z_t$ with exogenous input $x_t$. Hence, in a similar fashion as the generative model, we implement the inference model as the inference network using an RNN/MLP pair:

$$h_t^z = \mathrm{RNN}_I(h_{t-1}^z, z_{t-1}, x_{t-1}; \Psi), \quad \{\tilde{\mu}_t^z, \tilde{A}_t^z\} = \mathrm{MLP}_I(h_t^z; \Psi), \quad z_t = \tilde{\mu}_t^z + \tilde{A}_t^z \epsilon_t^z. \quad (20\text{--}22)$$
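The following PyTorch sketch shows the shape of one generative step (Eqs. (13)-(18)). The layer sizes, the diagonal covariance and the log-deviation readout are illustrative assumptions, not the paper's exact configuration; the inference network (Eqs. (20)-(22)) has the same shape, with $x_{t-1}$ as an extra input to its RNN.

```python
import torch
import torch.nn as nn

class GenerativeNet(nn.Module):
    """Minimal sketch of the NSVM generative step, under assumed sizes."""
    def __init__(self, x_dim=1, z_dim=2, hidden=10):
        super().__init__()
        self.rnn_z = nn.LSTMCell(z_dim, hidden)          # RNN_G^z
        self.mlp_z = nn.Linear(hidden, 2 * z_dim)        # MLP_G^z -> (mu, log sigma)
        self.rnn_x = nn.LSTMCell(x_dim + z_dim, hidden)  # RNN_G^x
        self.mlp_x = nn.Linear(hidden, 2 * x_dim)        # MLP_G^x -> (mu, log sigma)

    def step(self, state, x_prev, z_prev):
        (hz, cz), (hx, cx) = state
        hz, cz = self.rnn_z(z_prev, (hz, cz))                       # Eq. (13)
        mu_z, log_sig_z = self.mlp_z(hz).chunk(2, dim=-1)           # Eq. (14)
        z = mu_z + log_sig_z.exp() * torch.randn_like(mu_z)         # Eq. (15)
        hx, cx = self.rnn_x(torch.cat([x_prev, z], -1), (hx, cx))   # Eq. (16)
        mu_x, log_sig_x = self.mlp_x(hx).chunk(2, dim=-1)           # Eq. (17)
        x = mu_x + log_sig_x.exp() * torch.randn_like(mu_x)         # Eq. (18)
        return ((hz, cz), (hx, cx)), z, x

net = GenerativeNet()
zeros = lambda d: torch.zeros(4, d)                  # batch of 4
state = ((zeros(10), zeros(10)), (zeros(10), zeros(10)))
state, z, x = net.step(state, zeros(1), zeros(2))
```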
"}, {"section_index": "5", "section_name": "4.3 FORECASTING OBSERVATIONS IN FUTURE", "section_text": "In the realm of time series analysis, we usually pay more attention to forecasting than to generation (Box et al., 2015). This means that we are essentially more interested in the generation procedure conditioned on the historical information, rather than generation purely based on an a priori belief, since the past observations $x_{<t}$ influence our belief about the latent variable $z_t$. Therefore, we apply the approximate posterior distribution of the latent variable $z_t$ (Eq. (19)), as discussed in the previous subsection, in place of the prior distribution (Eq. (8)) to build our predictive model.

[Figure 1 appears here in the original, depicting the inference and generative networks unrolled over time.]
Figure 1: Forecasting the future using the Neural Stochastic Volatility Model.

Given the historical observations $x_{<t}$, the predictive model infers the current value of the latent variable $z_t$ using the inference network and then generates the prediction of the current observation $x_t$ using the generative network. The procedure of forecasting is shown in Fig. 1.

NSVM is learned using Stochastic Gradient Variational Bayes, following (Kingma & Welling, 2013; Rezende et al., 2014). For readability, we provide the detailed derivation in Appendix A.

Although we refer to both GARCH and Heston as volatility models, their purposes are quite different: GARCH is a predictive model used for volatility forecasting, whereas Heston is more of a generative model of the underlying dynamics, which facilitates closed-form solutions to SDEs in option pricing. The proposed NSVM has close relations to the GARCH(1,1) and Heston models: both of them can be regarded as special cases of the neural network formulation. Recall Eq. (2): GARCH(1,1) is formulated as $\sigma_t^2 = \alpha_0 + \alpha_1 (x_{t-1} - \mu_{t-1})^2 + \beta_1 \sigma_{t-1}^2$, where $\mu_{t-1}$ is the trend estimate of $\{x_t\}$ at time step $t-1$ calculated by some mean model. A common practice is to assume that $\mu_t$ follows the ARMA family (Box et al., 2015) or, even simpler, is a constant such that $\mu_t = \mu$. We adopt the constant trend for simplicity, as our focus is on volatility estimation.

We define the hidden state as $h_t^x = [\mu, \sigma_t]^\top$ and disable the latent variable ($z_t = 0$), as the volatility modelled by GARCH(1,1) is conditionally deterministic. Hence, we instantiate the generative network (Eqs. (16), (17) and (18)) as follows:

$$\{\mu_t, \sigma_t\} = \mathrm{MLP}_G(h_t^x; \Phi) = \{[1, 0] h_t^x, \; [0, 1] h_t^x\}, \quad (23)$$
$$h_t^x = \mathrm{RNN}_G(h_{t-1}^x, x_{t-1}; \Phi) = \begin{bmatrix} \mu \\ \sqrt{\alpha_0 + \alpha_1 (x_{t-1} - [1, 0] h_{t-1}^x)^2 + \beta_1 ([0, 1] h_{t-1}^x)^2} \end{bmatrix}, \quad (24)$$
$$x_t = \mu_t + \sigma_t \epsilon_t, \quad \text{where } \epsilon_t \sim \mathcal{N}(0, 1). \quad (25)$$

The set of generative parameters is $\Phi = \{\mu, \alpha_0, \alpha_1, \beta_1\}$.

Next, we show the link between NSVM and the (discrete-time) Heston model (Eq. (5)). Letting $h_t^x = [x_{t-1}, \mu, \sigma_t]^\top$ be the hidden state and $z_t$ be i.i.d. standard Gaussian instead of an autoregressive variable, we represent the Heston model in the framework of NSVM as:

$$\{\mu_t, \sigma_t\} = \mathrm{MLP}_G(h_t^x; \Phi) = \{[1, 1, 0] h_t^x - [0, 0, 0.5] (h_t^x)^2, \; [0, 0, 1] h_t^x\}, \quad (26, 27)$$
$$h_t^x = \mathrm{RNN}_G(h_{t-1}^x, x_{t-1}, z_t; \Phi) = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 + a \end{bmatrix} h_{t-1}^x + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} x_{t-1} + \begin{bmatrix} 0 \\ 0 \\ b \end{bmatrix} z_t, \quad (28)$$
$$x_t = \mu_t + \sigma_t \epsilon_t. \quad (29)$$

The set of generative parameters is $\Phi = \{\mu, a, b\}$.

One should notice that, in practice, the formulation may change in accordance with the specific architecture of the neural networks involved in building the model, and hence a closed-form representation may be absent.
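To make the GARCH(1,1) special case concrete, here is a minimal numpy sketch of one generative step under the instantiation in Eqs. (23)-(25); the parameter values in the demo are illustrative assumptions.

```python
import numpy as np

def garch_as_nsvm_step(h_prev, x_prev, phi):
    """One generative step of the GARCH(1,1) special case (Eqs. (23)-(25)).

    The hidden state is h = [mu, sigma_t]; the "MLP" is a pair of
    selector rows and the "RNN" is the GARCH variance recursion.
    """
    mu, a0, a1, b1 = phi["mu"], phi["alpha0"], phi["alpha1"], phi["beta1"]
    # RNN_G: deterministic state update (Eq. (24))
    sigma = np.sqrt(a0 + a1 * (x_prev - h_prev[0]) ** 2 + b1 * h_prev[1] ** 2)
    h = np.array([mu, sigma])
    # MLP_G: read out mean and deviation with selector rows (Eq. (23))
    mu_t, sigma_t = np.array([1.0, 0.0]) @ h, np.array([0.0, 1.0]) @ h
    # Observation model (Eq. (25))
    x = mu_t + sigma_t * np.random.randn()
    return h, x

phi = {"mu": 0.0, "alpha0": 1e-4, "alpha1": 0.1, "beta1": 0.85}
h, x = np.array([0.0, 0.05]), 0.0
for _ in range(10):
    h, x = garch_as_nsvm_step(h, x, phi)
```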
"}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "In this section, we present our experiments on both synthetic and real-world datasets to validate the effectiveness of NSVM (repeatable experiment code: https://github.com/xxj96/nsvm)."}, {"section_index": "7", "section_name": "5.1 BASELINES AND EVALUATION METRICS", "section_text": "To evaluate the performance of volatility modelling, we adopt the standard econometric model GARCH(1,1) (Bollerslev, 1986) as well as its variants EGARCH(1,1) (Nelson, 1991), GJR-GARCH(1,1,1) (Glosten et al., 1993), ARCH(5), TARCH(1,1,1), APARCH(1,1,1), AGARCH(1,1,1), NAGARCH(1,1,1), IGARCH(1,1), IAVGARCH(1,1) and FIGARCH(1,d,1) as baselines, each paired with the corresponding mean model AR(20). We also compare our NSVM against an MCMC-based model, "stochvol", and the recent Gaussian-processes-based model GPVOL (Wu et al., 2014), which is a non-parametric model jointly learning the dynamics and hidden states via an online inference algorithm. In addition, we set up a naive forecasting model as an alternative baseline, referred to as NAIVE, which maintains a sliding window of size 20 over the most recent historical observations and forecasts the current values of the mean and volatility by the average mean and variance of the window.

For the synthetic data experiments, we take four metrics into consideration for performance evaluation: 1) the negative log-likelihood (NLL) of observing the test sequence with respect to the generative model parameters; 2) the mean-squared error (MSE) between the predicted mean and the ground truth (mu-MSE); 3) the MSE of the predicted variance against the true variance (sigma2-MSE); 4) the smoothness of fit, which is the standard deviation of the differences of successive variance estimates. As for the real-world scenarios, the trend and volatility are implicit, so no ground truth is accessible to compare with; we consider only NLL and smoothness as the metrics for evaluation in the real-world data experiment.
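These metrics follow directly from their definitions; a small numpy sketch of how they might be computed (under the standing Gaussian observation model) is below.

```python
import numpy as np

def gaussian_nll(x, mu, sigma2):
    """Average negative log-likelihood of x under N(mu, sigma2), per step."""
    return float(np.mean(0.5 * (np.log(2 * np.pi * sigma2)
                                + (x - mu) ** 2 / sigma2)))

def smoothness(sigma2_hat):
    """Standard deviation of successive differences of variance estimates."""
    return float(np.std(np.diff(sigma2_hat)))

def mse(a, b):
    """Mean-squared error, used for both mu-MSE and sigma2-MSE."""
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
```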
"}, {"section_index": "8", "section_name": "5.2 MODEL IMPLEMENTATION", "section_text": "The implementation of NSVM in our experiments follows the architecture illustrated in Fig. 1: it consists of two neural networks, namely the inference network and the generative network. Each network comprises a set of RNN/MLP modules as discussed above: the RNN is instantiated by stacked LSTM layers, whereas the MLP is essentially a 1-layer fully-connected feedforward network which splits into two equal-sized sublayers with different activation functions - one sublayer applies the exponential function to impose non-negativity and prevent overshooting of the variance estimates, while the other uses a linear function to calculate the mean estimates. During the experiments, the model is structured by cascading the inference network and the generative network as depicted in Fig. 1. The input layer is of size 20, which is the same as the embedding dimension $D_E$; the layer on the interface of the inference network and the generative network - we call it the latent variable layer - represents the latent variable $z$, whose dimension is 2. The output layer has the same structure as the input one; the latent variable layer therefore acts as a bottleneck of the entire architecture, which helps to extract the key factors. The stacked layers between the input layer, the latent variable layer and the output layer are the hidden layers of either the inference network or the generative network; each consists of 1 or 2 LSTM layers of size 10, which contain recurrent connections for modelling temporal dependencies.

For the econometric models, we utilise several widely-used packages for time series analysis: statsmodels (http://statsmodels.sourceforge.net/), arch (https://pypi.python.org/pypi/arch/3.2), the Oxford-MFE-toolbox (https://www.kevinsheppard.com/MFE_Toolbox), stochvol (https://cran.r-project.org/web/packages/stochvol) and fGarch (https://cran.r-project.org/web/packages/fGarch). The implementation of GPVOL is retrieved from http://jmhl.org and we adopt the same hyperparameter setting as in Wu et al. (2014)."}, {"section_index": "9", "section_name": "5.3 SYNTHETIC DATA EXPERIMENT", "section_text": "We build the synthetic dataset by generating 256 heteroskedastic univariate time series, each with 2000 data points, i.e. 2000 time steps. At each time step, the observation is drawn from a Gaussian distribution with pre-determined mean and variance, where the tendencies of the mean and variance are synthesised as linear combinations of sine functions. Specifically, for the trend and the variance, we synthesise each using 3 sine functions with randomly chosen amplitudes and frequencies; the value of the synthesised signal at each time step is then drawn from a Gaussian distribution with the corresponding value of the trend and variance at that time step. A sampled sequence is shown in Fig. 2a. We expect that this limited dataset can well simulate real-world scenarios: one usually has very limited chances to observe and collect a large amount of data from time-invariant distributions. In addition, it seems that every observable or latent quantity within a time series varies from time to time and seldom repeats old patterns. Hence, we presume that the tendency shows long-term patterns and that the period of the tendency is longer than the observation window. In the experiment, we take the first 1500 time steps as the training set and the remaining 500 as the test set.

For the synthetic data experiment, we simplify the recurrent layers in both the inference net and the generative net to a single LSTM layer of size 10. The actual input $\tilde{x}_t$ fed to NSVM is the $D_E$-dimensional time-delay embedding (Kennel et al., 1992) of the raw univariate observation $\{x_t\}$, such that $\tilde{x}_t = [x_{t+1-D_E}, \ldots, x_t]$. A 2-dimensional latent variable $z_t$ is adopted to capture the latent process, and we enforce an orthogonal representation of the process by using a diagonal covariance matrix. At each time step, 30 samples of the latent variable $z_t$ are generated via reparameterisation (Eq. (22)).
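The time-delay embedding is a simple sliding window; a numpy sketch is shown below for concreteness.

```python
import numpy as np

def time_delay_embedding(x, d_e=20):
    """D_E-dimensional time-delay embedding of a univariate series.

    Row t holds [x_{t+1-D_E}, ..., x_t]; the first usable row starts at
    index d_e - 1, so the embedded series is shorter by d_e - 1 steps.
    """
    x = np.asarray(x)
    return np.stack([x[t + 1 - d_e: t + 1] for t in range(d_e - 1, len(x))])

emb = time_delay_embedding(np.arange(25.0), d_e=20)
print(emb.shape)   # (6, 20)
```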
"}, {"section_index": "10", "section_name": "5.4 REAL-WORLD DATA EXPERIMENT", "section_text": "We select 162 out of more than 1500 stocks from the Chinese stock market and collect the time series of their daily closing prices from 3 institutions in China. We favour those with an earlier listing date (from 2006 or earlier) and fewer suspension days (at most 50 suspension days in total during the period of observation), so as to reduce the noise introduced by insufficient observations or missing values, which has a significant influence on performance but is essentially irrelevant to the purpose of volatility forecasting. More specifically, the dataset obtained contains 162 time series, each with 2552 data points (7 years). A sampled sequence is shown in Fig. 2b. We divide the whole dataset into two subsets: the training subset consists of the first 2000 data points, while the test subset contains the remaining 552 data points.

A similar model configuration is applied to the real-world data experiment: time-delay embedding of dimension $D_E$ on the raw univariate time series; a 2-dimensional latent variable with diagonal covariance matrix; 30 samples of the latent variable at each time step. Instead of single LSTM layers, here we adopt stacked LSTM layers composed of 2 x 10 LSTM cells.

State-of-the-art learning techniques have been applied: we introduce Dropout (Zaremba et al., 2014) into each LSTM recurrent layer and impose an L2-norm penalty on the weights of each fully-connected feedforward layer as regularisation; the NADAM optimiser (Dozat, 2015), a variant of the ADAM optimiser (Kingma & Ba, 2014) incorporating Nesterov momentum, is exploited for fast convergence; stepwise exponential learning rate decay is adopted to anneal the variations of convergence over time.

[Figure 2 appears here in the original; only the captions are recoverable:]
(a) Synthetic time series prediction. (up) The data and the predicted mean and bounds (mean +/- std). (down) The ground-truth data variance and the corresponding predictions from GARCH(1,1) and NSVM.
(b) Real-world stock price prediction. (up) The data and the predicted mean and bounds (mean +/- std). (down) The variance predictions from GARCH(1,1) and NSVM. The prediction of NSVM is smoother and more stable than that of GARCH(1,1), also yielding smaller NLL.
Figure 2: A case study of time series prediction.

"}, {"section_index": "11", "section_name": "5.5 RESULT AND DISCUSSION", "section_text": "The overall performance of NSVM and the baselines is listed in detail in Table 1, and case studies on synthetic data and real-world financial data are illustrated in Fig. 2. The results show that NSVM achieves higher accuracy for modelling heteroskedastic time series on various metrics: NLL shows the fitness of the model under the likelihood measure; the smoothness indicates that NSVM obtains a more robust representation of the latent volatility; the mu-MSE and sigma2-MSE in the synthetic data experiment imply the ability to recognise the underlying patterns of both trend and volatility, which in fact verifies our claim of NSVM's high flexibility and rich expressive power for volatility (as well as trend) modelling and forecasting compared with the baselines. Although the improvement comes at the cost of a longer training time before convergence, this can be mitigated by applying parallel computing techniques as well as more advanced network architectures or training procedures.

Table 1: Results of the experiments.

                        SYNTHETIC DATA                                     STOCK DATA
                    NLL        mu-MSE      sigma2-MSE   smoothness    NLL       smoothness
NSVM               3.932e-2   2.393e-3    6.178e-4     4.322e-3     -2.184    3.505e-3
GARCH(1,1)         6.905e-2   7.594e-3*   8.408e-4     4.616e-3     -1.961    6.659e-3
GJR-GARCH(1,1,1)   6.491e-2   7.594e-3*   7.172e-4     4.426e-3     -2.016    4.967e-3
EGARCH(1,1)        5.913e-2   7.594e-3*   8.332e-4     4.546e-3     -2.001    5.451e-3
ARCH(5)            7.577e-2   7.594e-3*   1.610e-3     5.880e-3     -1.955    7.917e-3
TARCH(1,1,1)       6.365e-2   7.594e-3*   7.284e-4     4.727e-3     -2.012    3.399e-3
APARCH(1,1,1)      6.187e-2   7.594e-3*   9.115e-4     4.531e-3     -2.014    4.214e-3
AGARCH(1,1)        6.311e-2   7.594e-3*   9.543e-4     4.999e-3     -2.008    5.847e-3
NAGARCH(1,1,1)     1.134e-1   7.594e-3*   9.516e-4     4.904e-3     -2.020    5.224e-3
IGARCH(1,1)        6.751e-2   7.594e-3*   9.322e-4     4.019e-3     -1.999    4.284e-3
IAVGARCH(1,1)      6.901e-2   7.594e-3*   7.174e-4     4.282e-3     -1.984    4.062e-3
FIGARCH(1,d,1)     6.666e-2   7.594e-3*   1.055e-3     5.045e-3     -2.002    5.604e-3
MCMC-stochvol      0.368      7.594e-3*   3.956e-2     6.421e-4     -0.909    1.511e-3
GPVOL              1.273      7.594e-3*   6.457e-1     4.142e-2     -2.052    5.739e-3
NAIVE              2.037e-1   8.423e-3    3.515e-3     2.708e-2     -0.918    7.459e-3

* The same results are obtained from the AR(20) mean models.

The newly proposed NSVM outperforms the standard econometric models GARCH(1,1), EGARCH(1,1), GJR-GARCH(1,1,1) and some other variants, as well as the MCMC-based model "stochvol" and the recent GP-based model GPVOL. Apart from the higher accuracy NSVM obtained, it provides us with the ability to simply generalise univariate time series analysis to multivariate cases by extending network dimensions and manipulating the covariance matrices. Furthermore, it allows us to implement and deploy a similar framework in other applications, for example signal processing and denoising. The shortcoming of NSVM compared to GPVOL is that the training procedure is offline: for short-term prediction, the experiments have shown the accuracy, but for long-term forecasting, the parameters need retraining, which will be rather time-consuming.
The online algorithm for inference will be one of the future works.

Specifically, our NSVM outperforms GARCH(1,1) on 142 out of 162 stocks on the metric of NLL. In particular, NSVM obtains -2.111, -2.044, -2.609 and -1.939 on the stocks corresponding to Fig 2(b), Fig 4(a), (b) and (c) respectively, each of which is better than that of GARCH (0.343, 0.589, 0.109 and 0.207 lower on NLL).

In this paper, a novel volatility model, NSVM, has been proposed for stochastic volatility estimation and forecasting. We integrated statistical models and RNNs, leveraged the characteristics of each model, organised the dependences between random variables in the form of graphical models, implemented the mappings among variables and parameters through RNNs, and finally established a powerful stochastic recurrent model with universal approximation capability. The proposed architecture comprises a pair of complementary stochastic neural networks: the generative network and the inference network. The former models the joint distribution of the stochastic volatility process with both observable and latent variables of interest; the latter provides the approximate posterior, i.e. an analytical approximation to the (intractable) conditional distribution of the latent variables given the observable ones. The parameters (and consequently the underlying distributions) are learned (and inferred) via variational inference, which maximises the lower bound on the marginal log-likelihood of the observable variables. Our NSVM has shown higher accuracy compared to GARCH(1,1), EGARCH(1,1) and GJR-GARCH(1,1,1), as well as GPVOL, for volatility modelling and forecasting on synthetic data and real-world financial data. Future work on NSVM will be to incorporate well-established models such as ARMA/ARIMA and to investigate the modelling of seasonal time series and correlated sequences.

As is known, for models that evolve explicitly in terms of the squares of the residuals ($e_t^2 = (x_t - \mu_t)^2$), e.g. GARCH, the multi-step-ahead forecasts have closed-form solutions, which means that the forecasts can be computed analytically in a recursive fashion.

On the other hand, for models that are not linear or do not explicitly evolve in terms of $e_t^2$, e.g. EGARCH (linear but not evolving in terms of $e_t^2$) and our NSVM (nonlinear and not evolving in terms of $e_t^2$), closed-form solutions are absent and thus the analytical forecast is not available. We will instead use simulation-based forecasts, which use a random number generator to simulate draws from the predicted distribution and build up a pre-specified number of paths of the variances at 1 step ahead. The draws are then averaged to produce the forecast of the next step. For an $n$-step-ahead forecast, it requires $n$ iterations of the 1-step-ahead forecast to get there.
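A generic numpy sketch of this simulation-based forecast follows. The `step_fn` hook and the toy per-step model are illustrative assumptions standing in for one inference-plus-generation step of the actual model.

```python
import numpy as np

def simulation_forecast(step_fn, state0, horizon, n_paths=1000, seed=0):
    """Simulation-based n-step-ahead forecast for models without
    closed-form multi-step solutions.

    step_fn(state, rng) -> (state, mu_t, sigma2_t) advances one path by
    one step and returns the predicted mean/variance for that step.
    """
    rng = np.random.default_rng(seed)
    mus = np.zeros((n_paths, horizon))
    sig2s = np.zeros((n_paths, horizon))
    for p in range(n_paths):
        state = state0
        for t in range(horizon):
            state, mus[p, t], sig2s[p, t] = step_fn(state, rng)
    # Average the draws over paths to produce the forecast at each step.
    return mus.mean(axis=0), sig2s.mean(axis=0)

# Toy example: a random-walk variance process as the per-step model.
def toy_step(state, rng):
    sigma2 = abs(state + 0.01 * rng.standard_normal())
    return sigma2, 0.0, sigma2

mu_hat, sig2_hat = simulation_forecast(toy_step, state0=0.04, horizon=5)
```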
NSVM is designed as an end-to-end model for volatility estimation and forecasting. It takes the prices of stocks as input and outputs the distribution of the price at the next step. It learns the dynamics using RNNs, leading to an implicit, highly non-linear formulation, where only simulation-based forecasting is available. In order to obtain reasonably accurate forecasts, the number of draws should be relatively large, which will be computationally expensive. Moreover, the number of draws will increase exponentially as the forecast horizon grows, so it will be infeasible to forecast several time steps ahead. We plan to investigate the characteristics of NSVM's long-horizon forecasts and to design a model-specific sampling method for efficient evaluation in the future."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Torben G Andersen and Tim Bollerslev. Answering the skeptics: Yes, standard volatility models do provide accurate forecasts. International Economic Review, pp. 885-905, 1998.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Justin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610, 2014.

Christopher M Bishop. Pattern recognition. Machine Learning, 128, 2006.

Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307-327, 1986.

George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. Time series analysis: forecasting and control. John Wiley & Sons, 2015.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

John C Cox, Jonathan E Ingersoll Jr, and Stephen A Ross. A theory of the term structure of interest rates. Econometrica: Journal of the Econometric Society, pp. 385-407, 1985.

Timothy Dozat. Incorporating Nesterov momentum into Adam. 2015.

Robert F Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica: Journal of the Econometric Society, pp. 987-1007, 1982.

Robert F Engle and Kenneth F Kroner. Multivariate simultaneous generalized ARCH. Econometric Theory, 11(01):122-150, 1995.

Marco Fraccaro, Soren Kaae Sonderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. arXiv preprint arXiv:1605.07571, 2016.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Barbara Hammer. On the approximation capability of recurrent neural networks. Neurocomputing, 31(1):107-123, 2000.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Steven L Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options.
Review of Financial Studies, 6(2):327-343, 1993.

Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

John C Hull. Options, futures, and other derivatives. Pearson Education India, 2006.

Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.

Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681, 1997.

Josef Stoer and Roland Bulirsch. Introduction to numerical analysis, volume 12. Springer Science & Business Media, 2013.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.

Yue Wu, Jose Miguel Hernandez-Lobato, and Zoubin Ghahramani. Gaussian process volatility model. In Advances in Neural Information Processing Systems, pp. 1044-1052, 2014.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014."}, {"section_index": "13", "section_name": "COMPLEMENTARY DISCUSSIONS OF NSVM", "section_text": "In this appendix section we present detailed derivations for NSVM, specifically the parameter learning and calibration, and the covariance reparameterisation."}, {"section_index": "14", "section_name": "A.1 LEARNING PARAMETERS / CALIBRATION", "section_text": "Given the observations X, the objective of learning is to maximise the marginal log-likelihood of X given $\Phi$, in which the posterior is involved. However, as we have discussed in the previous subsection, the true posterior is usually intractable, which means exact inference is difficult. Hence, approximate inference is applied instead of exact inference, following (Kingma & Welling, 2013; Rezende et al., 2014). We represent the marginal log-likelihood of X in the following form:

$$\ln p_\Phi(X) = \mathrm{KL}\big(q_\Psi(Z|X) \,\|\, p_\Phi(Z|X)\big) + \mathbb{E}_{q_\Psi(Z|X)}\big[\ln p_\Phi(X, Z) - \ln q_\Psi(Z|X)\big], \quad (30)$$

where the expectation term $\mathbb{E}_{q_\Psi(Z|X)}[\ln p_\Phi(X, Z) - \ln q_\Psi(Z|X)]$ is referred to as the variational lower bound $\mathcal{L}[q; X, \Phi, \Psi]$ of the approximate posterior $q_\Psi(Z|X)$. The lower bound is essentially a functional with respect to the distribution $q$, parameterised by the observations $X$ and the parameter sets $\Phi, \Psi$ of the generative and inference models. In theory, the marginal log-likelihood is maximised by optimising the lower bound $\mathcal{L}[q; X, \Phi, \Psi]$ with respect to $\Phi$ and $\Psi$.

We apply the factorisations in Eqs. (10) and (19) to the integrand within the expectation of Eq. (30):
$$\ln p_\Phi(X, Z) - \ln q_\Psi(Z|X) = \sum_t \big[ \ln \mathcal{N}(x_t; \mu_t^x, \Sigma_t^x) + \ln \mathcal{N}(z_t; \mu_t^z, \Sigma_t^z) - \ln \mathcal{N}(z_t; \tilde{\mu}_t^z, \tilde{\Sigma}_t^z) \big], \quad (31)$$

where each Gaussian log-density contributes a log-determinant term and a quadratic term of the form $-\frac{1}{2}[\ln \det \Sigma + (\cdot - \mu)^\top \Sigma^{-1} (\cdot - \mu)]$ up to constants, and where $\tilde{A}_t^z (\tilde{A}_t^z)^\top = \tilde{\Sigma}_t^z$ and $\epsilon_t^z \sim \mathcal{N}(0, I)$ is parameter-independent and considered as a constant when calculating derivatives.

As there is usually no closed-form solution for the expectation (Eq. (30)), we have to estimate the expectation by applying sampling methods to the latent variable $z_t$ through time, in accordance with the causal dependences. We utilise the reparameterisation of $z_t$ as shown in Eq. (22), such that we sample the corresponding auxiliary standard variable $\epsilon_t^z$ rather than $z_t$ itself and compute the value of $z_t$ on the fly. This ensures that gradient-based optimisation techniques are applicable, as the reparameterisation isolates the model parameters of interest from the sampling procedure. Sampling $N$ paths, the estimator of the lower bound is defined as the average over the paths:

$$\hat{\mathcal{L}}[q; X, \Phi, \Psi] = \frac{1}{N} \sum_{n=1}^{N} \sum_t \big[ \ln \mathcal{N}(x_t; \mu_t^x, \Sigma_t^x) + \ln \mathcal{N}(z_t^{(n)}; \mu_t^z, \Sigma_t^z) - \ln \mathcal{N}(z_t^{(n)}; \tilde{\mu}_t^z, \tilde{\Sigma}_t^z) \big]. \quad (32)$$

"}, {"section_index": "15", "section_name": "A.2 COVARIANCE PARAMETERISATION", "section_text": "As is known, it entails a computational complexity of $O(M^3)$ to maintain and update a full-size covariance matrix with $M$ dimensions (Rezende et al., 2014). In the case of very high dimensions, the full-size covariance matrix would be too computationally expensive to afford. Hence, we instead use covariance matrices with far fewer parameters for efficiency. The simplest setting is to use a diagonal precision matrix (i.e. the inverse of the covariance matrix), $\Sigma^{-1} = D$. However, this places very strong restrictions on the representation of the random variable of interest, as a diagonal precision matrix (and thus a diagonal covariance matrix) indicates independence among the dimensions. Therefore, the trade-off becomes a low-rank perturbation of a diagonal matrix: $\Sigma^{-1} = D + V V^\top$, where $V = \{v_1, \ldots, v_K\}$ denotes the perturbation and each $v_k$ is an $M$-dimensional column vector.

The corresponding covariance matrix and its determinant are obtained using the Woodbury identity and the matrix determinant lemma:

$$\Sigma = D^{-1} - D^{-1} V (I + V^\top D^{-1} V)^{-1} V^\top D^{-1}, \qquad \det \Sigma = \big[\det D \cdot \det(I + V^\top D^{-1} V)\big]^{-1}.$$

To calculate the deviation $A$ for the factorisation of the covariance matrix $\Sigma = A A^\top$, we first consider the rank-1 perturbation where $K = 1$. It follows that $V = v$ is a column vector, and $I + V^\top D^{-1} V = 1 + v^\top D^{-1} v$ is a real number. A particular solution of $A$ is obtained:

$$A = D^{-1/2} - \big[\gamma^{-1}(1 - \sqrt{\eta})\big] D^{-1} v v^\top D^{-1/2}, \quad \text{where } \gamma = v^\top D^{-1} v, \; \eta = (1 + \gamma)^{-1}.$$

Observe that $V V^\top = \sum_{k=1}^{K} v_k v_k^\top$: the rank-$K$ perturbation is essentially the superposition of $K$ rank-1 perturbations. Therefore, we can calculate the deviation $A$ iteratively; Algorithm 1 gives the detailed calculation scheme. The computational complexity for the rank-$K$ perturbation remains low given $K \ll M$.

Algorithm 1: Calculation of the rank-K perturbation of precision matrices
Input: the original diagonal matrix D; the rank-K perturbation V = {v_1, ..., v_K}
Output: A such that the factorisation A A^T = Sigma = (D + V V^T)^{-1} holds
1: A_(0) = D^{-1/2}
2: i = 0
3: while i < K do
4:     gamma_(i) = v_(i)^T A_(i) A_(i)^T v_(i)
5:     eta_(i) = (1 + gamma_(i))^{-1}
6:     A_(i+1) = A_(i) - [gamma_(i)^{-1} (1 - sqrt(eta_(i)))] A_(i) A_(i)^T v_(i) v_(i)^T A_(i)
7:     i = i + 1
8: A = A_(K)
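A direct numpy translation of Algorithm 1 follows, with a correctness check of the factorisation. Note that this dense sketch costs O(M^2) per rank-1 update because A is stored as a full matrix; a structured representation of A would be needed to reach the lower cost.

```python
import numpy as np

def rank_k_factor(d, V):
    """Algorithm 1: factor A with A @ A.T == (diag(d) + V @ V.T)^{-1}.

    d: positive diagonal of D, shape (M,); V: perturbation, shape (M, K).
    """
    A = np.diag(1.0 / np.sqrt(d))             # A_(0) = D^{-1/2}
    for k in range(V.shape[1]):
        v = V[:, k:k + 1]                      # column vector v_k
        gamma = float(v.T @ A @ A.T @ v)
        eta = 1.0 / (1.0 + gamma)
        A = A - (1.0 - np.sqrt(eta)) / gamma * (A @ A.T @ v) @ (v.T @ A)
    return A

rng = np.random.default_rng(0)
d, V = rng.uniform(1, 2, size=5), rng.standard_normal((5, 2))
A = rank_k_factor(d, V)
Sigma = np.linalg.inv(np.diag(d) + V @ V.T)
assert np.allclose(A @ A.T, Sigma)             # factorisation holds
```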
"}, {"section_index": "16", "section_name": "B MORE CASE STUDIES", "section_text": "The reason for the drops in Fig 4(b) and (c) seems to be that NSVM has captured the jumps and drops of the stock price using its nonlinear dynamics and modelled the sudden changes as part of the trend: the estimated trend "mu" goes very close to the real observed price even around the jumps and drops (see the upper figure of Fig 4(b) and (c) around steps 1300 and 1600). The residual (i.e. the difference between the real value of the observation and the trend of the prediction) therefore becomes quite small, which leads to a lower volatility estimation.

On the other hand, for the baselines, we adopt AR as the trend model, which is a relatively simple linear model compared with the nonlinear NSVM. AR would not capture the sudden changes and would leave those spikes in the residual; GARCH then takes the residuals as input for volatility modelling, resulting in the spikes in the volatility estimation.

[Figure 3 appears here in the original; only the captions are recoverable:]
(a) Synthetic time series prediction II. (up) The data and the predicted mean and bounds (mean +/- std). (down) The ground-truth data variance and the corresponding predictions from GARCH(1,1) and NSVM.
(b) Synthetic time series prediction IV. (up) The data and the predicted mean and bounds (mean +/- std). (down) The ground-truth data variance and the corresponding predictions from GARCH(1,1) and NSVM.
Figure 3: A case study of synthetic time series prediction.

[Figure 4 appears here in the original; only the captions are recoverable:]
(a) Real-world stock price prediction II. (up) The data and the predicted mean and bounds (mean +/- std). (down) The variance predictions from GARCH(1,1) and NSVM. The prediction of NSVM is smoother and more stable than that of GARCH(1,1), also yielding smaller NLL.
(b) Real-world stock price prediction III. (up) The data and the predicted mean and bounds (mean +/- std). (down) The variance predictions from GARCH(1,1) and NSVM. The prediction of NSVM is smoother and more stable than that of GARCH(1,1), also yielding smaller NLL.
Figure 4: A case study of real-world stock time series prediction."}]
BycCx8qex
[{"section_index": "0", "section_name": "DRAGNN: A TRANSITION-BASED FRAMEWORK FOR DYNAMICALLY CONNECTED NEURAL NETWORKS", "section_text": "Lingpeng Kong\nCarnegie Mellon University Pittsburgh, PA.\nIn this work, we present a compact, modular framework for constructing new recurrent neural architectures. Our basic module is a new generic unit, the Transi- tion Based Recurrent Unit (TBRU). In addition to hidden layer activations, TBRUs have discrete state dynamics that allow network connections to be built dynami cally as a function of intermediate activations. By connecting multiple TBRUs, we can extend and combine commonly used architectures such as sequence-to- sequence, attention mechanisms, and recursive tree-structured models. A TBRU can also serve as both an encoder for downstream tasks and as a decoder for its own task simultaneously, resulting in more accurate multi-task learning. We call our approach Dynamic Recurrent Acyclic Graphical Neural Networks, or DRAGNN. We show that DRAGNN is significantly more accurate and efficient than seq2seq with attention for syntactic dependency parsing and yields more ac- curate multi-task learning for extractive summarization tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "To apply deep learning models to structured prediction, machine learning practitioners must address. two primary issues: (1) how to represent the input, and (2) how to represent the output. The seq2seq. encoder/decoder framework (Kalchbrenner & Blunsom2013) Cho et al.2014)Sutskever et al. 2014) proposes solving these generically. In its simplest form, the encoder network produces a. fixed-length vector representation of an input, while the decoder network produces a linearization of the target output structure as a sequence of output symbols. Encoder/decoder is state of the art. for several key tasks in natural language processing, such as machine translation (Wu et al.2016)..\nHowever, fixed-size encodings become less competitive when the input structure can be explicitly. mapped to the output. In the simple case of predicting tags for individual tokens in a sentence, state- of-the-art taggers learn vector representations for each input token and predict output tags from. those (Ling et al.]2015} Huang et al.]2 2015f Andor et al.2016).When the input or output is a. syntactic parse tree, networks that explicitly operate over the compositional structure of the network. typically outperform generic representations (Dyer et al.2015[Li et al.[2015]Bowman et al.2016). Implictly learned mappings via attention mechanisms can significantly improve the performance. of sequence-to-sequence (Bahdanau et al.]2015} Vinyals et al.]2015), but require runtime that's quadratic in the input size.\nIn this work, we propose a modular neural architecture that generalizes the encoder/decoder concept to include explicit structure. Our framework can represent sequence-to-sequence learning as well as models with explicit structure like bi-directional tagging models and compositional, tree-structured models. Our core idea is to define any given architecture as a series of modular units, where con nections between modules are unfolded dynamically as a function of the intermediate activations produced by the network. 
These dynamic connections represent the explicit input and output structure produced by the network for a given task."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We build on the idea of transition systems from the parsing literature (Nivre 2006), which linearize structured outputs as a sequence of (state, decision) pairs. Transition-based neural networks have recently been applied to a wide variety of NLP problems; see Dyer et al. (2015); Lample et al. (2016); Kiperwasser & Goldberg (2016); Zhang et al. (2016); Andor et al. (2016), among others. We generalize these approaches with a new basic module, the Transition-Based Recurrent Unit (TBRU), which produces a vector representation for every transition state in the output linearization (Figure 1). These representations also serve as the encoding of the explicit structure defined by the states. For example, a TBRU that attaches two sub-trees while building a syntactic parse tree will also produce the hidden layer activations to serve as an encoding for the newly constructed phrase. Multiple TBRUs can be connected and learned jointly to add explicit structure to multi-task learning setups and share representations between tasks with different input or output spaces (Figure 2).

[Figure 1 appears here in the original; only the caption is recoverable:]
Figure 1: High level schematic of a Transition-Based Recurrent Unit (TBRU), and common network architectures that can be implemented with multiple TBRUs. The discrete state is used to compute recurrences and fixed input embeddings, which are then fed through a network cell. The network predicts an action which is used to update the discrete state (dashed output) and provides activations that can be consumed through recurrences (solid output). Note that we present a slightly simplified version of Stack-LSTM (Dyer et al. 2015) for clarity.

This inference procedure will construct an acyclic compute graph representing the network architecture, where recurrent connections are dynamically added as the network unfolds. We therefore call our approach Dynamic Recurrent Acyclic Graphical Neural Networks, or DRAGNN.

DRAGNN has several distinct modeling advantages over traditional fixed neural architectures. Unlike generic seq2seq, DRAGNN supports variable sized input representations that may contain explicit structure. Unlike purely sequential RNNs, the dynamic connections in a DRAGNN can span arbitrary distances in the input space. Crucially, inference remains linear in the size of the input, in contrast to quadratic-time attention mechanisms. Dynamic connections thus establish a compromise between pure seq2seq and pure attention architectures by providing a finite set of long-range inputs that 'attend' to relevant portions of the input space. Unlike recursive neural networks (Socher et al. 2010; 2011), DRAGNN can both predict intermediate structures (such as parse trees) and utilize those structures in a single deep model, backpropagating downstream task errors through the intermediate structures.
Compared to models such as Stack-LSTM (Dyer et al. 2015) and SPINN (Bowman et al. 2016), TBRUs are a more general formulation that allows incorporating dynamically structured multi-task learning (Zhang & Weiss 2016) and more varied network architectures.

In sum, DRAGNN is not a particular neural architecture, but rather a formulation for describing neural architectures compactly. The key to this compact description is a new recurrent unit - the TBRU - which allows connections between nodes in an unrolled compute graph to be specified dynamically in a generic fashion. We utilize transition systems to provide succinct, discrete representations via linearizations of both the input and the output for structured prediction. We provide a straightforward way of re-using representations across NLP tasks that operate on different structures.

We demonstrate the effectiveness of DRAGNN on two NLP tasks that benefit from explicit structure: dependency parsing and extractive sentence summarization (Filippova & Altun 2013). First, we show how to use TBRUs to incrementally add structure to the input and output of a "vanilla" seq2seq dependency parsing model, dramatically boosting accuracy over seq2seq with no additional computational cost. Second, we demonstrate how the same TBRUs can be used to provide structured intermediate syntactic representations for extractive sentence summarization. This yields better accuracy than is possible with the generic multi-task seq2seq approach (Dong et al. 2015; Luong et al. 2016). Finally, we show how multiple TBRUs for the same dependency parsing task can be stacked together to produce a single state-of-the-art dependency parsing model.

[Figure 2 appears here in the original; only the caption is recoverable:]
Figure 2: Using TBRUs to share fine-grained, structured representations. Top left: A high level view of multi-task learning with DRAGNN in the style of multi-task seq2seq (Luong et al. 2016). Bottom left: Extending the "stack-propagation" (Zhang & Weiss 2016) idea to include dependency parse trees as intermediate representations. Right: Unrolled TBRUs for each setup for an input fragment "Uniformed man laughed", utilizing the transition systems described in Section 4.

We use transition systems to map inputs x into a sequence of output symbols, d1 ... dn. For the purposes of implementing DRAGNN, transition systems make explicit two desirable properties. First, we stipulate that the output symbols represent modifications of a persistent, discrete state, which makes the book-keeping needed to construct the dynamic recurrent connections easier to express. Second, transition systems make it easy to enforce arbitrary constraints on the output, e.g. that the output should produce a valid tree.
Formally, we use the same setup as Andor et al. (2016), and define a transition system T = {S, A, t} as:

- A set of states S(x).
- A special start state s† ∈ S(x).
- A set of allowed decisions A(s, x) for all s ∈ S.
- A transition function t(s, d, x) returning a new state s' for any decision d ∈ A(s, x).

For brevity, we will drop the dependence on x in the functions given above. Throughout this work we will use transition systems in which all complete structures for the same input x have the same number of decisions n(x) (or n for brevity), although this is not necessary.

A complete structure is then a sequence of decision/state pairs (s1, d1) ... (sn, dn) such that s1 = s†, di ∈ A(si) for i = 1 ... n, and si+1 = t(si, di). We will now define recurrent network architectures that operate over these linearizations of input and output structure.

We now formally define how to combine transition systems with recurrent networks into what we call a transition based recurrent unit (TBRU). A TBRU consists of the following:

- A transition system T.
- An input function m(s) that maps states to fixed-size vector representations, for example an embedding lookup operation for features from the discrete state: m(s) : S -> R^K.
- A recurrence function r(s) that maps states to a set of previous time steps: r(s) : S -> P{1, ..., i - 1}, where P is the power set. Note that in general |r(s)| is not necessarily fixed and can vary with s. We use r to specify state-dependent recurrent links in the unrolled computation graph.
- An RNN cell that computes a new hidden representation from the fixed and recurrent inputs: h_s <- RNN(m(s), {h_i | i ∈ r(s)}).

[Figure 3 appears here in the original; only the caption is recoverable:]
Figure 3: Left: TBRU schematic. Right: Dependency parsing example. For the given gold dependency parse tree and an arc-standard transition state with two sub-trees on the stack, two possible actions are shown (Shift and Right arc). To reproduce the tree, the Shift action should be taken.

Inference with TBRUs. Given the above, inference in the TBRU proceeds as follows:

1. Initialize s1 = s†.
2. For i = 1, ..., n:
   (a) Update the hidden state: h_i <- RNN(m(s_i), {h_j | j ∈ r(s_i)}).
   (b) Update the transition state: d_i <- argmax_{d ∈ A(s_i)} w_d^T h_i, and s_{i+1} <- t(s_i, d_i).

A schematic overview of a single TBRU is presented in Figure 3. By adjusting RNN, r, and T, TBRUs can represent a wide variety of neural architectures.
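The inference loop above is short enough to sketch directly. The Python below is a generic, illustrative rendering: the `transitions` hooks, the shift-only demo system and the toy cell are all assumptions supplied for demonstration, not the paper's implementation.

```python
import numpy as np

def tbru_inference(transitions, m, r, rnn_cell, w, x):
    """Generic TBRU inference loop (a sketch of the procedure above).

    transitions: object with start(x), allowed(s), apply(s, d), done(s)
    m(s)            : state -> fixed-size input vector
    r(s)            : state -> list of previous step indices to attend to
    rnn_cell(v, hs) : combines the input vector with recurrent inputs
    w               : decision scoring matrix, one row per decision
    """
    s = transitions.start(x)
    hiddens, decisions = [], []
    while not transitions.done(s):
        h = rnn_cell(m(s), [hiddens[j] for j in r(s)])
        allowed = transitions.allowed(s)
        scores = w[allowed] @ h                     # score legal decisions
        d = allowed[int(np.argmax(scores))]
        hiddens.append(h)
        decisions.append(d)
        s = transitions.apply(s, d)                 # discrete state update
    return hiddens, decisions

class ShiftOnly:
    """Minimal 'shift-only' transition system: the state is an input pointer."""
    def start(self, x): self.x = x; return 0
    def allowed(self, s): return np.array([0])
    def apply(self, s, d): return s + 1
    def done(self, s): return s >= len(self.x)

cell = lambda v, hs: np.tanh(v + (np.mean(hs, axis=0) if hs else 0.0))
hs, ds = tbru_inference(ShiftOnly(), m=lambda s: np.ones(4) * s,
                        r=lambda s: [s - 1] if s > 0 else [],
                        rnn_cell=cell, w=np.ones((1, 4)),
                        x=[10, 20, 30])
```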
Example 1. Sequential tagging RNN. Let the input x = {x1, ..., xn} be a sequence of word embeddings, and the output be a sequence of tags d1, ..., dn. Then we can model a simple LSTM tagger as follows:

- T sequentially tags each input token, where s_i = {1, ..., d_{i-1}}, and A is the set of p tags. We call this the tagger transition system.
- m(s_i) = x_i, the word embedding for the next token to be tagged.
- r(s_i) = {i - 1}, to connect the network to the previous state.
- RNN is a single instance of the LSTM cell.

Example 2. Feed-forward arc-standard parser. We can also instantiate a greedy, feed-forward dependency parser in this framework:

- T is the arc-standard transition system (Figure 3), so the state contains all words and partially built trees on the stack as well as unseen words on the buffer.
- m(s_i) is the concatenation of 52 feature embeddings extracted from tokens based on their positions in the stack and the buffer.
- r(s_i) = {} is empty, as this is a feed-forward network.
- RNN is a feed-forward multi-layer perceptron (MLP).

While TBRUs are a useful abstraction for describing recurrent models, the primary motivation for this framework is to allow new architectures by combining representations across tasks and compositional structures. We do this by connecting multiple TBRUs with different transition systems via the recurrence function r(s). We formally augment the above definition as follows:

1. We execute a list of T TBRU components, one at a time, so that each TBRU advances a global step counter. Note that for simplicity, we assume an earlier TBRU finishes all of its steps before the next one starts execution.
2. Each transition state from the t'th component s^t has access to the terminal states from every prior transition system, and the recurrence function r(s^t) for any given component can pull hidden activations from every prior one as well.

Example 3. "Input" transducer TBRUs via no-op decisions. We find it useful to define TBRUs even when the transition system decisions don't correspond to any output. These TBRUs, which we call no-op TBRUs, transduce the input according to some linearization. The simplest is the shift-only transition system, in which the state is just an input pointer s_i = {i}, and there is only one transition, which advances it: t(s_i, ·) = {i + 1}. Executing this transition system will produce a hidden representation h_i for every input token.

Example 4. Encoder/decoder networks with TBRUs. We can reproduce the encoder/decoder framework for sequence tagging by using two TBRUs: one using the shift-only transition system to encode the input, and the other using the tagger transition system. For input x = {x1, ..., xn}, we connect them as follows:

- For the shift-only TBRU: m(s_i) = x_i, r(s_i) = {i - 1}.
- For the tagger TBRU: m(s_{n+i}) = y_{d_{n+i-1}}, r(s_{n+i}) = {n, n + i - 1}.

We observe that the tagger TBRU starts at step n after the shift-only TBRU finishes, that y_j is a fixed embedding vector for the output tag j, and that the tagger TBRU has access to both the final encoding vector h_n and its own previous time step h_{n+i-1}.

Example 5. Bi-directional LSTM tagger. With three TBRUs, we can implement a simple bi-directional tagger. The first two run the shift-only transition system, but in opposite directions. The final TBRU runs the tagger transition system and concatenates the two representations:

- Left to right: T = shift-only, m(s_i) = x_i, r(s_i) = {i - 1}.
- Right to left: T = shift-only, m(s_{n+i}) = x_{n-i}, r(s_{n+i}) = {n + i - 1}.
- Tagger: T = tagger, m(s_{2n+i}) = {}, r(s_{2n+i}) = {i, 2n - i}.

We observe that the network cell in the tagger TBRU takes recurrences only from the bi-directional representations, and so is not recurrent in the traditional sense. See Figure 1 for an unrolled example.

Example 6. Multi-task bi-directional tagging. Here we observe that it's possible to add additional annotation tasks to the bi-directional TBRU stack from Example 5 simply by adding more instances of the tagger TBRUs that produce outputs from different tag sets, e.g. parts-of-speech vs. morphological tags. Most important, however, is that any additional TBRUs have access to all three earlier TBRUs.
Example 5. Multi-task bi-directional tagging. Here we observe that it's possible to add additional annotation tasks to the bi-directional TBRU stack from Example 4 simply by adding more instances of the tagger TBRUs that produce outputs from different tag sets, e.g. parts-of-speech vs. morphological tags. Most important, however, is that any additional TBRUs have access to all three earlier TBRUs. This means that we can support the "stack-propagation" (Zhang & Weiss, 2016) style of multi-task learning simply by changing r for the last TBRU:

$$\text{Traditional multi-task:}\quad r(s_{3n+i}) = \{i,\; 2n-i\}$$
$$\text{Stack-prop:}\quad r(s_{3n+i}) = \{\underbrace{i}_{\text{Left-to-right}},\; \underbrace{2n-i}_{\text{Right-to-left}},\; \underbrace{2n+i}_{\text{Tagger TBRU}}\}$$

Remark: the raison d'être of DRAGNN. This example highlights the primary advantage of our formulation: a TBRU can serve as both an encoder for downstream tasks and as a decoder for its own task simultaneously. This idea will prove particularly powerful when we consider syntactic parsing, which involves compositional structure over the input. For example, consider a no-op TBRU that traverses an input sequence x1, ..., xn in the order determined by a binary parse tree: this transducer can implement a recursive tree-structured network in the style of Tai et al. (2015), which computes representations for sub-phrases in the tree. In contrast, with DRAGNN, we can use the arc-standard parser directly to produce the parse tree as well as encode sub-phrases into representations.

Example 6. Compositional representations from arc-standard dependency parsing. We use the arc-standard transition system (Nivre, 2006) to model dependency trees. The system maintains two data structures as part of the state s: an input pointer and a stack (Figure 3). Trees are built bottom up via three possible attachment decisions. Assume that the stack consists of S = {A, B}, with the next token being C. We use S0 and S1 to refer to the top two tokens on the stack. Then the decisions are defined as:

- Shift: Push the next token on to the stack: S = {A, B, C}, and advance the input pointer.
- Left arc + label: Add an arc A ←(label) B, and remove A from the stack: S = {B}.
- Right arc + label: Add an arc A →(label) B, and remove B from the stack: S = {A}.

For a given parser state si, we compute two types of recurrences:

- r_INPUT(si) = {INPUT(si)}, where INPUT returns the index of the next input token.
- r_STACK(si) = {SUBTREE(si, S0), SUBTREE(si, S1)}, where SUBTREE(s, i) is a function returning the index of the last decision that modified the i'th token.¹

¹ This composition function is similar to that in the constituent parsing SPINN model (Bowman et al., 2016), but with several key differences. Since we use TBRUs, we compose new representations for "Shift" actions as well as reductions, we take inputs from other recurrent models, and we can utilize subtree representations in downstream tasks.

We show an example of the links constructed by these recurrences in Figure 4, and we investigate variants of this model in Section 4. This model is recursively compositional according to the decisions taken by the network: when the TBRU at step i decides to add an arc A → B, the activations hi will be used to represent that new subtree in future decisions.

Figure 4: Detailed schematic for the compositional dependency parser used in our experiments. The first TBRU consumes each input word right-to-left; the second uses the arc-standard transition system. Note that each "Shift" action causes the TBRU1→TBRU2 link to advance. The dynamic recurrent inputs to the given state are highlighted; the stack representations are obtained from the last "Reduce" action to modify each sub-tree.

Example 7. Extractive summarization pipeline with parse representations. To model extractive summarization, we follow Andor et al. (2016) and use a tagger transition system with two tags: "Keep" and "Drop". However, whereas Andor et al. (2016) use discrete features of the parse tree, we can utilize the SUBTREE recurrence function to pull compositional, phrase-based representations of tokens as constructed by the dependency parser. This model is outlined in Figure 2. A full specification is given in the Appendix.

"}, {"section_index": "3", "section_name": "3.2 HOW TO TRAIN A DRAGNN", "section_text": "Given a list of TBRUs, we propose the following learning procedure. We assume training data consists of examples x along with gold decision sequences for one of the TBRUs in the DRAGNN. Note that, at a minimum, we need such data for the final TBRU. Assuming given decisions d1 ... dN from prior components 1 ... T − 1, we define a log-likelihood objective to train the T'th TBRU along its gold decision sequence d*_{N+1}, ..., d*_{N+n}, conditioned on prior decisions:

$$\mathcal{L}(x, d^\star_{N+1:N+n}; \theta) = \sum_{i=1}^{n} \log P\big(d^\star_{N+i} \mid d_{1:N},\, d^\star_{N+1:N+i-1};\, \theta\big) \qquad (1)$$

where θ are the combined parameters across all TBRUs. We observe that this objective is locally normalized (Andor et al., 2016), since we optimize the probabilities of the individual decisions in the gold sequence.
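As a minimal illustration of the objective in Equation (1), the sketch below sums the log-probabilities of the gold decisions under a softmax over each step's allowed decisions. It assumes the per-step score vectors (w_d · h_i) have already been computed by unrolling the TBRU along the gold sequence; the function names are ours, not the paper's.

```python
import numpy as np

def log_softmax(scores):
    # Numerically stable log-softmax over a 1-D score vector.
    z = scores - scores.max()
    return z - np.log(np.exp(z).sum())

def gold_log_likelihood(step_scores, gold):
    """Locally normalized objective of Eq. (1): sum over steps of
    log P(gold decision | history). `step_scores[i]` holds the scores
    w_d . h_i for every allowed decision at step i, obtained by
    unrolling the TBRU along the gold decisions (teacher forcing)."""
    return sum(log_softmax(s)[d] for s, d in zip(step_scores, gold))

# Toy example: 4 steps, 3 allowed decisions per step.
rng = np.random.default_rng(1)
scores = [rng.normal(size=3) for _ in range(4)]
gold = [0, 2, 1, 1]
loss = -gold_log_likelihood(scores, gold)   # minimize the negative log-likelihood
print(loss)
```

Because every term is normalized over the decisions allowed at that step only, the objective is local: no global normalization over whole structures is required, which is what makes the unrolled training described next straightforward.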
The remaining question is where the decisions d1 ... dN come from. There are two options: they can either come as part of the gold annotation (e.g. if we have joint tagging and parsing data), or they can be predicted by unrolling the previous components (e.g. when training a stacked extractive summarization model, the parse trees will be predicted by the previously trained parser TBRU).

When training a given TBRU, we unroll an entire input sequence and then use backpropagation through structure (Goller & Küchler, 1996) to optimize (1). To train the whole system on a set of C datasets, we use a similar strategy to Dong et al. (2015) and Luong et al. (2016): we sample a target task c, 1 ≤ c ≤ C, from a pre-defined ratio, and take a stochastic optimization step on the objective of that task's TBRU. In practice, task sampling is usually preceded by a deterministic number of pre-training steps, allowing us, for example, to schedule a certain number of tagger training steps before running any parser training steps.

"}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we evaluate three aspects of our approach on two NLP tasks: English dependency parsing and extractive sentence summarization. For English dependency parsing, we primarily use the Treebank Union setup from Andor et al. (2016). By evaluating on both the news and questions domains, we can separately evaluate how the model handles naturally longer and shorter form text. On the Treebank Union setup there are 93 possible actions considering all arc-label combinations. For extractive sentence summarization, we use the dataset of Filippova & Altun (2013), where a large news collection is used to heuristically generate compression instances. The final corpus contains about 2.3M compression instances, but since we evaluated multiple tasks using this data, we sub-sampled the training set to be comparably sized to the parsing data (~60K training sentences). The test set contains 160K examples. We implement our method in TensorFlow, using mini-batches of size 4 and following the averaged momentum training and hyperparameter tuning procedure of Weiss et al. (2015).

We explore the impact of different types of recurrences on dependency parsing in Table 1. In this setup, we used relatively small models: single-layer LSTMs with 256 hidden units, taking 32-dimensional word or output symbol embeddings as input to each cell. In each case, the parsing TBRU takes input from a right-to-left shift-only TBRU. Under these settings, the pure encoder/decoder seq2seq model simply does not have the capacity to parse newswire text with any degree of accuracy, but the TBRU-based approach is nearly state-of-the-art at the same exact computational cost. As a point of comparison and an alternative to using input pointers, we also implemented an attention mechanism within DRAGNN. We used the dot-product formulation from Parikh et al. (2016), where r(si) in the parser takes in all of the shift-only TBRU's hidden states and RNN aggregates over them.

Table 1: Dynamic links enable much more accurate, efficient linear-time parsing models on the Treebank Union dev set. We vary the recurrences r to explore utilizing explicit structure in the parsing TBRU. Utilizing the explicit INPUT(si) pointer is more effective and more efficient than a quadratic attention mechanism. Incorporating the explicit stack structure via recurrent links further improves performance.

Parsing TBRU recurrence, r(si) ⊆ {1, ..., n+i}             Parsing Accuracy (%)
Input links   Recurrent edges                              News   Questions   Runtime
{n}           {n+i−1}                                      27.3   70.1        O(n)
{n}           {SUBTREE(si, S0), SUBTREE(si, S1)}           36.0   75.6        O(n)
Attention     {n+i−1}                                      76.1   84.8        O(n²)
Attention     {SUBTREE(si, S0), SUBTREE(si, S1)}           89.0   91.9        O(n²)
INPUT(si)     {n+i−1}                                      87.1   89.7        O(n)
INPUT(si)     {SUBTREE(si, S0), SUBTREE(si, S1)}           90.9   92.1        O(n)

We evaluate our approach on the summarization task in Table 2. We compare two single-task LSTM tagging baselines against two multi-task approaches: an adaptation of Luong et al. (2016) and the stack-propagation idea of Zhang & Weiss (2016). In both multi-task setups, we use a right-to-left shift-only TBRU to encode the input, and connect it to both our compositional arc-standard dependency parser and the "Keep/Drop" summarization tagging model.

In both setups we do not follow seq2seq, but utilize the INPUT function to connect output decisions directly to input token representations. However, in the stack-prop case, we use the SUBTREE function to connect the tagging TBRU to the parser TBRU's phrase representations directly (Figure 2). We find that allowing the compressor to directly use the parser's phrase representations significantly improves the outcome of the multi-task learning setup. In both setups, we pretrained the parsing model for 400K steps and tuned the subsequent ratio of parser/tagger update steps using a development set.

Table 2: Single- vs. multi-task learning with DRAGNN on extractive summarization. "A" is full-sentence accuracy of the extraction model, "F1" is per-token F1 score, and "LAS" is labeled parsing accuracy on the Treebank Union News dev set. Both multi-task models that utilize the parsing data outperform the single-task approach, but the model that uses parses as an intermediate representation in the vein of Zhang & Weiss (2016) (Figure 2) makes better use of the data. Note that the locally normalized model from Andor et al. (2016) obtains 30.50% accuracy and 78.72% F1 on the test set when trained on 100× more data.

Model Structure                                            Multi-task?   A (%)    F1 (%)   LAS (%)
Right-to-left → Summarize                                  N             28.93    79.75    -
Right-to-left → Left-to-right → Summarize                  N             29.51    80.03    -
Right-to-left → Parse → Summarize (Luong et al., 2016)     Y             30.07    80.31    89.42
Right-to-left → Parse → Summarize (Zhang & Weiss, 2016)    Y             30.56    80.74    89.13

"}, {"section_index": "5", "section_name": "4.3 DEEP STACKED BI-DIRECTIONAL PARSING", "section_text": "Here we propose a continuous version of the bi-directional parsing model of Attardi & Dell'Orletta (2009): first, the sentence is parsed in the left-to-right order as usual; then a right-to-left transition system analyzes the sentence in reverse order using additional features extracted from the left-to-right parser. In our version, we connect the right-to-left parsing TBRU directly to the phrase representations of the left-to-right parsing TBRU, again using the SUBTREE function. Our parser has the significant advantage that the two directions of parsing can affect each other during training. During each training step the right-to-left parser uses representations obtained using the predictions of the left-to-right parser. Thus, the right-to-left parser can backpropagate error signals through the left-to-right parser and reduce cascading errors caused by the pipeline.

Table 3: Deep stacked parsing compared to state-of-the-art on PTB. * indicates that additional resources beyond the Penn Treebank are used. Our model is roughly comparable to an ensemble of multiple Stack-LSTM models, and the most accurate without any additional resources.

                                                               Dev              Test
Model                                                          UAS     LAS      UAS     LAS
POS → Right-to-left → Parse                                    93.08   90.89    92.8    90.8
POS → Right-to-left → Left-to-right → Parse → Rev. Parse       94.01   91.93    93.72   91.83
(Above, but with pretrained word2vec)*                         94.07   92.06    94.09   92.12
Bi-LSTM, graph-based (Kiperwasser & Goldberg, 2016)            -       -        93.10   91.00
Stack LSTM (Dyer et al., 2015)                                 -       -        93.10   90.90
20 Stack LSTMs (Kuncoro et al., 2016)*                         -       -        94.51   92.57
Globally normalized, transition-based (Andor et al., 2016)*    -       -        94.61   92.79

Our final model uses 5 TBRU units. Inspired by Zhang & Weiss (2016), a left-to-right POS tagging TBRU provides the first layer of representations. Next, we run two shift-only TBRUs, one in each direction, to provide representations to the parsers. Finally, we connect the left-to-right parser to the right-to-left parser using links defined via the SUBTREE function. The result (Table 3) is a state-of-the-art dependency parser, yielding the highest published accuracy for a model trained solely on the Penn Treebank with no additional resources.
"}, {"section_index": "6", "section_name": "5 CONCLUSIONS", "section_text": "We presented a compact, modular framework for describing recurrent neural architectures. We evaluated our dynamically structured model and found it to be significantly more efficient and accurate than attention mechanisms for dependency parsing and extractive sentence summarization in both single- and multi-task setups. While we focused primarily on syntactic parsing, the framework provides a general means of sharing representations between tasks. There remains low-hanging fruit still to be explored: in particular, our approach can be globally normalized with multiple hypotheses in the intermediate structure. We also plan to push the limits of multi-task learning by combining many different NLP tasks, such as translation, summarization, tagging problems, and reasoning tasks, into a single model.

"}, {"section_index": "7", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Kuzman Ganchev, Michael Collins, Dipanjan Das, Slav Petrov, Aliaksei Severyn, Chris Dyer, and Noah Smith for their useful feedback and discussion while preparing this draft.

Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 2442-2452, 2016.

Giuseppe Attardi and Felice Dell'Orletta. Reverse revision and linear tree combination for dependency parsing. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pp. 261-264. Association for Computational Linguistics, 2009.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Samuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. ACL, 2016.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. EMNLP, 2014.

Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing, pp. 1723-1732, 2015.

Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. Transition-based dependency parsing with stack long short-term memory. ACL, pp. 334-343, 2015.

Katja Filippova and Yasemin Altun. Overcoming the lack of parallel data in sentence compression. In EMNLP, pp. 1481-1491, 2013.

Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. EMNLP, 2013.

Eliyahu Kiperwasser and Yoav Goldberg. Simple and accurate dependency parsing using bidirectional LSTM feature representations. ACL, 2016.

Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A Smith. Distilling an ensemble of greedy dependency parsers into one MST parser. EMNLP, 2016.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. Neural architectures for named entity recognition. NAACL-HLT, 2016.

Jiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eduard Hovy. When are tree structures necessary for deep learning of representations? EMNLP, 2015.

Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. ICLR, 2016.

Joakim Nivre. Inductive dependency parsing.
Springer, 2006.

Ankur P Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. EMNLP, 2016.

Richard Socher, Christopher D. Manning, and Andrew Y. Ng. Learning continuous phrase representations and syntactic parsing with recursive neural networks. In NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop, 2010.

Richard Socher, Eric H Huang, Jeffrey Pennington, Christopher D Manning, and Andrew Y Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pp. 801-809, 2011.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. ACL, 2015.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Yuan Zhang and David Weiss. Stack-propagation: Improved representation learning for syntax. In Proc. ACL, 2016."}]
HJIY0E9ge
[{"section_index": "0", "section_name": "A SIMPLE YET EFFECTIVE METHOD TO PRUNE DENSE LAYERS OF NEURAL NETWORKS", "section_text": "{mb2,paris,rhc}@illinois.edu.edu\nNeural networks are usually over-parameterized with significant redundancy in the number of required neurons which results in unnecessary computation and memory usage at inference time. One common approach to address this issue is to prune these big networks by removing extra neurons and parameters while maintaining the accuracy. In this paper, we propose NoiseOut, a fully automated pruning algorithm based on the correlation between activations of neurons in the hidden layers. We prove that adding additional output neurons with entirely random targets results into a higher correlation between neurons which makes pruning by NoiseOut even more efficient. Finally, we test our method on various networks and datasets. These experiments exhibit high pruning rates while maintaining the accuracy of the original network."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Neural networks and deep learning recently achieved state-of-the-art solutions to many problems ir computer vision (Krizhevsky et al.(2012); He et al.(2015)), speech recognition (Graves et al. (2013)) natural language processing (Mikolov et al.(2013)) and reinforcement learning (Silver et al.(2016)) Using large and oversized networks in these tasks is a common practice. Such oversized networks can easily overfit on the training dataset while having poor generalization on the testing data (Sabc & Yu (2008)). A rule of thumb for obtaining useful generalization is to use the smallest number of parameters that can fit the training data (Reed (1993). Unfortunately, this optimal size is noi usually obvious and therefore the size of the neural networks is determined by a few rules-of-thumb (Heaton (2008)) which do not guarantee an optimal size for a given problem. One common approach to overcome overfitting is to choose an over-sized network and then apply regularization (Ng(2004) and Dropout (Srivastava et al.[(2014). However, these techniques do not reduce the number o1 parameters and therefore do not resolve the high demand of resources at test time.\nAnother method is to start with an oversized network and then use pruning algorithms to remove. redundant parameters while maintaining the network's accuracy (Augasta & Kathirvalavakumar. (2013)). These methods need to estimate the upper-bound size of a network, a task for which there are. adequate estimation methods (Xing & Hu (2009). If the size of a neural network is bigger than what is necessary, in theory, it should be possible to remove some of the extra neurons without affecting. its accuracy. To achieve this goal, the pruning algorithm should find neurons which once removed. result into no additional prediction errors. However, this may not be as easy as it sounds since all the. neurons contribute to the final prediction and removing them usually leads to error..\nIt is easy to demonstrate this problem by fitting an oversized network on a toy dataset. Figure1 shows a two-dimensional toy dataset which contains two linearly separable classes. Hence, only one hidden neuron in a two-layer perceptron should be enough to classify this data and any network with. more than one neuron (such as the network in Figure2fa) is an oversized network and can be pruned However, there is no guarantee that removing one of the hidden neurons will maintain the network's. performance. 
As shown in the example in Figure 1, removing either of the hidden neurons results in a more compact, but under-performing network. Therefore, a more careful process is required to prune neural networks without accuracy loss.

Figure 1: Effect of pruning on accuracy. The bold line represents the discriminator learned by a 2-2-1 MLP (Figure 2a) on a toy dataset. The dashed and dotted lines show the results after pruning one of the hidden neurons. As can be seen, removing a hidden neuron results in an accuracy drop.

Our goal in this paper is two-fold. First, we introduce NoiseOut, a pruning method based on the correlation between activations of the neurons. Second, we propose an approach which enforces a higher correlation between activations of neurons. Since the effectiveness of NoiseOut hinges on high correlations between neuron activations, the combination of these two methods facilitates more aggressive pruning.

"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Optimal Brain Damage (LeCun et al. (1989)) and Optimal Brain Surgeon (Hassibi & Stork (1993)) prune networks based on the Hessian of the loss function. It is shown that such pruning is more effective and more accurate than earlier magnitude-based pruning such as weight decay (Hanson & Pratt (1989)). However, the necessary second-order derivatives require additional computational resources.

Recently, replacing the fully connected layers with other types of layers has been utilized to reduce the number of parameters in a neural network. Deep fried convnets (Yang et al. (2015)) replace these layers with kernel methods, while the GoogLeNet (Szegedy et al. (2015)) and Network in Network (Lin et al. (2013)) architectures replace them with global average pooling. Alternatively, Han et al. (2015) proposed a pruning method which learns the important connections between neurons, prunes the unimportant connections, and then retrains the remaining sparse network.

Besides pruning, other approaches have been proposed to reduce the computation and memory requirements of neural networks. HashNets (Chen et al. (2015)) reduce the storage requirement of neural networks by randomly grouping weights into hash buckets. These techniques can be combined with pruning algorithms to achieve even better performance. As an example, Deep Compression (Han et al. (2016)) proposed a three-stage pipeline of pruning, trained quantization, and Huffman coding to reduce the storage requirement of neural networks.

In this section, we describe the details of the proposed method, called NoiseOut. First, we show how this method can prune a single neuron, and then how it can prune a full network, one neuron at a time.

Figure 2: a) a simple 2-2-1 MLP; b) the same network with one additional noise output. All the hidden units have a linear activation while the output neurons use sigmoid as activation function. The gray neuron is a noise output which changes its target in each iteration.

The key idea in NoiseOut is to remove one of two neurons with strongly correlated activations. The main rationale behind this pruning is to keep the signals inside the network as close to the original network as possible. To demonstrate this, assume there exist u, v, l such that |ρ(h_u^{(l)}, h_v^{(l)})| = 1, which implies h_u^{(l)} = α h_v^{(l)} + β, where h_i^{(l)} is the activation of the i-th neuron in the l-th layer. By definition, for each neuron k in the next layer:

$$h^{(l+1)}_k = \sigma\Big(\sum_{i} w^{(l)}_{ik} h^{(l)}_i\Big) = \sigma\Big(\sum_{i \neq u,v} w^{(l)}_{ik} h^{(l)}_i + w^{(l)}_{uk} h^{(l)}_u + w^{(l)}_{vk} h^{(l)}_v\Big) = \sigma\Big(\sum_{i \neq u,v} w^{(l)}_{ik} h^{(l)}_i + \big(w^{(l)}_{vk} + \alpha\, w^{(l)}_{uk}\big) h^{(l)}_v + \beta\, w^{(l)}_{uk}\Big)$$

This means that neuron u can be removed without affecting any of the neurons in the next layer, simply by adjusting the weights of v and the neurons' biases. Note that max |ρ(h_u^{(l)}, h_v^{(l)})| = 1 is an ideal case. In this ideal scenario, removing one of the neurons results in no change in accuracy, since the final output of the network stays the same. In non-ideal cases, when the most correlated neurons are not strongly correlated, merging them into one neuron may alter the accuracy. However, continuing the training after the merge may compensate for this loss. If this does not happen, it means that the removed neuron was necessary to achieve the target accuracy and the algorithm cannot compress the network any further without accuracy degradation.

NoiseOut follows the same logic to prune a single neuron using the following steps:

1. For each i, j, l calculate ρ(h_i^{(l)}, h_j^{(l)}).
2. Find u, v, l = argmax_{u,v,l, u≠v} |ρ(h_u^{(l)}, h_v^{(l)})|.
3. Calculate α, β := argmin_{α,β} Σ (h_u^{(l)} − (α h_v^{(l)} + β))².
4. Remove neuron u in layer l.
5. For each neuron k in layer l + 1: update the weight w_{vk}^{(l)} ← w_{vk}^{(l)} + α w_{uk}^{(l)}, and update the bias b_k^{(l+1)} ← b_k^{(l+1)} + β w_{uk}^{(l)}.

The key element for successful pruning of neural networks using NoiseOut is a strong correlation between the activations of the neurons. Essentially, a higher correlation between these activations means more efficient pruning. However, there is no guarantee that back-propagation results in correlated activations in a hidden layer. In this section, we propose a method to encourage higher correlation by adding additional output nodes, called noise outputs. We show that adding noise outputs to the output layer intensifies the correlation between activations in the hidden layers, which subsequently makes the pruning task more effective.

"}, {"section_index": "4", "section_name": "3.2.1 EFFECT OF ADDING NOISE OUTPUTS TO THE NETWORK", "section_text": "To demonstrate the effect of adding noise outputs to the network, let us reconsider the toy example described previously in Figure 1, this time with some additional noise outputs in the output layer, as shown in Figure 2b. The result of training this network is shown in Figure 3. As seen in this figure, the activations of the two hidden neurons have become highly correlated, and each hidden unit converged to the optimal discriminant by itself. This means that either of the extra neurons in the hidden layer can be removed without loss of accuracy.

This claim can be proved formally as well. The key to this proof is that neural networks are deterministic at inference time. In other words, the network generates a constant output for a constant input. For a noise output, since the target is independent of the input, the training algorithm finds an optimal constant value that minimizes the expected error for every input. This objective can be presented as:

$$\min_W \sum_{i=0}^{m} C\big(f(X;W), [Y, \tilde{Y}_i]\big) = \min_W \sum_{i=0}^{m} \Big( C\big(f_y(X;W), Y\big) + C\big(f_{\tilde{y}}(X;W), \tilde{Y}_i\big) \Big) \qquad (2)$$

where W are the adjustable parameters (i.e. weights), X is the input, Y is the target, Ỹ_i is the target of the noise outputs at iteration i, C is the cost function, and m is the number of iterations. f_y(X;W) and f_ỹ(X;W) are the outputs of the neural network at the original and noise outputs respectively, at iteration i. Note that Ỹ_i changes in each iteration according to a random distribution P_Ỹ.
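Before analysing this objective further, the sketch below shows what optimizing Equation 2 looks like in practice on a tiny network in the spirit of Figure 2b. It is a simplified stand-in, not the authors' code: the hidden layer is linear (as in Figure 2), the outputs are linear rather than sigmoid for brevity, optimization is plain full-batch gradient descent on MSE, and all names and hyperparameter values are illustrative. Fresh Gaussian targets are drawn for the noise output at every iteration, and the script prints the absolute correlation of the two hidden units, which the analysis in this section predicts should approach 1.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m_iters, lr = 256, 3000, 0.05
X = rng.normal(size=(n, 2))
Y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy linearly separable labels

W1 = rng.normal(scale=0.5, size=(2, 2))   # 2-2 hidden layer (linear activations)
W2 = rng.normal(scale=0.5, size=(2, 2))   # column 0: real output, column 1: noise output

for it in range(m_iters):
    Y_noise = rng.normal(0.1, 0.4, size=(n, 1))   # fresh targets from P_Y~ each iteration
    T = np.hstack([Y, Y_noise])                    # combined targets [Y, Y~_i] as in Eq. (2)
    H = X @ W1
    out = H @ W2
    G = 2.0 * (out - T) / n                        # d(MSE)/d(out)
    gW2 = H.T @ G                                  # gradients computed before any update
    gW1 = X.T @ (G @ W2.T)
    W2 -= lr * gW2
    W1 -= lr * gW1

h = X @ W1
rho = np.corrcoef(h[:, 0], h[:, 1])[0, 1]
print(f"|correlation| between hidden activations: {abs(rho):.3f}")
```

The only change relative to ordinary training is the extra output column and its per-iteration random targets; nothing about the pruning machinery is involved yet.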
The first part of Equation 2 represents the common objective of training a neural network, while the second part is added because of the noise outputs. It is possible to adjust the effect of the noise outputs based on P_Ỹ, for instance by adding more noise outputs (in the case of a binomial distribution) or by adjusting the variance (for a Gaussian distribution). Another way of making this adjustment is to introduce a multiplier for the second part of the cost.

Although Ỹ_i changes in each iteration, the constant value that the network infers for any given input will be the same, say θ, due to the independence of P_Ỹ from X:

$$f_{\tilde{y}}(X;W) = J_{1\times n}\,\theta$$

where J_{1×n} is the matrix of ones of size 1 × n (n is the number of samples in the dataset). Therefore:

$$\min_W C = \min_W \sum_i C\big(f(X;W), [Y, \tilde{Y}_i]\big) = \min_{W^{(1)},\,W^{(2)}} \big\|f_y(X;W) - Y\big\|^2 + \min_{W^{(1)},\,\widetilde{W}^{(2)}} \big\|f_{\tilde{y}}(X;W) - J_{1\times n}\,\theta\big\|^2 \qquad (3)$$

In this particular case, θ can be calculated using the derivatives of the cost function in Equation 3:

$$\frac{\partial C}{\partial \theta} = \frac{\partial}{\partial \theta} \sum_i \big(\theta - \tilde{Y}_i\big)^2 = 2 \sum_i \big(\theta - \tilde{Y}_i\big) = 0 \;\;\Longrightarrow\;\; \theta = \mathbb{E}\big[f_{\tilde{y}}(X;W)\big] = \mathbb{E}[P_{\tilde{Y}}]$$

This means that in this particular network, with MSE as the cost function, the final error will be minimized when the network outputs the expected value of the targets in the noise outputs, E[P_Ỹ], for any given input.

To demonstrate how outputting a constant value affects the weights of a network, let us consider the network in Figure 2b. The actual value of θ can be estimated for any given network architecture. As an example, let W^{(1)}, w^{(2)} and w̃^{(2)} be the weights of the first hidden layer, the output neuron and the noise output of the 2-2-1 MLP network respectively (Figure 2b). Assuming linear activation in the hidden neurons and mean square error (MSE) as the cost function, the output of the noise output will be:

$$f_{\tilde{y}}(X;W) = X\,W^{(1)}\tilde{w}^{(2)} = J_{1\times n}\,\theta \;\;\Longrightarrow\;\; h^{(1)}_2 = \frac{\theta - \tilde{w}^{(2)}_1 h^{(1)}_1}{\tilde{w}^{(2)}_2} \qquad (6)$$

Equation 6 means that the activations of the hidden neurons have a correlation of 1 or −1. For more than two neurons, it can be shown that the output of one neuron will be a linear combination of the other neurons, which means the claim still holds.

Figure 3: Comparison between discriminators learned with and without noise neurons. As can be seen, with noise neurons the activation of the hidden neurons is more correlated: a) discriminators defined by each hidden neuron in Figure 2a; b) discriminators defined by each hidden neuron in Figure 2b; c) final discriminator after pruning one hidden neuron in Figure 2b.

The same results can be achieved empirically. Since the output of the noise outputs will converge to E[P_Ỹ], it may seem that there is no difference between random distributions with the same expected value. Therefore, we tested different random distributions P_Ỹ with the same E[P_Ỹ] on the network shown in Figure 2b. These noise distributions are as follows:

- Gaussian: P_Ỹ(x) = N(0.1, 0.4), a normal distribution with mean 0.1 and standard deviation 0.4. This noise distribution is appropriate for regression tasks with MSE cost.
- Binomial: P_Ỹ(x) = B(1, 0.1), a binomial distribution with 1 trial and success probability 0.1. We chose the binomial distribution since it generates random classification labels and is appropriate for networks that have to produce binary labels.
- Constant: P_Ỹ(x) = δ(x − 0.1). In this case, the target of the noise outputs is the constant value 0.1. This is used as an expected-value "shortcut" so that we can examine a stochastic vs. a deterministic approach.
- No_Noise: no noise output, for comparison.

Figure 4: Effect of noise outputs on the correlation of neuron activations in the hidden layers. The top row shows the correlation of the two hidden neurons in the network of Figure 2, while the bottom row is the correlation between two neurons in the first hidden layer of a six-layer MLP (2-2-2-2-2-2-1). The left column represents the mean correlation during training of the network, over 100 runs. In the right column, yellow is the distribution of correlations at the end of training and the small red line shows the median. As can be seen in these graphs, adding noise outputs improves the correlation between neurons in the hidden layer.

As can be seen in the top row of Figure 4, in a regular network with no noise output (shown as No_Noise), the correlation between the outputs of the hidden neurons, ρ(h_1^{(1)}, h_2^{(1)}), does not go higher than 0.8, while in the presence of a noise output this value approaches one rather quickly. This means that the two hidden neurons are outputting just a differently scaled version of the same value for any given input. In this case, NoiseOut easily prunes one of the two neurons.

The same technique can be applied to correlate the activations of hidden neurons in networks with more than two layers. The bottom row of Figure 4 shows the correlation between the activations of the two neurons in the first hidden layer of a six-layer MLP (2-2-2-2-2-2-1). As can be seen in this figure, adding noise outputs helped the neurons achieve higher correlation compared to a network with no noise output. Binomial noise acts chaotically at the beginning, due to the sudden changes of the expected values in the noise outputs, while Gaussian noise improved the correlation the most in these experiments.

Algorithm 1 NoiseOut for pruning hidden layers in neural networks
1:  procedure TRAIN(X, Y)                      ▷ X is input, Y is expected output
2:      W ← initialize_weights()
3:      for each iteration do
4:          Y_N ← generate_random_noise()      ▷ generate random expected values
5:          Y′ ← concatenate(Y, Y_N)
6:          W ← back_prop(X, Y′)
7:          while cost(W) ≤ threshold do
8:              A, B ← find_most_correlated_neurons(W, X)
9:              α, β ← estimate_parameters(W, X, A, B)
10:             W′ ← remove_neuron(W, A)
11:             W′ ← adjust_weights(W′, B, α, β)
12:             W ← W′
13:     return W

Algorithm 1 shows the final NoiseOut algorithm. For the sake of readability, this algorithm has been shown for networks with only one hidden layer.
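The following is a compact NumPy rendering of Algorithm 1 under the same simplifying assumptions as above (a single hidden layer with linear activations, MSE cost, full-batch gradient descent). It is a sketch under stated assumptions rather than the authors' implementation: `prune_once` and `noiseout_train` are our names, and the hyperparameters are arbitrary. The merge in `prune_once` follows steps 1-5 of Section 3.1 exactly: find the most correlated pair, least-squares fit h_u ≈ α h_v + β, fold u's outgoing weights into v and the biases, then drop u.

```python
import numpy as np

def prune_once(X, W1, W2, b2):
    """One NoiseOut pruning step on a single linear hidden layer.
    Shapes: W1 (d_in, n_hidden), W2 (n_hidden, d_out), b2 (d_out,)."""
    H = X @ W1                                               # hidden activations
    C = np.corrcoef(H, rowvar=False)
    np.fill_diagonal(C, 0.0)                                 # step 1: pairwise correlations
    u, v = np.unravel_index(np.argmax(np.abs(C)), C.shape)   # step 2: best pair (u, v)
    alpha, beta = np.polyfit(H[:, v], H[:, u], deg=1)        # step 3: h_u ~ alpha*h_v + beta
    W2 = W2.copy()
    b2 = b2 + beta * W2[u, :]                                # step 5: fold bias term
    W2[v, :] += alpha * W2[u, :]                             # step 5: fold weights into v
    keep = np.arange(W1.shape[1]) != u                       # step 4: remove neuron u
    return W1[:, keep], W2[keep, :], b2

def noiseout_train(X, Y, n_hidden=16, iters=3000, lr=0.05, threshold=1e-2, seed=0):
    """Sketch of Algorithm 1: train on [Y, Y_noise] with fresh noise targets
    every iteration, pruning as long as the real-task cost stays under the
    threshold and more than one hidden neuron remains."""
    rng = np.random.default_rng(seed)
    n, d_in = X.shape
    d_out = Y.shape[1]
    W1 = rng.normal(scale=0.3, size=(d_in, n_hidden))
    W2 = rng.normal(scale=0.3, size=(n_hidden, d_out + 1))   # one extra noise output
    b2 = np.zeros(d_out + 1)

    def cost():
        return np.mean(((X @ W1 @ W2 + b2)[:, :d_out] - Y) ** 2)

    for _ in range(iters):
        T = np.hstack([Y, rng.normal(0.1, 0.4, size=(n, 1))])   # targets [Y, Y~_i]
        H = X @ W1
        out = H @ W2 + b2
        G = 2.0 * (out - T) / n
        gW2, gb2, gW1 = H.T @ G, G.sum(axis=0), X.T @ (G @ W2.T)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1
        while cost() <= threshold and W1.shape[1] > 1:
            W1, W2, b2 = prune_once(X, W1, W2, b2)
    return W1, W2, b2
```

A real implementation would replace the toy gradient loop with the network's usual optimizer, compute correlations on a held-out batch, and apply `prune_once` to each hidden layer independently, as described next.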
But the same algorithm can be applied to networks with more than one hidden layer by performing the same pruning on all the hidden layers independently. It can also be applied to convolutional neural networks that use dense layers, in which we often see over 90% of the network parameters (Cheng et al. (2015)).

The algorithm simply repeats the process of removing a single neuron, as described in the previous section. The pruning ends when the accuracy of the network drops below a given threshold. Note that the pruning process happens while training.

"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "To illustrate the generality of our method, we test it on a core set of common network architectures, including fully connected networks and convolutional neural networks with dense layers. In all of these experiments, the only stop criterion is the accuracy decay of the model. We set the threshold for this criterion to match the original accuracy; therefore all the compressed networks have the same accuracy as the original network. For each experiment, different random distributions have been used for P_Ỹ to demonstrate the difference in practice.

Tables 1 and 2 show the results of pruning Lenet-300-100 and Lenet-5 (LeCun et al. (1998)) on the MNIST dataset. Lenet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, while Lenet-5 is a convolutional network with two convolutional layers and one dense layer. These networks achieve 3.05% and 0.95% error rates on MNIST respectively (LeCun et al. (1998)). Note that in Lenet-5 over 98% of the parameters are in the dense layer, and pruning them can decrease the model size significantly.

Table 1: Results of pruning Lenet-300-100 on MNIST. The accuracy of all the compressed networks is the same as that of the original network.

Method        Noise Outputs   Layer 1 Neurons   Layer 2 Neurons   Parameters   Removed Parameters   Compression Rate
Ground Truth  -               300               100               266610       -                    -
No_Noise      -               23                14                15989        94.00%               16.67
Gaussian      512             20                9                 15927        94.02%               16.73
Constant      512             20                7                 15105        94.33%               17.65
Binomial      512             19                6                 11225        95.78%               23.75
No_Noise      -               13                12                10503        96.06%               25.38
Gaussian      1024            16                7                 12759        95.21%               20.89
Constant      1024            18                7                 14343        94.62%               18.58
Binomial      1024            19                7                 15135        94.32%               17.61

Table 2: Pruning Lenet-5 on MNIST. The accuracy of all the compressed networks is the same as that of the original network.

Method        Noise Outputs   Dense Layer Neurons   Parameters   Removed Parameters   Compression Rate
Ground Truth  -               512                   605546       -                    -
No_Noise      -               313                   374109       38.21%               1.61
Gaussian      512             3                     13579        97.75%               44.59
Constant      512             33                    48469        91.99%               12.49
Binomial      512             26                    40328        93.34%               15.01

As can be seen in these tables, NoiseOut removed over 95% of the parameters with no accuracy degradation. Astonishingly, the pruned Lenet-5 can achieve a 0.95% error rate with only 3 neurons in the hidden layer, which reduces the total number of weights in Lenet-5 by a factor of 44. Figure 6 demonstrates the output of these 3 neurons. This graph was generated by plotting the activations of the hidden layer neurons for 1000 examples randomly selected from the MNIST dataset, with the data sorted by target class. As can be seen in this figure, the three neurons in the hidden layer efficiently encode the output of the convolutional layers into the expected ten classes. These values can then be utilized by the softmax layer to perform the final classification.

To test the effect of pruning on deeper architectures, we prune the network described in Table 4 on the SVHN dataset. This model, which has over 1 million parameters, achieves 93.39% and 93.84% accuracy on the training set and test set respectively. As can be seen in Table 3, NoiseOut pruned more than 85% of the parameters from the base model while maintaining the accuracy.

Table 3: Pruning the reference network in Table 4 on the SVHN dataset.

Method        Dense Layer Neurons   Parameters   Removed Parameters
Ground Truth  1024                  1236250      -
No_Noise      132                   313030       74.67%
Gaussian      4                     180550       85.39%
Constant      25                    202285       83.63%
Binomial      17                    194005       84.30%
Table 4: Base model architecture for SVHN with 1236250 parameters.

Layer     Size   Filter   Parameters
conv1     32     3×3      896
conv2     32     3×3      9248
conv3     32     3×3      9248
pool1     -      2×2      -
conv4     48     3×3      13872
conv5     48     3×3      20784
conv6     48     3×3      20784
pool2     -      2×2      -
conv7     64     3×3      27712
conv8     64     3×3      36928
conv9     64     3×3      36928
pool3     -      2×2      -
dense     1024   -        1049600
softmax   10     -        10250

Figure 5: Pruning Lenet-300-100 and Lenet-5 on the MNIST dataset with various accuracy thresholds. The x axis represents the total number of parameters in the pruned network (including the weights in the convolutional layers), while the y axis shows the accuracy of the model on the test and training datasets.

"}, {"section_index": "6", "section_name": "4.2 EFFECT OF NOISEOUT ON TEST ACCURACY", "section_text": "To explore the effect of NoiseOut on the test accuracy, we pruned Lenet-300-100 and Lenet-5 on MNIST with multiple accuracy thresholds, using a Gaussian distribution as the target for the noise outputs. In each of these experiments, we measured both the training and test accuracy. As expected, the results, shown in Figure 5, indicate that lower accuracy thresholds result in more pruned parameters. However, the gap between training and testing accuracy stays the same. This shows that pruning the network using NoiseOut does not lead to overfitting.

"}, {"section_index": "7", "section_name": "4.3 RELATION TO DROPOUT AND REGULARIZATION", "section_text": "The key point in successful pruning with NoiseOut is a higher correlation between neurons. This goal might seem to be in contradiction with techniques designed to avoid overfitting, such as Dropout and regularization. To investigate this, we pruned Lenet-5 in the presence and absence of these features and show the results in Figure 7. As can be seen in this figure, Dropout helps the pruning process significantly, while L2-regularization causes more variance. It seems that preventing co-adaptation of neurons using Dropout also intensifies the correlation between them, which helps NoiseOut remove even more redundant neurons without accuracy loss.

Figure 6: Activation of neurons in a pruned Lenet-5 with only 3 neurons left in the dense layer. The x-axis has been populated by 100 random samples from each class of MNIST, sorted by class; the y-axis shows the neuron ID. Note that tanh is the activation function in the hidden layer.

Figure 7: Effect of Dropout and L2-regularization on NoiseOut. The y-axis represents the number of remaining neurons in the dense layer. Note that more than one neuron can be removed in each epoch. In each curve, the bold line is the median of 10 runs and the colored background shows the standard deviation.

"}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "In this paper, we have presented NoiseOut, a simple but effective pruning method to reduce the number of parameters in the dense layers of neural networks by removing neurons with correlated activations during training. We showed how adding noise outputs to the network can increase the correlation between neurons in the hidden layer and hence result in more efficient pruning.
The experimental results on different networks and various datasets validate this approach, achieving significant compression rates without loss of accuracy.

"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "M Gethsiyal Augasta and T Kathirvalavakumar. Pruning algorithms of neural networks: a comparative study. Central European Journal of Computer Science, 3(3):105-115, 2013.

Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.

Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 6645-6649. IEEE, 2013.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2016.

Stephen Jose Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with back-propagation. In Advances in Neural Information Processing Systems, pp. 177-185, 1989.

Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann, 1993.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Jeff Heaton. Introduction to Neural Networks with Java. Heaton Research, Inc., 2008.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 89, 1989.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Andrew Y Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 78. ACM, 2004.

Russell Reed. Pruning algorithms: a survey. Neural Networks, IEEE Transactions on, 4(5):740-747, 1993.

Devin Sabo and Xiao-Hua Yu. A new pruning algorithm for neural network dimension analysis. In Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on, pp. 3313-3318. IEEE, 2008.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Hong-Jie Xing and Bao-Gang Hu. Two-phase construction of multilayer perceptrons using information theory. Neural Networks, IEEE Transactions on, 20(4):715-721, 2009.

Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1476-1483, 2015."}]
r1S083cgx
[{"section_index": "0", "section_name": "SEOUENCE GENERATION WITH A PHYSIOLOGICALLY PLAUSIBLE MODEL OF HANDWRITING AND RECURRENT MIXTURE DENSITY NETWORKS", "section_text": "Daniel Berio*1, Memo Akten*1\nFrederic Fol Ley\nThe purpose of this study is to explore the feasibility and potential benefits of. using a physiological plausible model of handwriting as a feature representation. for sequence generation with recurrent mixture density networks. We build on. recent results in handwriting prediction developed by Graves (2013), and we focus on generating sequences that possess the statistical and dynamic qualities of handwriting and calligraphic art forms. Rather than model raw sequence data, we. first preprocess and reconstruct the input training data with a concise representation given by a motor plan (in the form of a coarse sequence of ballistic' targets) and. corresponding dynamic parameters (which define the velocity and curvature of. the pen-tip trajectory). This representation provides a number of advantages, such. as enabling the system to learn from very few examples by introducing artificial variability in the training data, and mixing of visual and dynamic qualities learned. from different datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recent results (Graves]2013) have demonstrated that, given a sufficiently large training data-set. Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber! 1997) Recurrent Mixture Density Networks (RMDNs) (Schuster1999) are capable of learning and generating convincing synthetic. handwriting sequences. In this study we explore a similar network architecture combined with ar intermediate feature representation, given by the parameters of a physiologically plausible model of handwriting: the Sigma Lognormal model (Plamondon 1995 Plamondon et al. 2014).\nIn the work byGraves(2013) and subsequent derivations, the RMDN operates on raw sequences of points recorded with a digitizing device. In our approach we preprocess the training data using an intermediate representation that describes a form of \"motor program' coupled with a sequence of dynamic parameters that describe the evolution of the pen tip. By doing so, we use a representation that is more concise (i.e. lower in dimensionality), meaningful (i.e. every data point is a high level segment descriptor of the trajectory), and is resolution independent..\nThis project stems from the observation that human handwriting results from the orchestration of a large number of motor and neural subsystems, and is ultimately produced with the execution of complex and skillful motions. As such we seek a representation that abstracts the complex task o trajectory formation from the neural network, which is then rather focused on a higher level task oi movement planning. Note that for the scope of this study, we do not implement text-to-handwriting synthesis (Graves2013), but rather focus on the task of generating sequences that possess the statistical and dynamic qualities of handwriting, which can be expanded to calligraphy, asemic handwriting, drawings and graffiti (Berio & Leymarie[2015] Berio et al.||2016). 
In particular, we focus on two distinct tasks: (1) learning and generating motor plans, and (2) given a motor plan, predicting the corresponding dynamic parameters (which determine the velocity and curvature of the pen-tip trajectory).

The remainder of this paper is organised as follows: in Section 2, after briefly summarising the background context, we briefly describe the Sigma Lognormal model and RMDNs; in Section 3 we present the data preprocessing step and the RMDN models that build up our system; in Section 4 we propose various applications of the system, including learning handwriting representations from small datasets and mixing styles.

Our study is grounded on a number of notions and principles that have been observed in the general study of human movement as well as in the handwriting synthesis/analysis field (known as Graphonomics (Kao et al., 1986)). The speed profile of aiming movements is typically characterised by a "bell shape" that is variably skewed depending on the rapidity of the movement (Lestienne, 1979; Nagasaki, 1989; Plamondon et al., 2013). Complex movements can be described by the superimposition of a discrete number of "ballistic" units of motion, which in turn can each be represented by the classic bell-shaped velocity profile and are often referred to as strokes. A number of methods synthesise handwriting through the temporal superimposition of strokes, the velocity profile of which is modelled with a variety of functions, including sinusoidal functions (Morasso & Mussa Ivaldi, 1982; Maarse, 1987; Rosenbaum et al., 1995), Beta functions (Lee & Cho, 1998; Bezine et al., 2004) and lognormals (Plamondon et al., 2009). In this study we rely on a family of models known as the Kinematic Theory of Rapid Human Movements, which has been developed by Plamondon et al. in an extensive body of work since the 1990s (Plamondon, 1995; Plamondon et al., 2014). Plamondon et al. (2003) show that if we consider a movement to be the result of the parallel and hierarchical interaction of a large number of coupled linear systems, the impulse response of such a system to a centrally generated command asymptotically converges to a lognormal function. This assumption is attractive from a modelling perspective because it abstracts the high complexity of the neuromuscular system in charge of generating movements with a relatively simple mathematical model, which further provides state-of-the-art reconstruction of human velocity data (Rohrer & Hogan, 2006; Plamondon et al., 2013).

A number of methods have used neurally inspired approaches for the task of handwriting trajectory formation (Schomaker, 1992; Bullock et al., 1993; Wada & Kawato, 1993). Similarly to our proposed method, Ltaief et al. (2012) train a neural network on a preprocessed dataset where the raw input data is reconstructed in the form of handwriting model parameters. Nair & Hinton (2005) use a sequence of neural networks to learn the motion of two orthogonal mass-spring systems from images of handwritten digits for classification purposes. With a similar motivation to ours, Plamondon & Privitera (1996) use a Self Organising Map (SOM) to learn a sequence of ballistic targets, which describes a coarse motor plan of handwriting trajectories.
Our method builds in particular on the work of Graves (2013), who describes a system that uses a recurrent mixture density network (RMDN) (Bishop, 1994) extended with an LSTM architecture (Hochreiter & Schmidhuber, 1997) to generate synthetic handwriting in a variety of styles.

On the basis of Plamondon's Kinematic Theory (Plamondon, 1995), the Sigma Lognormal (ΣΛ) model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of a discrete number of strokes. With the assumption that curved handwriting movements are made by rotating the wrist, the curvilinear evolution of strokes is described with a circular arc shape. Each stroke is characterised by a variably asymmetric "bell shape" speed profile, which is described with a (3 parameter) lognormal function. The planar evolution of a trajectory is determined by a sequence of virtual targets: loci (not necessarily located along the generated trajectory) at which each consecutive stroke is aimed. The virtual targets provide a low-level description of the motor plan for the handwriting trajectory. A smooth trajectory is then generated by integrating the velocity of each stroke over time. The trajectory smoothness can be defined by adjusting the activation-time offset of a given stroke with respect to the previous stroke, which is denoted with t0i; a smaller time offset (i.e. a greater overlap between lognormal components) will result in a smoother trajectory (Fig. 1c). The curvature of the trajectory can be varied by adjusting the central angle of each circular arc, which is denoted with θi. The equations and further details for the ΣΛ model can be found in Appendix A.
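To make the stroke-superimposition idea concrete, the following is a minimal, self-contained sketch of a ΣΛ-style trajectory generator. It is not the authors' implementation: the lognormal shape parameters μ and σ are shared across strokes, and all parameter values are illustrative defaults rather than fitted ones. Each pair of consecutive virtual targets defines one circular-arc stroke whose speed follows a lognormal pulse starting at its activation time t0; the velocities of overlapping strokes are summed and integrated into a pen-tip trajectory.

```python
import numpy as np

def lognormal_pulse(t, t0, mu, sigma):
    """Lognormal speed profile Lambda(t; t0, mu, sigma); zero for t <= t0."""
    out = np.zeros_like(t)
    tt = t[t > t0] - t0
    out[t > t0] = np.exp(-(np.log(tt) - mu) ** 2 / (2 * sigma ** 2)) \
                  / (tt * sigma * np.sqrt(2 * np.pi))
    return out

def sigma_lognormal(targets, thetas, t0s, mu=-1.5, sigma=0.3, dt=0.005, T=3.0):
    """Superimpose one circular-arc stroke per pair of consecutive virtual
    targets; theta is the arc's central angle, t0 the activation-time offset."""
    t = np.arange(0.0, T, dt)
    vel = np.zeros((len(t), 2))
    for (p0, p1), theta, t0 in zip(zip(targets[:-1], targets[1:]), thetas, t0s):
        d = np.asarray(p1, float) - np.asarray(p0, float)
        D = np.linalg.norm(d)                        # chord length to the target
        phi0 = np.arctan2(d[1], d[0]) - theta / 2    # arc's initial tangent direction
        lam = lognormal_pulse(t, t0, mu, sigma)
        frac = np.cumsum(lam) * dt                   # fraction of the stroke completed
        phi = phi0 + theta * frac                    # direction sweeps the arc
        # arc length = chord * (theta/2)/sin(theta/2), so a curved stroke still
        # lands on its virtual target
        scale = D if abs(theta) < 1e-6 else D * (theta / 2) / np.sin(theta / 2)
        vel += scale * lam[:, None] * np.stack([np.cos(phi), np.sin(phi)], axis=1)
    pos = np.asarray(targets[0], float) + np.cumsum(vel, axis=0) * dt
    return t, vel, pos

# Four virtual targets -> three overlapping strokes.
targets = [(0, 0), (1, 0.5), (2, 0.0), (3, 0.8)]
t, vel, pos = sigma_lognormal(targets, thetas=[0.6, -0.8, 0.4], t0s=[0.0, 0.25, 0.5])
print(pos[-1])   # the end point approaches the last virtual target
```

Shrinking the gaps between consecutive t0 values increases the overlap between lognormal pulses and smooths the trajectory, while the θ values bend each stroke, mirroring the roles of the dynamic parameters described above.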
We operate on discrete and temporally ordered sequences of planar coordinates. Similarly to Graves (2013), most of our results come from experiments made on the IAM online handwriting database (Marti & Bunke, 2002). However, we have made preliminary experiments with other datasets, such as the Graffiti Analysis Database (Lab, 2009), as well as limited samples collected in our laboratory from a user with a digitiser tablet.

As a first step, we preprocess the raw data and reconstruct it in the form of ΣΛ model parameters (Section 3.1). We then train and evaluate a number of RMDN models for two distinct tasks:

1. Virtual target prediction. We use the V2V-model for this task. Given a sequence of virtual targets, this model predicts the next virtual target.
2. Dynamic parameter prediction. For this task we trained and compared two model architectures. Given a sequence of virtual targets, the task of these models is to predict the corresponding dynamic parameters. The V2D-model is conditioned only on the previous virtual targets, whereas the A2D-model is conditioned on both the previous virtual targets and dynamic parameters.

We then exploit the modularity of this system to conduct various experiments, details of which can be found in Section 4.

A number of methods have been developed by Plamondon et al. in order to reconstruct ΣΛ-model parameters from digitised pen input data (O'Reilly & Plamondon, 2008; Plamondon et al., 2014; Fischer et al., 2014). These methods provide the ideal reconstruction of model parameters, given a high-resolution digitised pen trace. While such methods are superior for handwriting analysis and biometric purposes, we opt for a less precise method (Berio & Leymarie, 2015) that is less sensitive to sampling quality and is aimed at generating virtual target sequences that remain perceptually similar to the original trace. We purposely choose to ignore the original dynamics of the input, and base the method on geometric input data only. This is done in order to work with training sequences that are independent of sampling rate, and in light of future developments in which we intend to extract handwriting traces from bitmaps, inferring causal/dynamic information from a static input as humans are capable of (Edelman & Flash, 1987; Freedberg & Gallese, 2007).

Our method operates on a uniformly sampled input contour, which is then segmented in correspondence with perceptually salient key points: loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015), which gives an initial estimate of each virtual target v_i. We then (i) fit a circular arc to each contour segment in order to estimate the θ_i parameters and (ii) estimate the t_{0i} parameters by analysing the contour curvature in the region of each key point. Finally, (iii) we iteratively adjust the virtual target positions to minimise the error between the original trajectory and the one generated by the corresponding ΣΛ parameters. For further details on the ΣΛ parameter reconstruction method, the reader is referred to Appendix B.

Figure 2: ΣΛ parameter reconstruction. (a) The original and reconstructed trajectories. (b) The reconstructed virtual targets.
Note that the virtual targets define a shape that is perceptually similar to the input. (c) Aligned and scaled speed profiles of the original (gray) and reconstructed (black) trajectories. Although the dynamic information in the input is ignored (due to uniform sampling), the two speed profiles show similarities in number and relative height of peaks.

{"section_index": "3", "section_name": "3.2 DATA AUGMENTATION", "section_text": "We can exploit the ΣΛ parameterisation to generate many variations over a single trajectory, which are visually consistent with the original, and with a variability that is similar to the one that would be seen in multiple instances of handwriting made by the same writer (Fig. 3) (Djioua & Plamondon, 2008a; Fischer et al., 2014; Berio & Leymarie, 2015). Given a dataset of n training samples, we randomly perturb the virtual target positions and dynamic parameters of each sample n_p times, which results in a new augmented dataset of size n + n·n_p, where legibility and trajectory smoothness are maintained across samples. This would not be possible on the raw online dataset, as perturbations of each data-point would eventually result in a noisy trajectory.

The V2V-model is conditioned on a history of virtual targets and, given a new virtual target, it predicts the next virtual target (hence the name V2V). Note that each virtual target includes the corresponding pen state - up (not touching the paper) or down (touching the paper). Repeatedly feeding the predicted virtual target back into the model at every timestep allows the model to synthesise sequences of arbitrary length. The implementation of this model is very similar to the handwriting prediction demonstrated by Graves (2013), although instead of operating directly on the digitised pen positions we operate on the much coarser virtual target sequences which are extracted during the preprocessing step. The details of this model can be found in Appendix C.3.

The goal of these models is to predict the corresponding dynamic parameters (t_{0i}, θ_i) for a given sequence of virtual targets. We train and compare two model architectures for this task. The V2D model is conditioned on the history of virtual targets, and given a new virtual target, this model predicts the corresponding dynamic parameters (t_{0i}, θ_i) for the current stroke (hence the name V2D). Running this model incrementally for every stroke of a given virtual target sequence allows us to predict dynamic parameters for each stroke. The implementation of this model is very similar to the V2V-model, and details can be found in Appendix C.4.

At each timestep, the V2D model outputs and maintains internal memory of a probability distribution for the predicted dynamic parameters. However, the network has no knowledge of the parameters that are actually sampled and used. Hence, dynamic parameters might not be consistent across timesteps. This problem can be overcome by feeding the sampled dynamic parameters back into the model at the next timestep. From a human motor planning perspective this makes sense as, for a given drawing style, when we decide the curvature and smoothness of a stroke we will take into consideration the choices made in previously executed strokes.

The A2D model predicts the corresponding dynamic parameters (t_{0i}, θ_i) for the current stroke conditioned on a history of both virtual targets and dynamic parameters (i.e. all ΣΛ parameters - hence the name A2D). We use this model in a similar way to the V2D model, whereby we run it incrementally for every stroke of a given virtual target sequence. However, internally, at every timestep the predicted dynamic parameters are fed back into the model at the next timestep along with the virtual target from the given sequence. The details of this implementation can be found in Appendix C.5.
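As a rough illustration of the A2D prediction loop described above, the following sketch shows how dynamic parameters might be predicted incrementally while feeding each sampled prediction back in at the next step. The `A2DModel` interface (its `step` method, `initial_state`, and the placeholder first-stroke values) is hypothetical and stands in for the actual implementation.

```python
import numpy as np

def rollout_a2d(model, virtual_targets, rng=None):
    """Predict (t0, theta) for each stroke of a given virtual target sequence.

    `model.step(x, state)` is assumed to take a 5-D input (dx, dy, pen,
    t0_prev, theta_prev) and return (a GMM over (t0, theta), a new state).
    """
    rng = rng or np.random.default_rng()
    state = model.initial_state()
    t0_prev, theta_prev = 0.0, 0.0             # placeholder values for the first stroke
    dynamics = []
    for (dx, dy, pen) in virtual_targets:
        x = np.array([dx, dy, pen, t0_prev, theta_prev])
        gmm, state = model.step(x, state)
        t0_prev, theta_prev = gmm.sample(rng)  # feed the sample back at the next step
        dynamics.append((t0_prev, theta_prev))
    return dynamics
```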
Predicting Virtual Targets. In a first experiment we use the V2V model, trained on the preprocessed IAM dataset, to predict sequences of virtual targets. We prime the network by first feeding it a sequence from the test dataset. This conditions the network to predict sequences that are similar to the prime. We can see from the results (Fig. 4) that the network is indeed able to produce sequences that capture the statistical qualities of the priming sequence, such as overall incline, proportions, and oscillation frequency. On the other hand, we observe that amongst the generated sequences there are often patterns which do not represent recognisable letters or words. This can be explained by the high variability of samples contained in the IAM dataset, and by the fact that our representation is very concise, with each data-point carrying high significance. As a result, the slightest variation in a prediction is likely to cause a large error in the next. To overcome this problem, we train a new model with a dataset augmented with 10× variations as described in Section 3.2. Due to our limited computing resources,¹ we test this method on 1/10th of the dataset, which results in a new dataset with the same size as the original, but with a lower number of handwriting specimens and a number of subtle variations per specimen. With this approach, the network predictions maintain statistical similarity with the priming sequences, and patterns emerge that are more evocative of letters of the alphabet or whole words, with fewer unrecognisable patterns (Fig. 4). To validate this result, we also test the model's performance training it on 1/10th of the dataset, without data augmentation, and the results are clearly inferior to the previous two models. This suggests that the data augmentation step is highly beneficial to the performance of the network.

¹We are thus not able to thoroughly test the large network architectures that would be necessary to train on the whole augmented dataset.

Figure 4: Predicting virtual targets. (a) Virtual targets from the test set (not seen during V2V training) used to prime the V2V models. (b) Sequences generated with the V2V model. (c) Sequences generated with the augmented V2V model. Note that the non-augmented V2V model produces more undesired 'errors'. This is more visibly noticeable when rendered with dynamic parameters (Fig. 6).

Predicting Dynamic Parameters. We first evaluate the performance of both the V2D and A2D models on virtual targets extracted from the test set. Remarkably, although the networks have not been trained on these sequences, both models predict dynamic parameters that result in trajectories that are readable, and are often similar to the target sample. We settle on the A2D model trained on a 3× augmented dataset, which we qualitatively assess to produce the best results (Fig. 5).
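The priming procedure used in these experiments can be sketched as follows (ours, under the same hypothetical model interface as before): the seed sequence is first fed through the network to condition its internal state, after which the model's own samples are fed back in.

```python
import numpy as np

def generate_with_prime(model, prime, n_steps, rng=None):
    """Prime a V2V model on a seed sequence, then sample autoregressively.

    `prime` is a list of (dx, dy, pen) virtual targets; `model.step` is
    assumed to return (a distribution over the next target, a new state).
    """
    rng = rng or np.random.default_rng()
    state = model.initial_state()
    for v in prime:                        # condition the network on the seed
        dist, state = model.step(np.asarray(v), state)
    v = prime[-1]
    out = []
    for _ in range(n_steps):               # feed predictions back in
        dist, state = model.step(np.asarray(v), state)
        v = dist.sample(rng)               # next virtual target + pen state
        out.append(v)
    return out
```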
Figure 5: Dynamic parameter prediction. (a) Virtual targets from samples in the test set (not seen during training). (b) The original trajectories, provided for comparison. (c) Trajectories reconstructed using predicted dynamic parameters. (d) Trajectories reconstructed with random dynamic parameters, provided for comparison.

We then proceed with applying the same A2D model on virtual targets generated by the V2V models primed on the test set. We observe that the predictions on sequences generated with the augmented dataset are highly evocative of handwriting and are clearly different depending on the priming sequence (Fig. 6, c), while the predictions made with the non-augmented dataset are more likely to resemble random scribbles rather than human-readable handwriting (Fig. 6, b). This further confirms the utility of the data augmentation step.

Figure 6: Trajectories reconstructed with dynamic parameters predicted for the generated virtual targets from Fig. 4, using (b) the non-augmented V2V model, (c) the augmented V2V model.

User defined virtual targets. The dynamic parameter prediction models can also be used in combination with user-defined virtual target sequences (Fig. 7). Such a method can be used to quickly and interactively generate handwriting trajectories in a given style, by a simple point-and-click procedure. The style (in terms of curvature and dynamics) of the generated trajectory is determined by the data used to train the A2D model, and by priming the A2D model with different samples, we can apply different styles to the user-defined virtual targets.

Figure 7: Dynamic parameters generated over user-specified virtual targets for the word 'Res', using the A2D model trained on the IAM database.

One shot learning. In a subsequent experiment, we apply the data augmentation method described in Section 3.2 to enable both virtual target and dynamic prediction models to learn from a small dataset of calligraphic samples recorded by a user with a digitiser tablet. We observe that with a low number of augmentations (50×) the models generate quasi-random outputs, and seem to learn only the left-to-right trend of the input. With higher augmentation (700×), the system generates outputs that are consistent to the human eye with the input data (Fig. 8). We also train our models using only a single sample (augmented 7000×) and again observe that the model is able to reproduce novel sequences that are similar to the input sample (Fig. 9). Naturally, the output is a form of recombination of the input, but this is sufficient to synthesise novel outputs that are qualitatively similar to the input. It should be noted that we are judging the performance of the one-shot learned models qualitatively, and we may not be testing the full limits of how well the models are able to generalise. On the other hand, these results, as well as the "style transfer" capabilities exposed in the following section, suggest a certain degree of generalisation.
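The augmentation used throughout these one-shot experiments follows the Section 3.2 recipe: perturb the ΣΛ parameters of each sample rather than the raw points. A minimal sketch, with noise scales chosen purely for illustration (not the values actually used):

```python
import numpy as np

def augment(sample, n_p, pos_sigma=0.05, t0_sigma=0.02, theta_sigma=0.1, rng=None):
    """Generate n_p perturbed copies of one preprocessed sample.

    `sample` holds virtual targets (m, 2) and dynamic parameters (m-1, 2);
    the noise scales are illustrative, not the paper's actual values.
    """
    rng = rng or np.random.default_rng()
    v, dyn = sample["targets"], sample["dynamics"]
    scale = np.max(np.ptp(v, axis=0))          # make position noise size-relative
    out = []
    for _ in range(n_p):
        out.append({
            "targets":  v + rng.normal(0.0, pos_sigma * scale, v.shape),
            "dynamics": dyn + rng.normal(0.0, [t0_sigma, theta_sigma], dyn.shape),
        })
    return out
```

Because the perturbations act on the model parameters, every augmented copy remains a smooth, physiologically plausible trajectory, which is what makes very aggressive augmentation factors viable.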
Figure 8: Training with small (n = 4) datasets. (a) Training set with 4 samples. (b) Output of the networks when using 50× data augmentation. (c) Output of the networks with 700× data augmentation.

Figure 9: Training with single training samples. For each row: (a) Training sample (augmented 700×). (b) Output of combined V2V/A2D models primed on the training sample. (c) Output without priming.

Style Transfer. Here, with a slight abuse of terminology, we utilise the term "style" to refer to the dynamic and geometric features (such as pen-tip acceleration and curvature) that determine the visual qualities of a handwriting trajectory. Given a sequence of virtual targets generated with the V2V model trained on one dataset, we can also predict the corresponding dynamic parameters with the A2D model trained on another. The result is an output that is similar to one dataset in lettering structure, but possesses the fine dynamic and geometric features of the other. If we visually inspect Fig. 10, we can see that both the sequence of virtual targets reconstructed by the dataset preprocessing method, and the trajectory generated over the same sequence of virtual targets with dynamic parameters learned from a different dataset, are both readable. This emphasises the importance of using perceptually salient points along the input for estimating key-points in the dataset preprocessing step (Section 3.1).

Figure 10: Style transfer mixing training sets. (a) The priming sequence from the V2V dataset (IAM). (b) A2D is trained on a different, single user-specified sample. (c) The virtual targets from (a) rendered with the dynamic parameters predicted from the A2D model from (b).

Furthermore, we can perform the same type of operation within a single dataset, by priming the A2D model with the dynamic parameters of a particular training example, while feeding it with the virtual targets of another. To test this we train both (V2V, A2D) models on a corpus containing 5 samples of the same sentence written in different styles and then augmented 1400× (Fig. 11). We envision the utility of such a system in combination with virtual targets interactively specified by a user.

Figure 11: Style transfer using priming. The leftmost column shows the entire training set consisting of 5 user-drawn samples. The top row (slightly greyed out) shows the virtual targets for two of the training examples. Each cell in the table shows the corresponding virtual targets rendered using the dynamic parameters predicted with the A2D model primed with the sample in the corresponding row.
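Schematically, the style transfer above is just a composition of the two models. A sketch under the same hypothetical interfaces used earlier (`generate_with_prime` and `rollout_a2d` are the sketches from the previous sections, not the paper's actual code):

```python
def style_transfer(v2v_model_a, a2d_model_b, prime_a, n_steps, rng=None):
    """Letter structure from dataset A, dynamic qualities from dataset B.

    Virtual targets are generated with a V2V model trained on dataset A,
    then dressed with dynamic parameters from an A2D model trained on B.
    """
    targets = generate_with_prime(v2v_model_a, prime_a, n_steps, rng)
    dynamics = rollout_a2d(a2d_model_b, targets, rng)
    return targets, dynamics   # feed both into the sigma-lognormal synthesiser
```

We have presented a system that is able to learn the parameters for a physiologically plausible model of handwriting from an online dataset. We hypothesise that such a movement-centric approach is advantageous as a feature representation for a number of reasons. Using such a representation provides a performance that is similar to the handwriting prediction demonstrated by Graves (2013) and Ha et al. (2016), with a number of additional benefits.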
These include the ability to: (i) capture both the geometry and dynamics of a hand drawn/written trace with a single representation, (ii) express the variability of different types of movement concisely at the feature level, (iii) demonstrate greater flexibility for procedural manipulations of the output, (iv) mix "styles" (applying curvature and dynamic properties from one example to the motor plan of another), (v) learn a generative model from a small number of samples (n < 5), (vi) generate resolution-independent outputs.

The reported work provides a solid basis for a number of different future research avenues. As a first extension, we plan to implement the label/text input alignment method described in Graves' original work, which should allow us to synthesise readable handwritten text and also to provide a more thorough comparison of the two methods. Our method strongly relies on an accurate reconstruction of the input in the preprocessing step. Improvements should especially target parts of the latter method that depend on user-tuned parameters, such as the identification of salient points along the input (which requires a final peak detection pass), and measuring the sharpness of the input in correspondence with salient points.

James Bergstra and Yoshua Bengio. Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research, 13:281-305, 2012.

Daniel Berio and Frederic Fol Leymarie. Computational Models for the Analysis and Synthesis of Graffiti Tag Strokes. In Paul Rosin (ed.), Computational Aesthetics. Eurographics Association, 2015.

Christopher M Bishop. Mixture density networks. 1994.

J-J Brault and Rejean Plamondon. Segmenting handwritten signatures at their perceptually important points. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9):953-957, 1993.

Joeri De Winter and Johan Wagemans. Perceptual saliency of points along the contour of everyday objects: a large-scale study. Perception & Psychophysics, 70(1):50-64, 2008.

Shimon Edelman and Tamar Flash. A model of handwriting. Biological Cybernetics, 57(1-2):25-36, 1987.

Anath Fischer, Rejean Plamondon, Colin O'Reilly, and Yvon Savaria. Neuromuscular representation and synthetic generation of handwritten whiteboard notes. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pp. 222-227. IEEE, 2014.
Tamar Flash and Amir A Handzel. Affine differential geometry analysis of human arm movements. Biological Cybernetics, 96(6):577-601, 2007.

David Freedberg and Vittorio Gallese. Motion, emotion and empathy in esthetic experience. Trends in Cognitive Sciences, 11(5):197-203, 2007.

Felix A Gers, Jurgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451-2471, 2000.

Martin Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Daniel Bullock, Stephen Grossberg, and Christian Mannes. A neural network model for cursive script production. Biological Cybernetics, 70(1):15-28, 1993.

Sylvain Calinon. A tutorial on task-parameterized movement learning and retrieval. Intelligent Service Robotics, 9(1):1-29, 2016.

Jacob Feldman and Manish Singh. Information along contours and object boundaries. Psychological Review, 112(1):243, 2005.

Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. PhD thesis, 2008.

Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Francesco Lacquaniti, Carlo Terzuolo, and Paolo Viviani. The law relating the kinematic and figural aspects of drawing movements. Acta Psychologica, 54(1):115-130, 1983.

Henry W Lin and Max Tegmark. Why does deep and cheap learning work so well? arXiv preprint arXiv:1608.08225, 2016.

Frans J Maarse. The study of handwriting movement: Peripheral models and signal processing techniques. Lisse [etc.]: Swets & Zeitlinger, 1987.

Vinod Nair and Geoffrey E Hinton. Inferring motor programs from images of handwritten digits. In Advances in Neural Information Processing Systems, pp. 515-522, 2005.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning (ICML), volume 28, pp. 1310-1318, 2013.

Vu Pham, Theodore Bluche, Christopher Kermorvant, and Jerome Louradour. Dropout improves recurrent neural networks for handwriting recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pp. 285-290. IEEE, 2014.

Henry SR Kao, Rumjahn Hoosain, and GP Van Galen. Graphonomics: Contemporary research in handwriting. Elsevier, 1986.

Do-Hoon Lee and Hwan-Gue Cho. The beta-velocity model for simulating handwritten korean scripts. In Electronic Publishing, Artistic Imaging, and Digital Typography, pp. 252-264. Springer, 1998.

F Lestienne. Effects of inertial load and velocity on the braking process of voluntary limb movements. Experimental Brain Research, 35(3):407-418, 1979.

Xiaolin Li, Marc Parizeau, and Rejean Plamondon. Segmentation and reconstruction of on-line handwritten scripts. Pattern Recognition, 31(6):675-684, 1998.

U-V Marti and Horst Bunke. The IAM-database: an English sentence database for offline handwriting recognition. International Journal on Document Analysis and Recognition, 5(1):39-46, 2002.

R. Plamondon et al. Recent developments in the study of rapid human movements with the kinematic theory. Pattern Recognition Letters, 35:225-235, 2014.

Rejean Plamondon and Moussa Djioua. A multi-level representation paradigm for handwriting stroke generation. Human Movement Science, 25(4):586-607, 2006.
Brandon Rohrer and Neville Hogan. Avoiding spurious submovement decompositions II. Biological Cybernetics, 94(5):409-414, 2006.

David A Rosenbaum, Loukia D Loukopoulos, Ruud GJ Meulenbroek, Jonathan Vaughan, and Sascha Engelbrecht. Planning reaches by evaluating stored postures. Psychological Review, 102(1):28, 1995.

Lambert Schomaker. A neural oscillator-network model of temporal pattern generation. Human Movement Science, 11(1):181-192, 1992.

Mike Schuster. Better Generative Models for Sequential Data Problems: Bidirectional Recurrent Mixture Density Networks. In Advances in Neural Information Processing Systems (NIPS), pp. 589-595, 1999.

Ilya Sutskever. Training Recurrent Neural Networks. PhD thesis, University of Toronto, 2013.

P Viviani and C Terzuolo. Trajectory determines movement dynamics. Neuroscience, 7(2):431-437, 1982.

Yasuhiro Wada and Mitsuo Kawato. A neural network model for arm trajectory formation using forward and inverse dynamics models. Neural Networks, 6(7):919-932, 1993.

Rejean Plamondon. A Kinematic Theory of Rapid Human Movements. Part I: Movement Representation and Generation. Biological Cybernetics, 72(4):295-307, 1995.

Rejean Plamondon, Moussa Djioua, and Christian O'Reilly. Recent Developments in the Study of Rapid Human Movements with the Kinematic Theory. Traitement du Signal, 26:377-394, 2009. ISSN 0765-0019.

Rejean Plamondon, Christian O'Reilly, Celine Remi, and Theresa Duval. The Lognormal Handwriter: Learning, Performing and Declining. Frontiers in Psychology, 4(945), 2013. ISSN 1664-1078.

Freek Stulp and Olivier Sigaud. Many regression algorithms, one unified model: A review. Neural Networks, 69:60-79, 2015.

Tamas Varga, Daniel Kilchhofer, and Horst Bunke. Template-based Synthetic Handwriting Generation for the Training of Recognition Systems. In Proc. of 12th Conf. of the International Graphonomics Society, pp. 206-211, 2005.

The Sigma Lognormal model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of lognormal strokes. The corresponding speed profile Λ_i(t) assumes a variably asymmetric "bell shape" which is described with a 3-parameter lognormal function:

$$\Lambda_i(t) = \frac{1}{\sigma_i \sqrt{2\pi}\,(t - t_{0i})} \exp\!\left( -\frac{\left( \ln(t - t_{0i}) - \mu_i \right)^2}{2\sigma_i^2} \right) \tag{1}$$

where t_{0i} defines the activation time of a stroke and the parameters μ_i and σ_i determine the shape of the lognormal function. μ_i is referred to as log-time delay and is biologically interpreted as the rapidity of the neuromuscular system to react to an impulse generated by the central nervous system (Plamondon et al., 2003); σ_i is referred to as log-response time and determines the spread and asymmetry of the lognormal.

The curvilinear evolution of strokes is described with a circular arc shape, which results in

$$\phi_i(t) = \theta_{0i} + \theta_i \left[ 1 + \operatorname{erf}\!\left( \frac{\ln(t - t_{0i}) - \mu_i}{\sigma_i \sqrt{2}} \right) \right] \tag{2}$$

where θ_i is the central angle of the circular arc that defines the shape of the ith stroke.
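A small numerical sketch of equations (1) and (2) (NumPy, ours rather than the authors'); it evaluates one stroke's lognormal speed profile and arc-angle evolution on a time grid:

```python
import numpy as np
from scipy.special import erf

def stroke_profile(t, t0, mu, sigma, theta0, theta):
    """Lognormal speed profile (eq. 1) and angular evolution (eq. 2) of one stroke."""
    tau = np.maximum(t - t0, 1e-9)                      # profile is defined for t > t0
    speed = np.exp(-(np.log(tau) - mu) ** 2 / (2 * sigma ** 2)) \
            / (sigma * np.sqrt(2 * np.pi) * tau)
    angle = theta0 + theta * (1 + erf((np.log(tau) - mu) / (sigma * np.sqrt(2))))
    speed[t <= t0] = 0.0                                # no motion before activation
    return speed, angle

t = np.linspace(0.0, 1.0, 500)
speed, angle = stroke_profile(t, t0=0.05, mu=-1.5, sigma=0.3, theta0=0.0, theta=0.4)
```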
The planar evolution of a trajectory is defined by a sequence of virtual targets {v_i}_{i=1}^{m}, where a trajectory with m virtual targets will be characterised by m − 1 circular arc strokes. A ΣΛ trajectory, parameterised by the virtual target positions, is given by

$$\boldsymbol{\xi}(t) = \mathbf{v}_1 + \sum_{i=1}^{m-1} \left[ \int_0^t \Lambda_i(\tau)\, d\tau \right] \Xi_i(t)\, (\mathbf{v}_{i+1} - \mathbf{v}_i) \tag{3}$$

where Ξ_i(t) rotates and scales the stroke displacement according to the circular arc geometry:

$$\Xi_i(t) = \begin{bmatrix} h(\theta_i)\cos\phi_i(t) & -h(\theta_i)\sin\phi_i(t) \\ h(\theta_i)\sin\phi_i(t) & h(\theta_i)\cos\phi_i(t) \end{bmatrix}, \qquad h(\theta_i) = \begin{cases} \dfrac{\theta_i}{\sin\theta_i} & \text{if } |\sin\theta_i| > \epsilon \\ 1 & \text{otherwise,} \end{cases}$$

which scales the extent of the stroke based on the ratio between the perimeter and the chord length of the circular arc.

Intermediate parameterisation. In order to facilitate the precise specification of timing and profile shape of each stroke, we recur to an intermediate parametrisation that takes advantage of a few known properties of the lognormal (Djioua & Plamondon, 2008b) in order to define each stroke with (i) a time offset Δt_i with respect to the previous stroke, (ii) a stroke duration T_i and (iii) a shape parameter α_i, which defines the skewness of the lognormal. The corresponding ΣΛ parameters {t_{0i}, μ_i, σ_i} can then be computed with:

$$\sigma_i = \ln(1 + \alpha_i), \qquad \mu_i = -\ln\!\left( \frac{e^{3\sigma_i} - e^{-3\sigma_i}}{T_i} \right), \qquad t_{0i} = t_{1i} - e^{\mu_i - 3\sigma_i}, \qquad t_{1i} = t_{1(i-1)} + \Delta t_i, \qquad t_{11} = 0,$$

where t_{1i} is the onset time of the lognormal stroke profile. As α approaches 0, the shape of the lognormal converges to a Gaussian, with mean t_1 + e^{μ−σ²} (the mode of the lognormal) and a standard deviation determined by σ.

Figure 12: Lognormals with varying skewness parameter α and the corresponding values for μ, σ. As α → 0 the lognormal approaches a Gaussian.

{"section_index": "4", "section_name": "RECONSTRUCTING ΣΛ PARAMETERS FROM AN ONLINE DATASET", "section_text": "The ΣΛ parameter reconstruction method operates on an input contour uniformly sampled at a fixed distance which is defined depending on the extent of the input, where we denote the kth sampled point along the input with p[k]. The input contour is then segmented in correspondence with perceptually salient key points, which correspond with loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015). The proposed approach shares strong similarities with previous work done for (i) compressing online handwriting data with a circular-arc based segmentation (Li et al., 1998) and (ii) generating synthetic data for handwriting recognisers (Varga et al., 2005). The parameter reconstruction algorithm can be summarised with the following steps:

1. Find m key-points in the input contour.
2. Fit a circular arc to each contour segment defined between two consecutive key-points (defining individual strokes), and obtain an estimate of each curvature parameter θ_i.
3. For each stroke, compute the corresponding Δt_i parameter by analysing the curvature signal in the region of the corresponding key-point.
4. Define an initial sequence of virtual targets with m positions corresponding with each input key-point. Repeat the following until convergence or until a maximum number of iterations is reached (Berio & Leymarie, 2015):
   - Integrate the ΣΛ trajectory with the current parameter estimate.
   - Identify m key-points in the generated trajectory.
   - Move the virtual target positions to minimise the distance between the key-points of the generated trajectory and the key-points on the input contour.

The details for each step are highlighted in the following paragraphs.

Estimating input key-points. Finding significant curvature extrema (which can be counted as convex and concave features for a closed/solid shape) is an active area of research, as relying on discrete curvature measurements remains challenging. We currently rely on a method described by Feldman & Singh (2005), and supported experimentally by De Winter & Wagemans (2008): first we measure the turning angle at each position of the input p[k] and then compute a smooth version of the signal by convolving it with a Hanning window. We assume that the turning angles have been generated by a random process with a Von Mises distribution with mean at 0 degrees, which corresponds with giving maximum probability to a straight line. We then measure the surprisal (i.e.
the negative logarithm of the probability) for each sample as defined by Feldman & Singh (2005), which, normalised to the [0, 1] range, simplifies to:

$$s[k] = \frac{1 - \cos(\theta[k])}{2}$$

where θ[k] is the (smoothed) turning angle. The first and last sample indices of the surprisal signal, together with its local maxima, result in m key-point indices {ẑ_i}. The corresponding key-points along the input contour are then given by {p[ẑ_i]}.

Figure 13: Input key-point estimation. Left, the (smoothed) turning angle surprisal signal and the key-points estimated with peak detection. Right, the corresponding key-points along the input trajectory.

Estimating stroke curvature parameters. For each section of the input contour defined between two consecutive key-points, we estimate the corresponding stroke curvature parameter θ_i by first computing a least-squares fit of a circle to the contour section. We then compute the internal angle of the arc supported between the two key-points, which is equal to 2θ_i, i.e. two times the corresponding curvature parameter θ_i.

Figure 14: Fitting circles (dotted red) and circular arcs (red) to the input.
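The least-squares circle fit just described can be done in closed form. A sketch (our own, using the standard algebraic fit) that returns the circle and the half-angle θ of the arc spanned between the segment's endpoints, assuming arcs no larger than a semicircle:

```python
import numpy as np

def fit_arc(points):
    """Algebraic least-squares circle fit, then the arc's half-angle theta.

    `points` is an (n, 2) array of contour samples between two key-points.
    Solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([-a / 2.0, -b / 2.0])
    radius = np.sqrt(center @ center - c)
    # internal angle of the arc between the two endpoints is 2 * theta
    chord = np.linalg.norm(points[-1] - points[0])
    theta = np.arcsin(np.clip(chord / (2.0 * radius), -1.0, 1.0))
    return center, radius, theta
```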
Estimating stroke time-overlap parameters. This step is based on the observation that smaller values of t_{0i}, i.e. a greater time overlap between strokes, result in smoother trajectories. On the contrary, a sufficiently large value of t_{0i} will result in a sharp corner in proximity of the corresponding virtual target. We exploit this notion, and compute an estimate of the t_{0i} parameters by examining the sharpness of the input contour in the region of each key-point.

To do so we examine the previously computed turning angle surprisal signal, in which we can observe that sharp corners in the contour correspond with sharper peaks, while smoother corners correspond with smooth peaks with a larger spread. By treating the surprisal signal as a probability density function, we can then use statistical methods to measure the shape of each peak with a mixture of parametric distributions, and examine the shape of each mixture component in order to get an estimate of the corresponding sharpness along the input contour. To do so we employ a variant of Expectation Maximisation (EM) (Dempster et al., 1977) in which we treat the distance along the contour as a random variable weighted by the corresponding signal amplitude normalised to the [0, 1] range. Once the EM algorithm has converged, we treat each mixture component as a radial basis function (RBF) centred at the corresponding mean, and use linear regression as in Radial Basis Function Networks (Stulp & Sigaud, 2015) to fit the mixture parameters to the original signal (Calinon, 2016). Finally, we generate an estimate of sharpness λ_i (bounded in the [0, 1] range) for each key point as a logarithmic function of the mixture parameters and weights. The corresponding Δt_i parameters are then given by

$$\Delta t_i = t_{min} + (t_{max} - t_{min})\, \lambda_i$$

where t_{min} and t_{max} are user-specified parameters that determine the range of the Δt_i estimates.

Figure 15: Sharpness estimation. Left, the GMM components estimated from the turning angle surprisal signal. Right, the ΣΛ trajectory generated before the final iterative adjustment step. Note that at this stage the virtual target positions correspond with the estimated input key-points.

Note that we currently utilise an empirically defined function for this task. In future steps, we intend to learn the mapping between sharpness and mixture component parameters from synthetic samples generated with the ΣΛ model (for which t_{0i}, and consequently λ_i, are known).

Iteratively estimating virtual target positions. The loci along the input contour corresponding with the estimated key-points provide an initial estimate for a sequence of virtual targets, where each virtual target position is given by v_i = p[ẑ_i]. Due to the trajectory-smoothing effect produced by the time overlaps, the initial estimate will result in a generated trajectory that is likely to have a reduced scale with respect to the input we wish to reconstruct (Varga et al., 2005). In order to produce a more accurate reconstruction, we use an iterative method that shifts each virtual target towards a position that will minimise the error between the generated trajectory and the reconstructed input. To do so, we compute an estimate of m output key-points {ξ(z_i)} in the generated trajectory, where z_2, ..., z_{m−1} are the time occurrences at which the influence of one stroke exceeds the previous. These will correspond with salient points along the trajectory (extrema of curvature) and can be easily computed by finding the time occurrence at which two consecutive lognormals intersect. Similarly to the input key-point case, ξ(z_1) and ξ(z_m) respectively denote the first and last points of the generated trajectory. We then iteratively adjust the virtual target positions in order to move each generated key-point ξ(z_i) towards the corresponding input key-point p[ẑ_i] with

$$\mathbf{v}_i \leftarrow \mathbf{v}_i + \left( \mathbf{p}[\hat{z}_i] - \boldsymbol{\xi}(z_i) \right)$$

The iteration continues until the Mean Square Error (MSE) of the distances between every pair p[ẑ_i] and ξ(z_i) is less than an experimentally set threshold, or until a maximum number of iterations is reached (Fig. 16). This method usually converges to a good reconstruction of the input within few iterations (usually < 5). Interestingly, even though the dynamic information of the input is discarded, the reconstructed velocity profile is often similar to the original (in number of peaks and shape), which can be explained by the extensively studied relationships between geometry and dynamics of movement trajectories (Viviani & Terzuolo, 1982; Lacquaniti et al., 1983; Viviani & Schneider, 1991; Flash & Handzel, 2007).

Figure 16: Final trajectory reconstruction step. Left, iterative adjustment of virtual target positions. Right, the final trajectory generated with the reconstructed dynamic parameters.

In order to increase the expressive generative capabilities of our networks, we train them to model parametric probability distributions. Specifically, we use Recurrent Mixture Density Networks that output the parameters of a bivariate Gaussian Mixture Model.
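A compact sketch of the adjustment loop above (ours; `synthesize` and `find_key_points` are stand-ins for the ΣΛ integration and key-point detection described in this appendix):

```python
import numpy as np

def adjust_targets(v, p_keypoints, params, synthesize, find_key_points,
                   tol=1e-3, max_iter=20):
    """Nudge virtual targets until generated key-points match the input ones."""
    for _ in range(max_iter):
        traj = synthesize(v, params)                  # integrate the trajectory
        xi = find_key_points(traj)                    # m generated key-points
        err = p_keypoints - xi
        if np.mean(np.sum(err ** 2, axis=1)) < tol:   # MSE stopping criterion
            break
        v = v + err                                   # v_i <- v_i + (p[z_i] - xi(z_i))
    return v
```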
{"section_index": "5", "section_name": "GMM via ŷ (Graves, 2013)", "section_text": "If a target variable z_t can be expressed as a bivariate GMM, then for K Gaussians we can use a network architecture with output dimensions of 6K. This output vector ŷ_t would then consist of

$$\begin{aligned}
\mu_t^k &= \hat{y}_t^{\mu,k} &&: \text{means for the } k\text{th Gaussian, } \mu_t^k \in \mathbb{R}^2 \\
\sigma_t^k &= \exp(\hat{y}_t^{\sigma,k}) &&: \text{standard deviations for the } k\text{th Gaussian, } \sigma_t^k \in \mathbb{R}^2 \\
\rho_t^k &= \tanh(\hat{y}_t^{\rho,k}) &&: \text{correlations for the } k\text{th Gaussian, } \rho_t^k \in (-1, 1) \\
\pi_t^k &= \operatorname{softmax}(\hat{y}_t^{\pi})_k &&: \text{mixture weight for the } k\text{th Gaussian, } \textstyle\sum_k \pi_t^k = 1
\end{aligned} \tag{11}$$

We can then formulate the probability distribution function P_t at timestep t as

$$P_t = \sum_{k=1}^{K} \pi_t^k\, \mathcal{N}\!\left( z_t \mid \mu_t^k, \sigma_t^k, \rho_t^k \right), \quad \text{where} \quad
\mathcal{N}(x \mid \mu, \sigma, \rho) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\!\left( \frac{-Z}{2(1-\rho^2)} \right) \tag{12}$$

$$\text{and} \quad Z = \frac{(x_1-\mu_1)^2}{\sigma_1^2} + \frac{(x_2-\mu_2)^2}{\sigma_2^2} - \frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2}$$

If we let θ denote the parameters of a network, and given a training set S of input-target pairs (x ∈ X, y ∈ Y), our training objective is to find the set of parameters θ^{ML} which has the maximum likelihood (ML). This is the θ that maximises the probability of the training set S and is formulated as (Graves, 2008)

$$\theta^{ML} = \underset{\theta}{\arg\max}\; \Pr(S \mid \theta) = \underset{\theta}{\arg\max} \prod_{(x,y)\in S} \Pr(y \mid x, \theta)$$

Since the logarithm is a monotonic function, a common method for maximizing this likelihood is minimizing its negative logarithm, also known as the Negative Log Likelihood (NLL), Hamiltonian or surprisal (Lin & Tegmark, 2016). We can then define our cost function J as

$$J = -\ln \prod_{(x,y)\in S} \Pr(y \mid x, \theta) = -\sum_{(x,y)\in S} \ln \Pr(y \mid x, \theta)$$

Input. At each timestep i, the input to the V2V model is x_i ∈ ℝ³, where the first two elements are given by Δv_i (the relative position displacement for the ith stroke, i.e. between the ith virtual target and the next), and the last element is u_i ∈ {0, 1} (the pen-up state during the same stroke). Given input x_i and its current internal state (c_i, h_i), the network learns to predict x_{i+1}, by learning the parameters for the Probability Density Function (PDF) Pr(x_{i+1} | x_i, c_i, h_i). With a slight abuse of notation, this can be expressed more intuitively as Pr(x_{i+1} | x_i, x_{i−1}, ..., x_{i−n}) where n is the maximum sequence length.

Output. We express the predicted probability of Δv_i as a bivariate GMM as described in Section C.1, and u_i as a Bernoulli distribution. Thus for K Gaussians the network has output dimensions of (6K + 1) which, in addition to eqn. (11), contains ê_i, which we use to calculate the pen state probability via (Graves, 2013)

$$e_i = \frac{1}{1 + \exp(\hat{e}_i)}, \qquad e_i \in (0, 1)$$

Architecture. We use Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) networks with input, output and forget gates (Gers et al., 2000), and we use Dropout regularization as described by Pham et al. (2014). We employ both a grid search and a random search (Bergstra & Bengio, 2012) on various hyperparameters in the ranges: sequence length {64, 128}, number of hidden recurrent layers {1, 2, 3}, dimensions per hidden layer {64, 128, 256, 400, 512, 900, 1024}, number of Gaussians {5, 10, 20}, dropout keep probability {50%, 70%, 80%, 90%, 95%} and peepholes {with, without}.

For comparison we also tried a deterministic architecture whereby, instead of outputting a probability distribution, the network outputs a direct prediction for x_{i+1}. As expected, the network was unable to learn this function, and all sequences of virtual targets synthesized with this method simply travel in a repeating zig-zag line.
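A sketch of the resulting loss for a single timestep (NumPy, written by us for illustration): it converts raw network outputs into GMM parameters as above and evaluates the negative log likelihood of a target point. The layout of `raw` is our own assumption.

```python
import numpy as np

def gmm_nll(raw, target):
    """Negative log likelihood of `target` (2,) under a K-component bivariate GMM.

    `raw` has shape (6K,): [pi_hat (K), mu (2K), sigma_hat (2K), rho_hat (K)].
    """
    K = raw.shape[0] // 6
    pi_hat, mu, sig_hat, rho_hat = np.split(raw, [K, 3 * K, 5 * K])
    pi = np.exp(pi_hat - pi_hat.max()); pi /= pi.sum()        # softmax
    mu = mu.reshape(K, 2)
    sigma = np.exp(sig_hat).reshape(K, 2)                     # eq. (11)
    rho = np.tanh(rho_hat)
    d = (target - mu) / sigma                                 # (K, 2)
    z = d[:, 0] ** 2 + d[:, 1] ** 2 - 2 * rho * d[:, 0] * d[:, 1]
    norm = 2 * np.pi * sigma[:, 0] * sigma[:, 1] * np.sqrt(1 - rho ** 2)
    density = np.exp(-z / (2 * (1 - rho ** 2))) / norm        # eq. (12)
    return -np.log(np.sum(pi * density) + 1e-12)
```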
Training. We use a form of Truncated Backpropagation Through Time (BPTT) (Sutskever, 2013) whereby we segment long sequences into overlapping segments of maximum length n. In this case, long-term dependencies greater than length n are lost; however, with enough overlap the network can effectively learn a sliding window of length n timesteps. We shuffle our training data and reset the internal state after each sequence. We empirically found an overlap factor of 50% to perform well, though further studies are needed to confirm the sensitivity of this figure.

We use dynamic unrolling of the RNN, whereby the number of timesteps to unroll is not set at compile time, in the architecture of the network, but unrolled dynamically while training, allowing variable length sequences. We also experimented with repeating sequences which were shorter than the maximum sequence length n, to complete them to length n. We found that for our case they produced desirable results, with some side-effects which we discuss in later sections.

We split our dataset into training: 70%, validation: 20% and test: 10%, and use the Adam optimizer (Kingma & Ba, 2014) with the recommended hyperparameters. To prevent exploding gradients we clip gradients by their global L2 norm as described in Pascanu et al. (2013). We tried thresholds of both 5 and 10, and found 5 to provide more stability.

We formulate the loss function J to minimise the Negative Log Likelihood as described in Section C.2, using the probability density functions described in eqn. (12) and eqn. (19).

Input. The input to this network at each timestep i is identical to that of the V2V-model, x_i ∈ ℝ³, where the first two elements are Δv_i (normalised relative position displacement for the ith stroke) and u_i ∈ {0, 1} (the pen state during the same stroke). Given input x_i and its current internal state (c_i, h_i), the network learns to predict the dynamic parameters (t_{0i}, θ_i) for the current stroke i, by learning the parameters for Pr(t_{0i}, θ_i | x_i, c_i, h_i). Again with an abuse of notation, this can be expressed more intuitively as Pr(t_{0i}, θ_i | x_i, x_{i−1}, ..., x_{i−n}) where n is the maximum sequence length.

Training. We use the same procedure for training as the V2V-model.

Architecture. We explored very similar architectures and hyperparameters as the V2V-model, but found that we achieved much better results with a shorter maximum sequence length. We trained a number of models with a variety of sequence lengths {3, ..., 8, 13, 16, 21, 32}.

Input. The input to this network, x_i ∈ ℝ⁵, at each timestep i is slightly different to that of the V2V and V2D models. Similar to the V2V and V2D models, the first two elements are Δv_i (normalised relative position displacement for the ith stroke), and the third element is u_i ∈ {0, 1} (the pen state during the same stroke). However, in this case the final two elements are the dynamic parameters for the previous stroke (t_{0(i−1)}, θ_{i−1}), normalized to zero mean and unit standard deviation.

Output. The output of this network is identical to that of the V2D model.

Training. We use the same procedure for training as the V2V-model.

We evaluated and batch-rendered the outputs of many different architectures and models at different training epochs, and settled on models which were amongst those with the lowest validation error but also produced visibly more desirable results. Once we picked the models, the results displayed are not cherry-picked.
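A small sketch of the overlapping segmentation used for truncated BPTT (our own code; 50% overlap as reported above):

```python
def segment(seq, n, overlap=0.5):
    """Split one long sequence into overlapping windows of length <= n."""
    step = max(1, int(n * (1.0 - overlap)))   # 50% overlap -> stride n/2
    return [seq[i:i + n]
            for i in range(0, max(1, len(seq) - n + step), step)]

# e.g. a 300-step sequence with n = 128 yields windows starting at 0, 64, 128, ...
```

Shorter trailing windows are produced at the end of each sequence; as noted above, these can be completed to length n by repetition.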
The preprocessed IAM dataset contains 12087 samples (8460 in the training set) with maximum sequence length 305, minimum 6, median 103 and mean 103.9. For the V2V/V2D/A2D models trained on the IAM database we settle on an architecture of 3 recurrent layers, each with size 512, a maximum sequence length of 128, 20 Gaussians, dropout keep probability of 80% and no peepholes.

For the augmented one-shot learning models we used similar architectures, but found that 2 recurrent layers, each with size 256, were able to generalise better and produce more interesting results that captured the prime inputs without overfitting.

We also tried a number of different methods for normalising and representing Δv_i on the input to the models. We first tried normalising the components individually to have zero mean and unit standard deviation. We also tried normalising uniformly on the L2 norm, again to have zero mean and unit standard deviation. Finally, we tried normalised polar coordinates, both absolute and relative. For V2V we used L2 normalisation on the input, and for A2D/V2D we used

Figure 17: Schematic overview of the system. (Flowchart: input online handwriting data undergoes ΣΛ parameter extraction and optional artificial variability with parameter perturbations; the preprocessed virtual targets (action plan) and dynamic parameters (t_{0i}, θ_i) are used to train the V2V and V2D/A2D models, which respectively synthesise virtual targets from a seed, and predict model parameters for synthesised virtual targets, for a user-drawn action plan, or for trajectories generated with random model parameters.)"}]
[{"section_index": "0", "section_name": "DISCRETE VARIATIONAL AUTOENCODERS", "section_text": "Figures 11, 12, and 13 repeat the analysis of Figure 5 for statically binarized MNIST, Omniglot. and Caltech-101 Silhouettes. Specifically, they show the generative output of a discrete VAE as. the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of. continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs. sampling passes through the large, low-probability space between the modes only infrequently. As a. result, consistency of the object class over many successive rows in Figures 11, 12, and 13 indicates that the RBM prior has well-separated modes..\nJason Tyler Rolfe\nOn statically binarized MNIST, the RBM still learns distinct, separated modes corresponding to most of the different digit types. However, these modes are not as well separated as in dynamically binarized MNIST, as is evident from the more rapid switching between digit types in Figure 11. There are not obvious modes for Omniglot in Figure 12; it is plausible that an RBM with 128 units could not represent enough well-separated modes to capture the large number of distinct character types in the Omniglot dataset. On Caltech-101 Silhouettes, there may be a mode corresponding to large, roughly convex blobs.\nProbabilistic models with discrete latent variables naturally capture datasets com- posed of discrete classes. However, they are difficult to train efficiently, since. backpropagation through discrete variables is generally not possible. We present. a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through. the discrete latent variables. The associated class of probabilistic models com- prises an undirected discrete component and a directed hierarchical continuous. component. The discrete component captures the distribution over the discon-. nected smooth manifolds induced by the continuous component. As a result, this. class of models efficiently learns both the class of objects in an image, and their. specific realization in pixels, from unsupervised data; and outperforms state-of-. the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101. Silhouettes datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such a denoising and inpainting, and regularizing supervised tasks such as classification (Hinton et al 2006: Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest ar projections of underlying distributions over real-world objects into an observation space; the pixel of an image, for example. When the real-world objects are of discrete types subject to continuou transformations, these datasets comprise multiple disconnected smooth manifolds. For instance natural images change smoothly with respect to the position and pose of objects, as well as scen lighting. At the same time, it is extremely difficult to directly transform the image of a person to on of a car while remaining on the manifold of natural images\nIt would be natural to represent the space within each disconnected component with continuous vari ables, and the selection amongst these components with discrete variables. 
In contrast, most state-of-the-art probabilistic models use exclusively discrete variables - as do DBMs (Salakhutdinov & Hinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lauritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014) - or exclusively continuous variables - as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow et al., 2014).¹ Moreover, it would be desirable to apply the efficient variational autoencoder framework to models with discrete values, but this has proven difficult, since backpropagation through discrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015).

We introduce a novel class of probabilistic models, comprising an undirected graphical model defined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific continuously deformable realization. Moreover, we show how these models can be trained efficiently using the variational autoencoder framework, including backpropagation through the binary latent variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong correlations. Since these models efficiently marry the variational autoencoder framework with discrete latent variables, we call them discrete variational autoencoders (discrete VAEs).

¹Spike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables.

Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log-likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling from the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby, 1993).

In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound on the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO, L(x, θ, φ); Hinton & Zemel, 1994):

$$\mathcal{L}(x, \theta, \phi) = \log p(x \mid \theta) - \mathrm{KL}\left[ q(z \mid x, \phi) \,\|\, p(z \mid x, \theta) \right] \tag{1}$$

where q(z|x, φ) is a computationally tractable approximation to the posterior distribution p(z|x, θ). We denote the observed random variables by x, the latent random variables by z, the parameters of the generative model by θ, and the parameters of the approximating posterior by φ. The variational autoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups the evidence lower bound of Equation 1 as:

$$\mathcal{L}(x, \theta, \phi) = \underbrace{-\mathrm{KL}\left[ q(z \mid x, \phi) \,\|\, p(z \mid \theta) \right]}_{\text{KL term}} + \underbrace{\mathbb{E}_q\left[ \log p(x \mid z, \theta) \right]}_{\text{autoencoding term}} \tag{2}$$

In many cases of practical interest, such as Gaussian q(z|x) and p(z), the KL term of Equation 2 can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior q(z|x) can be drawn using a differentiable, deterministic function f(x, φ, ρ) of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables ρ ~ D.
For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, N(m(x, φ), v(x, φ)), using f(x, φ, ρ) = m(x, φ) + √(v(x, φ)) · ρ, where ρ ~ N(0, 1). The gradient of the autoencoding term can then be estimated as

$$\frac{\partial}{\partial \phi} \mathbb{E}_{q(z \mid x, \phi)}\left[ \log p(x \mid z, \theta) \right] \approx \frac{1}{N} \sum_{\rho \sim \mathcal{D}} \frac{\partial}{\partial \phi} \log p\left( x \mid f(x, \rho, \phi), \theta \right) \tag{3}$$

The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix A, where we find that an analog of Equation 3 holds. Specifically, D_i is the uniform distribution between 0 and 1, and

$$f(x, \rho, \phi) = \mathbf{F}^{-1}(\rho) \tag{4}$$

where F is the conditional-marginal cumulative distribution function (CDF) defined by

$$\mathrm{F}_i(x) = \int_{-\infty}^{x} q\left( x_i' \mid x_1, \ldots, x_{i-1} \right) dx_i' \tag{5}$$

However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable.

A formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs) (Smolensky, 1986):

$$p(z) = \frac{1}{Z_p} e^{-E_p(z)}, \qquad E_p(z) = -z^\top W z - b^\top z$$

where z ∈ {0, 1}ⁿ, Z_p is the partition function of p(z), and the lateral connection matrix W is triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval [0, 1]. The domain of the inverse CDF is thus also a proper subset of [0, 1], and its derivative is not defined, as required in Equations 3 and 4.²

{"section_index": "3", "section_name": "1.2 RELATED WORK", "section_text": "Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016), Hamiltonian variational inference (Salimans et al., 2015), normalizing flows (Rezende & Mohamed, 2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the posterior distribution. Ladder variational autoencoders (Sonderby et al., 2016) increase the power of the architecture of both approximating posterior and prior. Neural adaptive importance sampling (D. et al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approximations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distributions (Johnson et al., 2016).

Prior efforts by Makhzani et al. (2015) to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior factorial and thus unimodal (Salimans, 2016). Graves (2016) computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units.
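Stepping back to Equation 3: the Gaussian reparameterization it relies on is only a few lines of code. A minimal sketch (ours, in NumPy; `m` and `v` stand in for the encoder's mean and variance outputs):

```python
import numpy as np

def sample_posterior(m, v, n_samples, rng=None):
    """Draw reparameterized samples z = m + sqrt(v) * rho, with rho ~ N(0, 1).

    Because z is a deterministic, differentiable function of (m, v) given rho,
    gradients with respect to the encoder parameters can flow through the samples.
    """
    rng = rng or np.random.default_rng()
    rho = rng.standard_normal((n_samples,) + np.shape(m))
    return m + np.sqrt(v) * rho
```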
and a wider set of mappings to the continuous units\nThe generative model underlying the discrete variational autoencoder resembles a deep belief net- work (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltz- mann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer j receives connections from all previous layers i < j, with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound.\n2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDINC CONTINUOUS LATENT VARIABLES\nWhen working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-margina1 CDF (defined by Equation 5 and Appendix A) by augmenting the latent representation with a set of continous random variables. The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations 3 and 4. We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter\n3Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to. the rest of the model. In contrast to a traditional RBM, there is no distinction between the \"visible' units and the \"hidden' units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome. fully hidden bipartite Boltzmann machine.'.\nIn the following sections, we present the discrete variational autoencoder (discrete VAE), a hierar chical probabilistic model consising of an RBM.3 followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in Equation 3, including backpropagation through its discrete latent variables.\nIt is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Un- fortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in Appendix B.\nx X q(z =1|x,) Z1 Z2 Z3 Z1 Z2 Z3 F q(S|x,s)(P) p(x|S,) (a) Approximating posterior q(S, z|x) (b) Prior p(x, (, z) (c) Autoencoding term\nFigure 1: Graphical models of the smoothed approximating posterior (a) and prior (b), and the. network realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent vari. ables ( are smoothed analogs of discrete latent variables z, and insulate z from the observed vari. 
ables x in the prior (b). This facilitates the marginalization of the discrete z in the autoencoding tern of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable. given independent stochastic input p ~ U [0, 1].\nthe fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted a adding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a smal. minibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and the. prior. The conceptual motivation for this approach is discussed in Appendix C..\nSpecifically, as shown in Figure 1a, we augment the latent representation in the approximating pos terior with continuous random variables c,4 conditioned on the discrete latent variables z of the RBM:\nq(S,z[x,$) = r(S[z):q(z[x,$) where r(S|z) =II r(Si|zi). 2\nThe support of r((|z) for all values of z must be connected, so the marginal distribution. q(C[x, ) = z r([z) : q(z[x, $) has a constant, connected support so long as 0 < q(z[x, ) < 1. We further require that r(C[z) is continuous and differentiable except at the endpoints of its support. so the inverse conditional-marginal CDF of q(C[x, ) is differentiable in Equations 3 and 4, as we discuss in Appendix A.\nAs shown in Figure 1b. we corres oondingly augment the prior with (:\np(C,z[0) =r(C[z):p(z[0)\np(x[S,z,0) =p(x[(,0)\nThe smoothing distribution r(([z) transforms the model into a continuous function of the distri bution over z, and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient.\nGiven this expansion, we can simplify Equations 3 and 4 by dropping the dependence on z an applying Equation 16 of Appendix A, which generalizes Equation 3:\n1 a q(S,z|x,p) [logp(x|S, z,0)] ~ ogp N do p~U(0,1)n\n4We always use a variant of z for latent variables. This is zeta, or Greek z. The discrete latent variables z can conveniently be thought of as English z.\nAs we shall demonstrate in Section 2.1, F-1 q(z = 1|x, ) is a deterministic probability value calculated by a parameterized function, such a a neural network. The autoencoder implicit in Equation 8 is shown in Figure 1c. Initially, input s is passed into a deterministic feedforward network q(z = 1|x, ), for which the final nonlinearity i the logistic function. Its output q, along with an independent random variable p ~ U[0, 1], is passe into the deterministic function F-1 input x, is finally passed to log p (x|$, 0). The expectation of this log probability with respect to p is the autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the inpu and the independent p, this autoencoder is deterministic and differentiable, so backpropagation cai be used to produce a low-variance, computationally-efficient approximation to the gradient.\nAs a concrete example consistent with sparse coding, consider the spike-and-exponential transfor mation from binary z to continuous C:\nif G = 0 Fr(Si|zi=0)(C')=1 otherwise BeBs if OSi1 Fr(Si|zi=1)(S otherwise\nFq(S|x,s)(C')=(1-q(z=1|x,$))Fr(ci|z=0)(C)+ q(z=1|x,$)Fr(ci|zr=1)S = q(z = 1x, + 1 .\nlog if p>1-q otherwise\n5In the limit -> oo, S = zi almost surely, and the continuous variables ( can effectively be removed from. the model. 
This trick can be used after training with finite to produce a model without smoothing variables C.\nTo evaluate the autoencoder of Figure 1c, and through it the gradient approximation of Equation 8 we must invert the conditional-marginal CDF Fq(c|x,$):\nwhere we use the substitution q(z = 1|x, $) -> q to simplify notation. For all values of the inde- pendent random variable p ~ U[0, 1], the function F1! Fq(|x,s)(p) rectifies the input q(z = 1|x, ) if q 1 - p in a manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is also quasi-sigmoidal, in that F-1 is increasing but concave-down if q > 1 - p. The effect of p on F-1 is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the noise injected by batch normalization (Ioffe & Szegedy, 2015) using small minibatches, shown in Figure 2c.\nOther expansions to the continuous space are possible. In Appendix D.1, we consider the case where. both r(Ci|z; = 0) and r(S|z; = 1) are linear functions of (; in Appendix D.2, we develop a spike- and-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where the continuous ( is directly dependent on the input x in addition to the discrete z..\nFigure2: Inverse CDF of the spike-and-exponential smoothing transformation for. p E {0.2, 0.5, 0.8}; =1 (dotted), =3 (solid), and = 5 (dashed) (a). Rectifiedlinear unit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization; with magnitude 0.3 (dashed), -0.3 (dotted), or 0 (solid blue); before a rectified linear unit (c). In all. cases, the abcissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity F-1 Fq(c|x,g)(p) from Figure 1c, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c)\nWhen a probabilistic model is defined in terms of a prior distribution p(z) and a conditional dis. tribution p(x[z), the observation of x often induces strong correlations in the posterior p(z[x) due to phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the prior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior. distributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014))..\nq(Z1,S1,..,Zk,Sk|x,$) = r(Sj|zj) q(zj|Si<j,x,$) where 1<j<k e9j(Si<j,x,$)T:zj q(Zj|Si<j,x,$) (1+ e9zi(Si<j,x,$))\nzj E {0, 1}^, and g;(Si<j, x, $) is a parameterized function of the inputs and preceding Si, such as. a neural network. The corresponding graphical model is depicted in Figure 3a, and the integratior of such hierarchical approximating posteriors into the reparameterization trick is discussed in Ap. pendix A. If each group z, contains a single variable, this dependence structure is analogous to tha. of a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution However, the dependence of z; on the preceding discrete variables zi<j is always mediated by the. continuous variables (i<j:\nThis hierarchical approximating posterior does not affect the form of the autoencoding term in Equa tion 8. except to increase the depth of the autoencoder. as shown in Figure 3b. 
The deterministic probability value q(z; = 1|Si<, x, ) of Equation 10 is parameterized, generally by a neural net- work, in a manner analogous to Section 2. However, the final logistic function is made explicit in Equation 10 to simplify Equation 12. For each successive layer j of the autoencoder, input x and all previous (i< are passed into the network computing q(z = 1|Si<j, x, $). Its output qj, along with an\n6The continuous latent variables ( are divided into complementary disjoint groups (1, ...\n(d'x)f (0)($'x|3)bj 1 0.8 x 0.3 p > 0.5 x:10.3 0.5 no noise .5 0.2 p < 0.5 O 0 0.2 0.4 0.6 0.8 1 -1 -0.5 0 0.5 1 -1 0.5 0 0.5 1 q(z =1|x,) x x (a) Spike-and-exp, E {1, 3, 5} (b) ReLU with dropout. (c) ReLU with batch norm\nTo accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior q(z[x) over the discrete latent variables. Specifically, we divide the latent variables z of the RBM into disjoint groups, z1,..., 2k,' and. define the approximating posterior via a directed acyclic graphical model over these groups:.\nX x q q q(z3 = 1|Si<3,x,) q1 q2 q3 Z1 Z2 Z3 H q3(S3|Si<3,x,q p(x|S,$) (a) Hierarch approx post q((, z[x) (b) Hierarchical ELBO autoencoding term\nFigure 3: Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables z; only depend on the previous zi<; through their smoothed analogs Ci<i. The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input p.\nindependent random variable p ~ U[0, 1], is passed to the deterministic function F-1 q(Sj|Si<j,x,$) to produce a sample of Ss. Once all S; have been recursively computed, the full ( along with th original input x is finally passed to log p (x|, 0). The expectation of this log probability with respe. to p is again the autoencoding term of the VAE formalism, as in Equation 2..\nIn Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can be estimated stochastically using:\nd dEp(z,0) 0Ep(z,0) KL[q][p] = Eq(z1|x,g de |Si<k,x,$) de (ze de\na KL[q]|p]= E] A dq\nIn particular, Equation 12 is substantially lower variance than the naive approach to calculate KL [q|[p], based upon REINFORCE.\n4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF CONTINUOUS LATENT VARIABLES\nWe can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus or continuous variables, which have proven to be powerful in generative adversarial networks (Goodfel low et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al. 2014). When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. 
This complement the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects.\nSpecifically, we augment the latent representation with continuous random variables 3,' and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphi cal models. We use the same autoregressive variable order for the approximating posterior as for the\nWe always use a variant of z for latent variables. This is Fraktur z, or German z\nFigure 4: Graphical models of the approximating posterior (a) and prior (b) with a hierarchy o. continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and 1 respectively. The continuous latent variables 3 build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables z, which can represent the discrete types of objects in the image.\norior, as in DRAw (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015) he deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Spnderby et al., 2016) We discuss the motivation for this ordering in Appendix G.\nThe directed graphical model of the ap oroximating posterior and prior are defined by\nI1 q (3m|3l<m,x, $ . 0<m<n I1 30,...,3n(0) = p(3m|3l<m,0) 0<m<n\nThe full set of latent variables associated with the RBM is now denoted by 3o = { z1, S1, . . . , Zk, Sk} However, the conditional distributions in Equation 13 only depend on the continuous (. Each 3m>1 denotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model.\nThe ELBO decomposes as:\nL(x,0,q) =Eq(3|x,s) [l0gp(x|3,0)]-Eq(31<m|x,p) [KL[q(3m|31<m,x,$)|P(3m|3l<m,0)]] m\nIf both q(3m|31<m,x, ) and p(3m|31<m,0) are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. Gradients can be passed through the q(3t<m[x, $) using the traditional reparameterization trick, described in Section 1.1."}, {"section_index": "4", "section_name": "5 RESULTS", "section_text": "Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approx imating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). We parameterize all distributions with neural networks, except the smoothing distribution r((|z) dis- cussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014: Rezende et al., 2014), we define all approximating posteriors q to be explicit functions of x, with parameters shared between all inputs x. For distributions over discrete variables, the neural net- works output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation 10; for the continuous 3, the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer. Each layer of the neu- ral networks parameterizing the distributions over z. 3. and x consists of a linear transformation\n31 32 33 31 32 33 (a) Approx post w/ cont latent vars q(3, (, z[x) (b) Prior w/ cont latent vars p(x, 3, (, z)\nThe hierarchical structure of Section 4 is very powerful, and overfits without strong regularizatior of the prior, as shown in Appendix H. 
In contrast, powerful approximating posteriors do not induce significant overfitting. To address this problem, we use conditional distributions over the inpu p(x[(, 0) without any deterministic hidden layers, except on Omniglot. Moreover, all other neura networks in the prior have only one hidden layer, the size of which is carefully controlled. Or statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers oi the hierarchy over 3. We present the details of the architecture in Appendix H.\nWe train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Om. niglot* (Lake et al., 2013), and Caltech-101 Silhouettes datasets (Marlin et al., 2010). For MNIST. we use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization. Estimates of the log-likelihood' of these models, computed using the method of (Burda et al., 2016). with 104 importance-weighted samples, are listed in Table 1. The reported log-likelihoods for dis-. crete VAEs are the average of 16 runs; the standard deviation of these log-likelihoods are 0.08, 0.04,. 0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhou-. ettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and. 0.66.\nMNIST (dynamic binarization) MNIST (static binarization) LL ELBO LL DBN -84.55 HVI -88.30 -85.51 IWAE -82.90 DRAW -87.40 Ladder VAE -81.74 NAIS NADE -83.67 Discrete VAE -80.15 Normalizing flows. -85.10 Variational Gaussian process -81.32 Discrete VAE -84.58 -81.01 Omniglot Caltech-101 Silhouettes LL LL IWAE -103.38 IWAE -117.2 Ladder VAE -102.11 RWS SBN -113.3 RBM -100.46 RBM -107.8 DBN -100.45 NAIS NADE -100.0 Discrete VAE -97.43 Discrete VAE -97.6\nTable 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot,. and Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated with 104 importance-weighted samples (Burda et al., 2016). For comparison, we also report perfor- mance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I.\nWe further analyze the performance of discrete VAEs on dynamically binarized MNIST: the larges. of the datasets, requiring the least regularization. Figure 5 shows the generative output of a discrete. VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held con. stant across each sub-row of five samples, and variation amongst these samples is due to the layer.. of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibb. sampling passes through the large, low-probability space between the modes only infrequently. A. a result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM. prior has well-separated modes. The RBM learns distinct, separated modes corresponding to the. different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens o.\n8we use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT.\n9The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partitior. function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1\nbatch normalization (Ioffe & Szegedy, 2015) (but see Appendix H.2), and a rectified-linear point-. 
wise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM. prior p(z[0) in Equation 11 using block Gibbs sampling on persistent Markov chains, analogous to persistent contrastive divergence (Tieleman, 2008). We minimize the ELBO using ADAM (Kingma. & Ba, 2015) with a decaying step size.\nLO X G 3\nFigure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a) the number of units in the RBM (b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better per formance, but the network is robust to the size of the RBM (b).\nThe large mixing time of block Gibbs sampling on the RBM suggests that training may be con strained by sample quality. Figure 6a shows that performance1o improves as we increase the num. ber of iterations of block Gibbs sampling performed per minibatch on the RBM prior: p(z|0) in. Equation 11. This suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as parallel tempering (Swendsen & Wang, 1986)..\n10 All models in Figure 6 use only 10 layers of continuous latent variables, for computational efficiency\nFigure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM out independent continuous latent variables, and shows the variation induced by the continuous ayers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single (o) occasionally two) digit IDs, despite being trained in a wholly unsupervised manner.\n80.3 80.4 80 80.5 1 10 100 8 16 32 64 128 1 2 4 8 (a) Block Gibbs iterations (b) Num RBM units (c) RBM approx post layers\nCommensurate with the small number of intrinsic classes, a moderately sized RBM yields the bes performance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of units in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a dataset like Imagenet, which has many classes and complicated relationships between the elements of various classes.\nThe benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is apparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the ap- proximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. As a result, expanding the number of RBM approx imating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting.\nWe avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusivel in the continous space, marginalizing out the original discrete latent representation. 
At the sam time, we evaluate the KL divergence between the approximating posterior and the true prior in the original discrete space; due to the symmetry of the projection into the continuous space, it does no contribute to the KL term. To increase representational power, we make the approximating posterio over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variable below them. The resulting discrete variational autoencoder achieves state-of-the-art performance or the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pp. 3084-3092, 2013.\nYoshua Bengio, Nicholas Leonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiy preprint arXiv:1308.3432. 2013.\nSamuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the 2Oth SIGNLL Conference on Computational Natural Language Learning, pp. 10-21, 2016.\nDatasets consisting of a discrete set of classes are naturally modeled using discrete latent variables However. it is difficult to train probabilistic models over discrete latent variables using efficient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable (Bengio et al., 2013).\nZhengbing Bian, Fabian Chudak, Arash Vahdat helped run experiments. Jack Raymond provided. the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code.. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and. one of our anonymous reviewers for identifying the problem addressed in Appendix D.3..\norg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional Helmholtz machines. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2511- 2519, 2016.\nYuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Proceed- ings of the International Conference on Learning Representations, arXiv:1509.00519. 2016\nSteve Cheng. Differentiation under the integral sign with weak derivatives. Technical report, Work ing paper, 2006.\nKyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted Boltz mann machines. Neural Computation, 25(3):805-831, 2013.\nAaron C. Courville, James S. Bergstra, and Yoshua Bengio. Unsupervised models of images b spike-and-slab rbms. In Proceedings of the 28th International Conference on Machine Learning pp. 1145-1152, 2011.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems. pp. 2672-2680. 2014.\nAlex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint arXiv:1607.05690, 2016\nGeoffrey E. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In J. D. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information Processing Systems 6, pp. 3-10. 
Morgan Kaufmann Publishers, Inc., 1994..\nMatthew Johnson, David K Duvenaud, Alexander B Wiltschko, Sandeep R Datta, and Ryan P. Adams. Composing graphical models with neural networks for structured representations and. fast inference. In Advances in Neural Information Processing Svstems. pp. 2946-2954. 2016\nMichael I. Jordan. Zoubin Ghahramani. Tommi S. Jaakkola, and Lawrence K. Saul. An introductio. to variational methods for graphical models. Machine learning, 37(2):183-233. 1999\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pp. 448-456, 2015.\nBenjamin M Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles fo restricted Boltzmann machine learning. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 509-516, 2010.\nAndriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. Pro ceedings of the 31st International Conference on Machine Learning, pp. 1791-1799, 2014.\nAndriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. In Proceed ings of the 33rd International Conference on Machine Learning.. pp. 2188-2196. 2016\nIain Murray and Ruslan R. Salakhutdinov. Evaluating probabilities under high-dimensional latent. variable models. In Advances in Neural Information Processing Systems, pp. 1137-1144, 2009\nRadford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113 1992.\nBruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.\nJohn Paisley, David M. Blei, and Michael I. Jordan. Variational Baysian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning, 2012\nJudea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Mor gan Kaufmann, 1988.\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 86(11):2278-2324. 1998.\nAntti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko... Semi supervised learning with ladder networks. In Advances in Neural Information Processing Systems. pp. 3546-3554, 2015.\nRuslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of th 12th International Conference on Artificial Intelligence and Statistics. r pp. 448-455. 2009\nMichael R. Shirts and John D. Chodera. Statistically optimal analysis of samples from multiple equilibrium states. The Journal of Chemical Physics, 129(12), 2008.\nPaul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In. D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing, volume 1, chapter 6 pp. 194-281. MIT Press, Cambridge, 1986.\nDavid J. Spiegelhalter and Steffen L. Lauritzen. Sequential updating of conditional probabilities or directed graphical structures. Networks, 20(5):579-605, 1990\nTijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064- 1071. ACM, 2008.\nRonald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. 
Machine learning, 8(3-4):229-256, 1992\nA MULTIVARIATE VAES BASED ON THE CUMULATIVE DISTRIBUTION FUNCTION\nThe reparameterization trick is always possible if the cumulative distribution function (CDF) of q(z|x, $) is invertible, and the inverse CDF is differentiable, as noted in Kingma & Welling (2014). However, for multivariate distributions, the CDF is defined by:.\nX1 F(x) X\nRuslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pp. 872-879. ACM, 2008.\nThe multivariate CDF maps Rn [0, 1], and is generally not invertible.11\nIn place of the multivariate CDF, consider the set of conditional-marginal CDFs defined by:1\nThat is, F,(x) is the CDF of x, conditioned on all x; such that i < h, and marginalized over. all xk such the j < k. The range of each F; is 0,1, so F maps the domain of the original distribution to p E [0, 1]n. To invert F, we need only invert each conditional-marginal CDF in turn,. conditioning x; = F-1(p) on x1 = F-1(p),..., j-1 = F1(p). These inverses exist so long as the conditional-marginal probabilities are everywhere nonzero. It is not problematic to effectively. define F-1(p) based upon x<j, rather than Pi<j, since by induction we can uniquely determine. Xi<j given Pi<j\nUsing integration-by-substition, we can compute the gradient of the ELBO by taking the expectation. of a uniform random variable p on [0, 1]n, and using F-1 of z on which p(x[z, 0) is conditioned. To perform integration-by-substitution, we will require the determinant of the Jacobian of F-1\nThe derivative of a CDF is the probability density function at the selected point, and F; is a simpl CDF when we hold fixed the variables x<; on which it is conditioned, so using the inverse functior. theorem we find:\nmarginal CDFs F; are independent of the value of the later xk, j < k, over which they are marginal- ized. Moreover, the inverse conditional-marginal CDFs have the same dependence structure as F so the Jacobian of F-1 is also triangular. The determinant of a triangular matrix is the product of the diagonal elements\nx Fi X X1,...,X X\nOF. 0 1 dpj F}(F-1(p)) 1 xj=F-(D)|xi<j\nUsing these facts to perform a multivariate integration-by-substitution, we obtain:\nEq(z|x,o) logp(x[z,0)] q(z|x,$) : logp(x[z, 0)\nThe gradient with respect to $ is then easy to approximate stochastically:\n1 (z|x,s) [logp(x|z,0)] N p~U(0,1)n"}, {"section_index": "6", "section_name": "B THE DIFFICULTY OF ESTIMATING GRADIENTS OF THE ELBO WITI REINFORCE", "section_text": "It is easy to construct a stochastic approximation to the gradient of the ELBO that only requires computationally tractable samples, and admits both discrete and continuous latent variables. Un fortunately, this naive estimate is impractically high-variance, leading to slow training and pooi performance (Paisley et al., 2012). 
The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Bengio et al., 2013; Mnih & Rezende, 2016):\nq(z|x,s) [logp(x|z,0)] =Eq(z|x,b) [logp(x|z,0) - B(x) 1 [logp(x|z,0) - B(x)] N z~q(z|x,$)\nwhere B(x) is a (possibly input-dependent) baseline, which does not affect the gradient, but ca reduce the variance of a stochastic estimate of the expectation.\nEquation 18 of REINFORCE captures much less information about p(x|z, 0) per sample than Equa- tion 3 of the variational autoencoder, which actively makes use of the gradient. In particular, the change of p(x|z, 0) in some direction d can only affect the REINFORCE gradient estimate if a sam- ple is taken with a component in direction d. In a D-dimensional latent space, at least D samples are\n(z|x, $) : logp(x[z, 0 xF det\nThe variable p has dimensionality equal to that of z; O is the vector of all Os; 1 is the vector of all 1s\nNote that if q(z|x, ) is factorial (i.e., the product of independent distributions in each dimension zt).. then the conditional-marginal CDFs F; are just the marginal CDFs in each direction. However, even if q(z|x, ) is not factorial, Equation 17 still holds so long as F is nevertheless defined to be the set of conditional-marginal CDFs of Equation 15..\ndifference approximation to the derivative. The autoencoding term is a function of the conditional. og-likelihood logp(x[z, 0), composed with the approximating posterior q(z[x, $), which deter mines the value of z at which p(x[z, 0) is evaluated. However, the conditional log-likelihood is never differentiated directly in REINFORCE, even in the context of the chain rule. Rather, the con ditional log-likelihood is evaluated at many different points z ~ q(z|x, ), and a weighted sum o1. these values is used to approximate the gradient, just like in the finite difference approximation..\nC AUGMENTING DISCRETE LATENT VARIABLES WITH CONTINUOUS LATENT VARIABLES\nIntuitively, variational autoencoders break the encoder13 distribution into \"packets\"' of probability of infinitessimal but equal mass, within which the value of the latent variables is approximately constant. These packets correspond to a region r; < Pi < r; + for all i in Equation 16, and the expectation is taken over these packets. There are more packets in regions of high probability, sc high-probability values are more likely to be selected. More rigorously, Fq(z|,) (S) maps intervals of high probability to larger spans of 0 p 1, so a randomly selected p ~ U [0, 1] is more likely to be mapped to a high-probability point by F-1 q(z|x.o) (p).\nIn contrast, REINFORCE (Equation 18) breaks the latent represention into segments of infinites simal but equal volume; e.g., zi z' < zi + S for all i (Williams, 1992; Mnih & Gregor, 2014; Bengio et al., 2013). The latent variables are also approximately constant within these segments. but the probability mass varies between them. Specifically, the probability mass of the segment z z' < z + o is proportional to q(z|x, $).\nOnce a segment is selected in the latent space, its location is independent of the encoder and decoder In particular, the gradient of the loss function does not depend on the gradient of the decoder with respect to position in the latent space, since this position is fixed. 
Only the probability mass assigned to the segment is relevant.\nAlthough variational autoencoders can make use of the additional gradient information from the decoder, the gradient estimate is only low-variance so long as the motion of most probability packets has a similar effect on the loss. This is likely to be the case if the packets are tightly clustered (e.g., the encoder produces a Gaussian with low variance, or the spike-and-exponential distribution of Section 2.1), or if the movements of far-separated packets have a similar effect on the total loss (e.g., the decoder is roughly linear).\n2014) or standout (Ba & Frey, 2013) regularization. Like dropout and standout, element-wise stochastic nonlinearity applied to a hidden layer. Since F-1 q(z|x,g) (p) selects a point in the probability distribution, it rarely selects an improbable point. Like standout, the distribution of the hidden layer is learned. Indeed, we recover the encoder of standout if we use the spike-and. Gaussian distribution of Section E.1 and let the standard deviation o go to zero..\n13Since the approximating posterior q(z|x, ) maps each input to a distribution over the latent space, it is sometimes called the encoder. Correspondingly, since the conditional likelihood p(x[z, 0) maps each configu ration of the latent variables to a distribution over the input space, it is called the decoder..\nrequired to capture the variation of p(x|z, 0) in all directions; fewer samples span a smaller subspace. Since the latent representation commonly consists of dozens of variables, the REINFORCE gradi. ent estimate can be much less efficient than one that makes direct use of the gradient of p(x[z, 0) Moreover, we will show in Section 5 that, when the gradient is calculated efficiently, hundreds of. latent variables can be used effectively..\nAs the parameters of the encoder are changed, the location of a packet can move, while its mass is held constant. That is, = F-! with a region of p-space is constant by definition. So long as F-1 small change in will correspond to a small change in the location of each packet. This allows us to use the gradient of the decoder to estimate the change in the loss function, since the gradient of the decoder captures the effect of small changes in the location of a selected packet in the latent space.\nwith a region of p-space is constant by definition. So long as I exists and is differentiable, a\nHowever, variational autoencoders cannot be used directly with discrete latent representations, since changing the parameters of a discrete encoder can only move probability mass between the allowed discrete values, which are far apart. If we follow a probability packet as we change the encoder parameters, it either remains in place, or jumps a large distance. As a result, the vast majority of probability packets are unaffected by small changes to the parameters of the encoder. Even if we are lucky enough to select a packet that jumps between the discrete values of the latent representation,\nthe gradient of the decoder cannot be used to accurately estimate the change in the loss functior since the gradient only captures the effect of very small movements of the probability packet\nTo use discrete latent representations in the variational autoencoder framework, we must first trans form to a continuous latent space, within which probability packets move smoothly. That is, we. must compute Equation 17 over a different distribution than the original posterior distribution. 
Sur prisingly, we need not sacrifice the original discrete latent space, with its associated approximating. posterior. Rather, we extend the encoder q(z[x, ) and the prior p(z[) with a transformation to continuous, auxiliary latent representation S, and correspondingly make the decoder a function o this new continuous representation. By extending both the encoder and the prior in the same way. we avoid affecting the remaining KL divergence in Equation 2.14.\nThe gradient is defined everywhere if we require that each point in the original latent space map to. nonzero probability over the entire auxiliary continuous space. This ensures that, if the probability of some point in the original latent space increases from zero to a nonzero value, no probability. packet needs to jump a large distance to cover the resulting new region in the auxiliary continuous. space. Moreover, it ensures that the conditional-marginal CDFs are strictly increasing as a functior. of their main argument, and thus are invertible..\nIf we ignore the cases where some discrete latent variable has probability 0 or 1, we need only. require that, for every pair of points in the original latent space, the associated regions of nonzero probability in the auxiliary continuous space overlap. This ensures that probability packets can move continuously as the parameters of the encoder, q(z[x, ), change, redistributing weight amongst. the associated regions of the auxiliary continuous space.\nD ALTERNATIVE TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS\nAs another concrete example, we consider a case where both r((i[zi = O) and r(Ci[zi = 1) are linear functions of Ci:\n2.(1-Si) if0S1 (CiZi 0) Frs|z=0)S)=2C-C 0 otherwise 2.Si,j if 0Si< 1 Fr(Si|zi=1)(C) 0, otherwise\nFq(c|x,o)C)=(1-qz=1|x, 1[x,) =2qz=1|x,9\n14Rather than extend the encoder and the prior, we cannot simply prepend the transformation to continuous. space to the decoder, since this does not change the space of the probabilty packets\nThe spike-and-exponential transformation from discrete latent variables z to continuous latent vari ables ( presented in Section 2.1 is by no means the only one possible. Here, we develop a collection of alternative transformations\nWe can calculate F-1 -' -> ( in Equation 19 to simplify notation:\nq2+2p-1)q+(1-p 2q - 1 (q-1)2+2q-1)p q 2q - 1\np = 0.8 0.8 (0)(p'x|5)b 0.6 p = 0.5 p = 0.2 0.4 0.2 0 0.2 0.4 0.6 0.8 1 q(z =1|x,$)\nFigure 7: Inverse CDF of the mixture of ramps transformation for p E {0.2, 0.5, 0.8\nIn Equation 20, F-1 is concave-up; if p > 0.5, F-1 is concave-down; if p ~ 0.5, F-1 is sigmoid. In no case is F-1 extremely flat, so it does not kill gradients. 
In contrast, the sigmoid probability of z inevitably flattens."}, {"section_index": "7", "section_name": "D.2 SPIKE-AND-SLAB", "section_text": "if C = 0 Fr(ci|z;=0)(C')=1 otherwise if 0<S1 Fr(5i|z=1)(C')= Si| otherwise\nFq(c|x,x)(C') =(1-q(z =1|x,$)) Fr(5i|z;=0)(C')+ q(z =1|x,$) Fr(Si|z;=1)S =qz=1x,)-1)+1.\n0 = 2: q -) +2-c2 0=(2q-1c2+21q-p 2(q-1)/4(1-2q+ q?)+4(2q-1) 2(2q - 1) q-1q2+2p-1q+1-p 2q - 1\nWe can also use the spike-and-slab transformation, which is consistent with sparse coding and proven in other successful generative models (Courville et al., 2011):.\nifp>1-q H 1Sx otherwise\nWe plot F ) as a function of q for various values of p in Figure 8\n0.8 (o)(9*a|1)b 0.6 p = 0.8 0.4 p = 0.5 0.2 -0.2 0 0 0.2 0.4 0.6 0.8 1 q(z =1|x,)\nFigure 8: Inverse CDF of the spike-and-slab transformation for p E {0.2, 0.5, 0.8\na OF aF 0 de de dz de OF de\nwhere z = F-1(p). Consider the case where r(Si|z; = 0) and r(Ci|zi = 1) are unimodal, but have little overlap. For instance, both distributions might be Gaussian, with means that are many standard. deviations apart. For values of (i between the two modes, F(() ~ q(zi = O|x, ), assuming. without loss of generality that the mode corresponding to z; = 0 occurs at a smaller value of (; than dq r(G) even if r(S) ~ 0. In this case, the stochastic estimates of the gradient in equation 8, which depend upon. OF-\nIt is not necessary to define the transformation from discrete to continuous latent variables in the approximating posterior, r((|z), to be independent of the input x. In the true posterior distribution,.\nWe can calculate I plicitly, using the substitution q(z = 1|x. > q to simplify notation\nIf the smoothing transformation is not chosen appropriately, the contribution of low-probability. regions to the expected gradient of the inverse CDF may be large. Using a variant of the inverse function theorem, we find:\nThese high-variance gradient estimates arise because r((i[zi = 0) and r((i[zi = 1) are too well separated, and the resulting smoothing transformation is too sharp. Such disjoint smoothing trans. formations are analogous to a sigmoid transfer function o(c . x), where is the logistic function. and c -> oo. The smoothing provided by the continuous random variables ( is only effective. if there is a region of meaningful overlap between r((|z = O) and r((|z = 1). In particular, z. r(Si|Zi = 0) +r(Si|Zi = 1) > 0 for all C between the modes of r(Si|Zi = O) and r(Si|Zi = 1), so p(z) remains moderate in equation 21. In the spike-and-exponential distribution described in. Section 2.1, this overlap can be ensured by fixing or bounding ..\npC p(S,x[z) = Z p(S[z,x): p(x[z)\nThis is implausible if the number of discrete latent variables is much smaller than the entropy of the input data distribution. To address this, we can define:.\nq(S,z[x,$) = q(z|x,$)q(S[z,x, p(S,z[0) =p(C[z):p(z0)\nThis leads to an evidence lower bound that resembles that of Equation 2, but adds an extra term:\nThe extension to hierarchical approximating posteriors proceeds as in sections 3 and 4\nIf both q(S|z, x, ) and p(C[z) are Gaussian, then their KL divergence has a simple closed form. which is computationally efficient if the covariance matrices are diagonal. However, while the gra. dients of this KL divergence are easy to calculate when conditioned on z, the gradients with respect. of q(z|x, ) in the new term seem to force us into a REINFORCE-like approach (c.f. 
Equation 18):\nd log q(z|x, $) dq(z|x, .KL[q(S|z,x,$)|p(S|z)] = IEq(z|x,s) KL[q(S|z,x,$)||p(S|z)] do Z (23) The reward signal is now KL [q(S[z, x, $)[[p(C[z)] rather than log p(x[z, 0), but the effect on the variance is the same, likely negating the advantages of the variational autoencoder in the rest of the loss function\ndq(z|x,$) d log q(z|x,$) KL[q(S|z,x,$)||p(S|z)] =Eq(z{x,s) KL[q(S|z,x,$)|p(S|z)] do do\nHowever, whereas REINFORCE is high-variance because it samples over the expectation, we can perform the expectation in Equation 23 analytically, without injecting any additional variance Specifically, if q(z[x, ) and q(C[z, x, $) are factorial, with q((i[zi,x, ) only dependent on Zi, then KL [q(S|z, x, )|[p(S[z)] decomposes into a sum of the KL divergences over each variable, as"}, {"section_index": "8", "section_name": "E.1 SPIKE-AND-GAUSSIAN", "section_text": "We might wish q(C|z, x, $) to be a separate Gaussian for both values of the binary zi. However, it. is difficult to invert the CDF of the resulting mixture of Gaussians. It is much easier to use a mixture of a delta spike and a Gaussian, for which the CDF can inverted piecewise:.\np(xS,0) :p(S[z,0):p(z|0) q(S|z,x,$) : q(z|x,$) : log q(S[z,x,$): q(z|x,$) =Eq(S|z,x,s).q(z|x,s) [logp(x|C,0)]- KL[q(z|x,$)||p(z|0)] >`q(z|x,$):KL[q(S|z,x,$)||p(S|z)]\nof the form IE KL[qi|pi] d log qi due to the identity explained in Equation 27. We then use the\n0. if Gi < 0 q(Si|Zi=O,x,$)=0(Si) Fq(Si|zi=0,x,s)(Si) = H(Ci) otherwise Uai(x. q(Si|Zi=1,x,$) =N(q,i(x,$),3,i(x,$)) Fq(Si|zi=1,x,$) + erf X.\nwhere q(x, ) and q(x, $) are functions of x and $. We use the substitutions q(z; = 1|x, $) > q. q,i(x, $) -> q,i, and q,i(x, ) -> q,i in the sequel to simplify notation. The prior distribution p is similarly parameterized.\nWe can now find the CDF for q(C|x, ) as a function of q(z\nq(C|x (S) =(1-qiHC qi erf 2\nSince z, = 0 makes no contribution to the CDF until C. = 0. the value of o at which (, = 0 is.\nif pi< Pi step Hq,i +20g.i erf- step+(1-qi 0 .i : erf-1 otherwise Uq,i+\nGradients are always evaluated for fixed choices of p, and gradients are never taken with respect to p. As a result, expectations with respect to p are invariant to permutations of p. Furthermore,\nif pi<1-q 2Pi .i : erf-1 otherwise qi\nAll parameters of the multivariate Gaussians should be trainable functions of x, and independent q. The new term in Equation 22 is:.\nTo train q(z; = 1|x, ), we thus need to backpropagate KL [q(S|zj x, )[p((z; = 1)l into i\naKL[q][p] Pq,i - Pp,i 2 P,i aKL[q]|p] 1 Oq,i d0q,i qi P,i\nstep qi Pq,i 1 + erf 2\n2Pi 2(p -1) +1 qi qi\n>`q(z|x,$) :KL[q(S|z,x,$)|p(S|z)] = >q(Zi =1|x,)KL[q(Si|zi=1,x,$)|p(Si|zi=1)] Z, i +(1-q(zi =1|x,$)):KL[q(Si[zi =0,x,$)[[p(Si[zi = O)\nq(z|x,$) : KL[q(S|z,x,$)|Ip(S|z)] = >`q(zi=1|x,$)KL[q(Si|zi=1,x,$)||p(Si|zi=1) Z,i 1x 6D): KL.[a(C:z 0.x.0)lp(C(z; = 0)\nIf zi = O, then q(Si[zi = O,x,$) =p(Si[zi= 0,0), and KL[q(Si[zi= 0,x,$)|[p(Si[zi = O,0)]= 0 as in Section 2. The KL divergence between two multivariate Gaussians with diagonal covariance\n1 KL[ql|p]= log Op,i - log 0q,i 2\nKL[q]lp] = q(zi = 1|x, Ua Up. ) KL[q]lp] = q(zi = 1|x, a. 9 Z\nFor p, it is not useful to make the mean values of ( adjustable for each value of z, since this is redundant with the parameterization of the decoder. 
With fixed means, we could still parameterize the variance, but to maintain correspondence with the standard VAE, we choose the variance to be One.\nThe KL term of the ELBO (Equation 2) is not significantly affected by the introduction of additiona continuous latent variables (, so long as we use the same expansion r(C[z) for both the approximat ing posterior and the prior:\nH1<j<kr(Sj|zj) :q(zj|Si<j,x) KL[q|p] = II r(Sj|zj) q(zj|Si<j,x) . log p(z):I1<j<kr(Sj|zj) 1<j<k I1<j<k 9(zj|Si<j,x) II r(Sj|zj) q(zj|Si<j,x) . log p(z) 1<j<k\nThe gradient of Equation 24 with respect to the parameters 0 of the prior, p(z[0), can be es timated stochastically using samples from the approximating posterior, q(S, z[x, ), and the true prior, p(z|0). When the prior is an RBM, defined by Equation 6, we find:\ndEp(z,0) OEp(z,0) KL [q]p] = - )q(S,z|x,$ p(z|0) de de de S,z dEp(z,0) aEp(z,0) + Ep(z\\0) Z1x Si<k,x,$ de de\nThe final expectation with respect to q(zk|Si<k, x, $) can be performed analytically; all other expec- tations require samples from the approximating posterior. Similarly, for the prior, we must sample. from the RBM, although Rao-Blackwellization can be used to marginalize half of the units.\nIn contrast, the gradient of the KL term with respect to the parameters of the approximating posterior is severely complicated by a nonfactorial approximating posterior. We break KL [q[p] into two terms, the negative entropy z,c q log q, and the cross-entropy z,c q log p, and compute their gradients separately.\nWe can regroup the negative entropy term of the KL divergence so as to use the reparameterization trick to backpropagate through <a q(zjSi<i, x):\n-H(q) = II r(Sj|zj) q(zj|Si<j,x) . log I q(zj|Si<j,x) 1<j<k 1<j<k Ir(Sj|zj) q(zj|Si< IIr(Si|zi) q(zi|Sn<i,x) :log q(Zj|Si<j,x i<j q(zj|Si<j,x):logq(zj|Si<j,x sq(Si<j,Zi<j|x,$ Epi<j )q(zj|Pi<j,x) logq(zj|Pi<j; Zj\nwhere indices i and j denote hierarchical groups of variables. The probability q(zj|Pi<j,x) i evaluated analytically, whereas all variables zi<j and (i< are implicitly sampled stochasticall. via pi<j:\nWe wish to take the gradient of - H(q) in Equation 26. Using the identity:\nd ogq=c>` dc\nH(q) = Epi<j i|Pi<j,x\nMoreover, we can eliminate any log-partition function in log q(z|Pi<j, x) by an argument analogou: to Equation 27.15 By repeating this argument one more time, we can break $q(zj|Pi<j, x) into its factorial components.16 If z; E {0,1}, then using Equation 10, gradient of the negative entropy reduces to:\nq lEj Zl dg. =1- dd\ndqT(zj=1) a H(q) =Epi<j do\nH(q) => I r(Sj|zj) q(zj|Si<j,x) . log q(zj|Si<j,x 1<j<k 1<j<k Ir(Sj|zj):q(zj|Si<j,x log q(zj|Si<j,x L Ir(Si|zi):q(zi|Sn<i,x) : log q(zj|Si<j,x) Z i<j Eq(Si<j,zi<j|x,$) `q(zj|Si<j,x) l0gq(zj|Si<z Epi<j q(Zj|Pi<j,x) : logq(Zj|Pi<j,x (2\na lEj dg =1-qz=1)\nwhere t and z, correspond to single variables within the hierarchical groups denoted by j. In Ten- sorFlow, it might be simpler to write:.\na d qlog p= q : log Zp = E do da\nqEp=-Ep[z.Wz+bz]\nz=bEqz=1\nThe approximating posterior q is continuous, with nonzero derivative, so the reparameterization trick can be applied to backpropagate gradients:\nEo. 
=\n.W.z= Wi: Zi i,j\ndepends upon variables that are not usually in the same hierarchical level, so in general\nwhere without loss of generality z; is in an earlier hierarchical layer than z; however, it is not clear how to take the derivative of zi, since it is a discontinuous function of Pk<i..\nThe naive approach would be to take the gradient of the expectation using the gradient of log probabilities over all variables:\na d E[WijZiZj] = E g9 Og qk|l<k 9211 k 1 1,q2|1 i Z qk|l<k ) k\nFor since those terms can be pulled out of the expectation over qk, and we can apply Equation 27. However, for terms involving zi>k or zj>k that occur hierarchically after k, the expected value of zi or z; depends upon the chosen value of zk.\nThe gradient calculation in Equation 28 is an instance of the REINFORCE algorithm (Equation 18 Moreover, the variance of the estimate is proportional to the number of terms (to the extent that th terms are independent). The number of terms contributing to each gradient cally with number of units in the RBM. We can introduce a baseline, as in NVIL (Mnih & Gregor 2014):\nW ijZiZj - og q\nEp[WijZiZj] =Wij.Epk<i Zi:Eek>i[zj]]\na E|Wj g dd log qk|l<k (2 2 do 1 92 qk|l<k do\nJqk|l<k terms are independent). The number of terms contributing to each gradien grows quadrati\nWhen using the spike-and-exponential, spike-and-slab, or spike-and-Gaussian distributions of sec- tions 2.1 D.2, and E.1, we can decompose the gradient of E [W;zz;] using the chain rule. Previ- ously, we have considered z to be a function of p and . We can instead formulate z as a function oi q(z = 1) and p, where q(z = 1) is itself a function of p and . Specifically,\nif p<1-qiz=1=qiz=0 i(qi(Z=1),Pi Otherwise.\nUsing the chain rule, dzi dqj(zj=1) . where dzi holds all qkj fixed, even j dqj(zj=1) dqj (zj=1) though they all depend on the common variables p and parameters $. We use the chain rule to differentiate with respect to q(z = 1) since it allows us to pull part of the integral over p inside the derivative with respect to $. In the sequel, we sometimes write q in place of q(z = 1) to minimize notational clutter.\nExpanding the desired gradient using the reparameterization trick and the chain rule, we find\nWe can change the order of integration (via the expectation) and differentiation since\nWi;ZiZj| Wij< 00\nfor all p and bounded (Cheng, 2006). Although z(q, p) is a step function, and its derivative is. a delta function, the integral (corresponding to the expectation with respect to p) of its derivative is finite. Rather than dealing with generalized functions directly, we apply the definition of the. derivative, and push through the matching integral to recover a finite quantity..\nFor simplicity, we pull the sum over k out of the expectation in Equation 30, and consider eacl summand independently. From Equation 29, we see that z; is only a function of qi, so all terms. in the sum over k in Equation 30 vanish except k = i and k = j. Without loss of generality, we. consider the term k = i; the term k = j is symmetric. Applying the definition of the gradient to one. 
of the summands, and then analytically taking the expectation with respect to pi, we obtain:.\nOWijZi(q,p)zj(q,p) dqi(Zi=1) E dqi(Zi =1) do Wijzi(q+8qi,P)zj(q+8qi,P)-Wijzi(q,p)zj(q,p)dqi(zi=1) lim dqi do 8qi(zi=1)->0 Wij1zj(qp)-Wi0zj(qp) dqi(zi=1) lim dqi 8qi(zi=1)->0 dqi Pi=qi(zi=0) dqi(Zi = 1) Pi=qi(Zi=0)\nThe third line follows from Equation 29, since z(q + 8qi, p) differs from z(q, p) only in the region of p of size Sqi around qi(zi = O) = 1 qi(Zi = 1) where zi(q + dqi,P) zi(q,p). Regardless of the choice of p, zj(q + dqi,P) = zj(q, P).\nThe third line fixes p to the transition between z = 0 and z; = 1 at q(zi = O). Since zi = 0 implies (; = 0,17 and ( is a continuous function of p, the third line implies that C, = 0. At the same n dqi is not affected by time, since qi is only a function of Pk<i from earlier in the hierarchy, the term the choice of p,.18 As noted above, due to the chain rule, the perturbation dq, has no effect on other\n17we chose the conditional distribution r(S|zi = 0) to be a delta spike at zero. 181n contrast, zi is a function of Pi..\nEq[WijZiZj] Eo[Wi;ZiZj OWijZiZj dqi dqk(Zk =1) k\nOWijzi(q,p)zj(,p) dqi(Zi=1) E p dqi(Zi = 1) do Wijzi(q+8qi,P)z(q+8qi,P)-Wizi(qp)z(q,p) dqi(Zi=1) lim 8qi(zi=1)->0 Oqi do Wi1z(qp)-Wi0zj(qp) dqi(Zi=1) lim dqi 8qi(zi=1)->0 dqi do Pi=qi(z=0) dqi(Zi = 1) do Pi=qi(zi=0)\nSince p; is fixed such that (; = 0, all units further down the hierarchy must be sampled consis tent with this restriction. A sample from p has (i = O if z = 0, which occurs with probability qi(zi = 0).19 We can compute the gradient with a stochastic approximation by multiplying each. sample by 1 - zi, so that terms with (i 0 are ignored,20 and scaling up the gradient when z = 0. by\nG MOTIVATION FOR BUILDING APPROXIMATING POSTERIOR AND PRIOR HIERARCHIES IN THE SAME ORDER\nIntuition regarding the difficulty of approximating the posterior distribution over the latent variables. given the data can be developed by considering sparse coding, an approach that uses a basis set of. spatially locallized filters (Olshausen & Field, 1996). The basis set is overcomplete, and there are. generally many basis elements similar to any selected basis element. However, the sparsity prior. pushes the posterior distribution to use only one amongst each set of similar basis elements.\nAs a result, there is a large set of sparse representations of roughly equivalent quality for any single input. Each basis element individually can be replaced with a similar basis element. However having changed one basis element, the optimal choice for the adjacent elements also changes so the filters mesh properly, avoiding redundancy or gaps. The true posterior is thus highly correlated. since even after conditioning on the input, the probability of a given basis element depends strongly on the selection of the adjacent basis elements."}, {"section_index": "9", "section_name": "H ARCHITECTURE", "section_text": "The stochastic approximation to the ELBO is computed via one pass down the approximating pos terior (Figure 4a), sampling from each continuous latent layer ( and 3m>1 in turn; and another pass down the prior (Figure 4b), conditioned on the sample from the approximating posterior. In the pass down the prior, signals do not flow from layer to layer through the entire model. Rather, the input to each layer is determined by the approximating posterior of the previous layers, as follows from Equation 14. 
G MOTIVATION FOR BUILDING APPROXIMATING POSTERIOR AND PRIOR HIERARCHIES IN THE SAME ORDER

Intuition regarding the difficulty of approximating the posterior distribution over the latent variables given the data can be developed by considering sparse coding, an approach that uses a basis set of spatially localized filters (Olshausen & Field, 1996). The basis set is overcomplete, and there are generally many basis elements similar to any selected basis element. However, the sparsity prior pushes the posterior distribution to use only one amongst each set of similar basis elements.

As a result, there is a large set of sparse representations of roughly equivalent quality for any single input. Each basis element individually can be replaced with a similar basis element. However, having changed one basis element, the optimal choice for the adjacent elements also changes so the filters mesh properly, avoiding redundancy or gaps. The true posterior is thus highly correlated, since even after conditioning on the input, the probability of a given basis element depends strongly on the selection of the adjacent basis elements.

These equivalent representations can easily be disambiguated by the successive layers of the representation. In the simplest case, the previous layer could directly specify which correlated set of basis elements to use amongst the applicable sets. We can therefore achieve greater efficiency by inferring the approximating posterior over the top-most latent layer first. Only then do we compute the conditional approximating posteriors of lower layers given a sample from the approximating posterior of the higher layers, breaking the symmetry between representations of similar quality.

"}, {"section_index": "9", "section_name": "H ARCHITECTURE", "section_text": "The stochastic approximation to the ELBO is computed via one pass down the approximating posterior (Figure 4a), sampling from each continuous latent layer ζ and ʒ_{m≥1} in turn, and another pass down the prior (Figure 4b), conditioned on the sample from the approximating posterior. In the pass down the prior, signals do not flow from layer to layer through the entire model. Rather, the input to each layer is determined by the approximating posterior of the previous layers, as follows from Equation 14. The gradient is computed by backpropagating the reconstruction log-likelihood, and the KL divergence between the approximating posterior and true prior at each layer, through this differentiable structure.

All hyperparameters were tuned via manual experimentation. Except in Figure 6, RBMs have 128 units (64 units per side, with full bipartite connections between the two sides), with 4 layers of hierarchy in the approximating posterior. We use 100 iterations of block Gibbs sampling, with 20 persistent chains per element of the minibatch, to sample from the prior in the stochastic approximation to Equation 11.
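The persistent block-Gibbs sampler referred to above can be sketched as follows. This is an illustrative NumPy implementation under standard RBM conventions, not the authors' released code; `W`, `b_left`, and `b_right` stand for the bipartite RBM parameters, and `chains_left` is the persistent chain state (20 chains in the paper's setup).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def block_gibbs(W, b_left, b_right, chains_left, rng, n_steps=100):
    for _ in range(n_steps):
        # With full bipartite connectivity, each side is conditionally
        # independent given the other, so it can be resampled in one block.
        p_right = sigmoid(chains_left @ W + b_right)
        chains_right = (rng.random(p_right.shape) < p_right).astype(np.float64)
        p_left = sigmoid(chains_right @ W.T + b_left)
        chains_left = (rng.random(p_left.shape) < p_left).astype(np.float64)
    return chains_left, chains_right

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, size=(64, 64))  # 64 units per side -> 128 total
chains = (rng.random((20, 64)) < 0.5).astype(np.float64)
left, right = block_gibbs(W, np.zeros(64), np.zeros(64), chains, rng)
```

Because the chains are persistent, each new gradient step continues Gibbs sampling from where the previous step left off, rather than re-burning-in from scratch.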
When using the hierarchy of continuous latent variables described in Section 4, discrete VAEs overfit if any component of the prior is overparameterized, as shown in Figure 9a. In contrast, a larger and more powerful approximating posterior generally did not reduce performance within the range examined, as in Figure 9b. In response, we manually tuned the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. We list the selected values in Table 2. All neural networks implementing components of the approximating posterior contain two hidden layers of 2000 units.

Figure 9: Log likelihood on statically binarized MNIST versus the number of hidden units per neural network layer, in the prior (a) and approximating posterior (b). The number of deterministic hidden layers in the networks parameterizing the prior/approximating posterior is 1 (blue), 2 (red), or 3 (green) in (a/b), respectively. The number of deterministic hidden layers in the final network parameterizing p(x|ʒ) is 0 (solid) or 1 (dashed). All models use only 10 layers of continuous latent variables, with no parameter sharing.

Table 2: Architectural hyperparameters used for each dataset. Successive columns list the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. Smaller datasets require more regularization, and achieve optimal performance with a smaller prior.

    Dataset              Num layers   Vars per layer   Hids per prior layer   Param sharing
    MNIST (dyn bin)      18           64               1000                   none
    MNIST (static bin)   20           256              2000                   2 groups
    Omniglot             16           256              800                    2 groups
    Caltech-101 Sil      12           80               100                    complete

On statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, we further regularize using recurrent parameter sharing. In the simplest case, each p(ʒ_m|ʒ_{l<m}, θ) and p(x|ʒ, θ) is a function of Σ_{l<m} ʒ_l, rather than a function of the concatenation [ʒ_0, ʒ_1, ..., ʒ_{m−1}]. Moreover, all p(ʒ_{m≥1}|ʒ_{l<m}, θ) share parameters. The RBM layer ʒ_0 is rendered compatible with this parameterization by using a trainable linear transformation of ζ, M · ζ, where the number of rows in M matches the number of continuous latent variables per layer, so that M · ζ can be summed with the other ʒ_m.

On datasets of intermediate size, a degree of recurrent parameter sharing somewhere between full independence and complete sharing is beneficial. We define the n-group architecture by dividing the continuous latent layers ʒ_{m≥1} into n equally sized groups of consecutive layers. Each such group is independently subject to recurrent parameter sharing analogous to the complete sharing architecture, and the RBM layer ʒ_0 is independently parameterized.

We use the spike-and-exponential transformation described in Section 2.1. The exponent is a trainable parameter, but it is bounded above by a value that increases linearly with the number of training epochs. We use warm-up with strength 20 for 5 epochs, and additional warm-up of strength 2 on the RBM alone for 20 epochs (Raiko et al., 2007; Bowman et al., 2016; Sønderby et al., 2016).

When p(x|ʒ) is linear, all nonlinear transformations are part of the prior over the latent variables. In contrast, it is also possible to define the prior distribution over the continuous latent variables to be a simple factorial distribution, and push the nonlinearity into the final decoder p(x|ʒ), as in traditional VAEs. The former case can be reduced to something analogous to the latter case using the reparameterization trick.

However, a VAE with a completely independent prior does not regularize the nonlinearity of the prior; whereas a hierarchical prior requires that the nonlinearity of the prior (via its effect on the true posterior) be well-represented by the approximating posterior. Viewed another way, a completely independent prior requires the model to consist of many independent sources of variance, so the data manifold must be fully unfolded into an isotropic ball. A hierarchical prior allows the data manifold to remain curled within a higher-dimensional ambient space, with the approximating posterior merely tracking its contortions. A higher-dimensional ambient space makes sense when modeling multiple classes of objects. For instance, the parameters characterizing limb positions and orientations for people have no analog for houses.

"}, {"section_index": "10", "section_name": "H.1 ESTIMATING THE LOG PARTITION FUNCTION", "section_text": "We estimate the log-likelihood by subtracting an estimate of the log partition function of the RBM (log Z_p from Equation 6) from an importance-weighted computation analogous to that of Burda et al. (2016). For this purpose, we estimate the log partition function using bridge sampling, a variant of Bennett's acceptance ratio method (Bennett, 1976; Shirts & Chodera, 2008), which produces unbiased estimates of the partition function. Interpolating distributions were of the form p^β(x), and sampled with a parallel tempering routine (Swendsen & Wang, 1986), as sketched below. The set of smoothing parameters β in [0, 1] was chosen to approximately equalize replica exchange rates at 0.5. This standard criterion simultaneously keeps mixing times small and allows for robust inference.
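The replica-exchange step at the heart of a parallel tempering routine can be sketched as below. This is generic pseudocode for the standard Metropolis swap criterion between adjacent inverse temperatures, offered only as an illustration of the kind of sampler described above, not as the authors' implementation; `energies[i]` is E(x_i) for the chain at inverse temperature `betas[i]`.

```python
import numpy as np

def swap_replicas(states, energies, betas, rng):
    # Propose swapping each pair of adjacent replicas; accept with the
    # standard Metropolis probability min(1, exp((b_i - b_j)(E_i - E_j))).
    for i in range(len(betas) - 1):
        log_accept = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if np.log(rng.random()) < log_accept:
            states[i], states[i + 1] = states[i + 1], states[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
    return states, energies
```

Tuning the β schedule so that these swaps are accepted roughly half the time is the "exchange rate of 0.5" criterion mentioned above.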
We make a conservative estimate for burn-in (0.5 of total run time), and choose the total length of each run and the number of repeated experiments to achieve sufficient statistical accuracy in the log partition function. In Figure 10, we plot the distribution of independent estimates of the log-partition function for a single model of each dataset. These estimates differ by no more than about 0.1, indicating that the estimate of the log-likelihood should be accurate to within about 0.05 nats.

Figure 10: Distribution of estimates of the log-partition function, using Bennett's acceptance ratio method with parallel tempering, for a single model trained on dynamically binarized MNIST (a), statically binarized MNIST (b), Omniglot (c), and Caltech-101 Silhouettes (d).

Rather than traditional batch normalization (Ioffe & Szegedy, 2015), we base our batch normalization on the L1 norm. Specifically, we use:

$$y = x - \bar{x}, \qquad x_{bn} = s \odot \frac{y}{\overline{|y|} + \epsilon} + o,$$

where x is a minibatch of scalar values, x̄ denotes the mean of x, |y|‾ denotes the mean of |y|, ⊙ indicates element-wise multiplication, ε is a small positive constant, s is a learned scale, and o is a learned offset. For the approximating posterior over the RBM units, we bound 2 ≤ s ≤ 3, and −s ≤ o ≤ s. This helps ensure that all units are both active and inactive in each minibatch, and thus that all units are used.
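A minimal sketch of this L1-based batch normalization follows, assuming (as in the equation reconstructed above) that the normalizer is the mean absolute deviation of the minibatch; `s` and `o` are the learned per-feature scale and offset, and the bounds on them would be enforced elsewhere in training.

```python
import numpy as np

def l1_batch_norm(x, s, o, eps=1e-4):
    # x: minibatch of shape (batch, features).
    y = x - x.mean(axis=0)                 # center each feature
    scale = np.abs(y).mean(axis=0) + eps   # L1 statistic instead of std dev
    return s * (y / scale) + o
```

Compared with the usual variance-based statistic, the L1 statistic is less sensitive to outliers within a minibatch and avoids the square root.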
"}, {"section_index": "11", "section_name": "I COMPARISON MODELS", "section_text": "In Table 1, we compare the performance of the discrete variational autoencoder to a selection of recent, competitive models. For dynamically binarized MNIST, we compare to deep belief networks (DBN; Hinton et al., 2006), reporting the results of Murray & Salakhutdinov (2009); importance weighted autoencoders (IWAE; Burda et al., 2016); and ladder variational autoencoders (Ladder VAE; Sønderby et al., 2016).

For the static MNIST binarization of Salakhutdinov & Murray (2008), we compare to Hamiltonian variational inference (HVI; Salimans et al., 2015); the deep recurrent attentive writer (DRAW; Gregor et al., 2015); the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015); deep latent Gaussian models with normalizing flows (Normalizing flows; Rezende & Mohamed, 2015); and the variational Gaussian process (Tran et al., 2016).

On Omniglot, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016); the ladder variational autoencoder (Ladder VAE; Sønderby et al., 2016); and the restricted Boltzmann machine (RBM; Smolensky, 1986) and deep belief network (DBN; Hinton et al., 2006), reporting the results of Burda et al. (2015).

Finally, for Caltech-101 Silhouettes, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016), reporting the results of Li & Turner (2016); reweighted wake-sleep with a deep sigmoid belief network (RWS SBN; Bornschein & Bengio, 2015); the restricted Boltzmann machine (RBM; Smolensky, 1986), reporting the results of Cho et al. (2013); and the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015).

Figure 11: Evolution of samples from a discrete VAE trained on statically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the digit ID remains constant demonstrate that the RBM has distinct modes, each of which corresponds to a single digit ID, despite being trained in a wholly unsupervised manner.

"}, {"section_index": "12", "section_name": "SUPPLEMENTARY RESULTS", "section_text": "To highlight the contribution of the various components of our generative model, we investigate performance on a selection of simplified models.[21] First, we remove the continuous latent layers. The resulting prior, depicted in Figure 1b, consists of the bipartite Boltzmann machine (RBM), the smoothing variables ζ, and a factorial Bernoulli distribution over the observed variables x defined via a deep neural network with a logistic final layer. This probabilistic model achieves a log-likelihood of −86.9 with 128 RBM units and −85.2 with 200 RBM units.

Next, we further restrict the neural network defining the distribution over the observed variables x given the smoothing variables ζ to consist of a linear transformation followed by a pointwise logistic nonlinearity, analogous to a sigmoid belief network (SBN; Spiegelhalter & Lauritzen, 1990; Neal, 1992). This decreases the log-likelihood to −92.7 with 128 RBM units and −88.8 with 200 RBM units.

We then remove the lateral connections in the RBM, reducing it to a set of independent binary random variables. The resulting network is a noisy sigmoid belief network. That is, samples are produced by drawing samples from the independent binary random variables, multiplying by an independent noise source, and then sampling from the observed variables as in a standard SBN. With this SBN-like architecture, the discrete variational autoencoder achieves a log-likelihood of −97.0 with 200 binary latent variables.

Finally, we replace the hierarchical approximating posterior of Figure 3a with the factorial approximating posterior of Figure 1a. This simplification of the approximating posterior, in addition to the prior, reduces the log-likelihood to −102.9 with 200 binary latent variables.

[21] In all cases, we report the negative log-likelihood on statically binarized MNIST (Salakhutdinov & Murray, 2008), estimated with 10^4 importance weighted samples (Burda et al., 2016).

Figure 12: Evolution of samples from a discrete VAE trained on Omniglot, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM.

Figure 13: Evolution of samples from a discrete VAE trained on Caltech-101 Silhouettes, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the silhouette shape remains similar demonstrate that the RBM has distinct modes, each of which corresponds to a single silhouette type,
despite being trained in a wholly unsupervised manner."}]
ByG8A7cee
[{"section_index": "0", "section_name": "REFERENCE-AWARE LANGUAGE MODELS", "section_text": "Zichao Yang1 *, Phil Blunsom2,3, Chris Dyer1,2, and Wang Ling2 1Carnegie Mellon University, 2DeepMind, and 3University of Oxford"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Referring expressions (REs) in natural language are noun phrases (proper nouns, common nouns, and pronouns) that identify objects, entities, and events in an environment. REs occur frequently and they play a key role in communicating information efficiently. While REs are common, previ- ous works neglect to model REs explicitly, either treating REs as ordinary words in the model or replacing them with special tokens. Here we propose a language modeling framework that explicitly incorporates reference decisions.\nn Figure we list examples of REs in the context of the three tasks that we consider in this work Firstly, reference to a database is crucial in many applications. One example is in task orientec lialogue where access to a database is necessary to answer a user's query (Young et al., 2013; t al., 2016; Vinyals & Le, 2015; Wen et a, 2015; Sordoni et al., 2015; Serban et al., 2016; Borde & Weston, 2016; Williams & Zweig, 2016; Shang et al., 2015; Wen et al., 2016). Here we conside he domain of restaurant recommendation where a system refers to restaurants (name) and thei ttributes (address, phone number etc) in its responses. When the system says \"the nirala is a ice restaurant\"', it refers to the restaurant name the nirala from the database. Secondly, many nodels need to refer to a list of items (Kiddon et al.l, 2016; Wen et al., 2015). In the task of recip generation from a list of ingredients (Kiddon et al, 2016), the generation of the recipe will frequentl eference these items. As shown in Figure , in the recipe \"Blend soy mi1k and...\", soy mi1l efers to the ingredient summaries. Finally, we address references within a document (Mikolov et al. 2010; Ji et al, 2015; Wang & Cho, 2015), as the generation of words will ofter refer to previously generated words. For instance the same entity will often be referred to throughout a document. Ir Figure , the entity you refers to I in a previous utterance.\nIn this work we develop a language model that has a specific module for generating REs. A series of latent decisions (should I generate a RE? If yes, which entity in the context should I refer to? How should the RE be rendered?) augment a traditional recurrent neural network language model and the two components are combined as a mixture model. Selecting an entity in context is similar to familiar models of attention (Bahdanau et al, 2014), but rather than being a deterministic function that reweights representations of elements in the context, it is treated as a distribution over contextual elements which are stochastically selected and then copied or, if the task warrants it, transformed (e.g., a pronoun rather than a proper name is produced as output). Two variants are possible for updating the RNN state: one that only looks at the generated output form; and a second that looks at values of the latent variables. The former admits trivial unsupervised learning, latent decisions are conditionally independent of each other given observed context, whereas the latter enables more"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a general class of language models that treat reference as an explicit stochastic latent variable. 
This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show our model variants outperform models based on deterministic attention.

Figure 1: Reference-aware language models. The figure pairs each task with an example reference: dialogue ("M: the nirala is a nice restaurant" → the nirala in the table), recipe generation ("Blend soy milk and ..." → 1 cup plain soy milk in the ingredients), and coreference ("um and [I]1 think ... [you]1" → an earlier mention).

- We propose a general framework to model reference in language and instantiate it in the context of dialogue modeling, recipe generation and coreference based language models.
- We build three data sets to test our models. There are no existing data sets that satisfy our needs, so we build these data sets ourselves. These data sets are either built on top of an existing data set (we constructed the table for the DSTC2 data set for dialogue evaluation), crawled from websites (we crawled all recipes on www.allrecipes.com), or annotated with NLP tools (we annotate coreference on the Gigaword corpus for our evaluation).
- We perform comprehensive evaluation of our models on the three data sets and verify our models perform better than strong baselines.

We denote each document as a series of tokens x_1, ..., x_L, where L is the number of tokens in the document. Our goal is to maximize p(x_i | c_i), the probability of each word in the document given its previous context c_i = x_1, ..., x_{i−1}. In contrast to traditional neural language models, we introduce a variable z_i at each position, which controls the decision on which source x_i is generated from. The token conditional probability is then obtained by:

$$p(x_i \mid c_i) = \sum_{z_i} p(x_i \mid z_i, c_i)\, p(z_i \mid c_i). \qquad (1)$$

In dialogue modeling and recipe generation, z_i will simply take on values in {0, 1}, where z_i = 1 denotes that x_i is generated as a reference, either to a database entry or an item in a list. However, z_i can also be defined as a distribution over previous entities, allowing the model to predict x_i conditioned on its previous mention word. This will be the focus of the coreference language model. When z_i is not observed (which it generally will not be), we will train our model to maximize the marginal probability in Eq. 1 directly.
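A minimal numerical sketch of the binary-switch case of Equation 1 follows. `p_switch` stands for p(z_i = 1 | c_i), and the two conditionals are placeholders for the task-specific reference and vocabulary distributions described in the sections below; all names are illustrative, not from the paper.

```python
import numpy as np

def token_log_prob(p_vocab_token, p_copy_token, p_switch):
    # Marginalize the latent switch z_i in {0, 1}: the token probability is a
    # mixture of the vocabulary model and the reference (copy) model.
    marginal = (1.0 - p_switch) * p_vocab_token + p_switch * p_copy_token
    return np.log(marginal)  # training maximizes this marginal likelihood
```

Because the switch is marginalized rather than observed, the gradient flows through both mixture components, and the model learns when copying is worthwhile.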
"}, {"section_index": "3", "section_name": "2.1 DIALOGUE MODEL WITH DATABASE SUPPORT", "section_text": "Table 1: Example dialogue. M stands for Machine and U stands for User.

    M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type. How may I help you?
    U: i want a moderately priced restaurant in the north town
    M: the nirala is a nice restaurant in the north of town in the moderate price range
    U: what is the address
    M: Sure, the nirala is on 7 Milton Road Chesterton

We can observe from this example that users get recommendations of restaurants based on queries that specify the area, price and food type of the restaurant. We can support the system's decisions by incorporating a mechanism that allows the model to query the database to find restaurants that satisfy the users' queries. Thus, we crawled TripAdvisor for restaurants in the Cambridge area, where the dialogue dataset was collected. Then, we remove restaurants that do not appear in the data set and create a database with 109 entries with restaurants and their attributes (e.g. food type). A sample of our database is shown in Table 2. We can observe that each restaurant contains 6 attributes that are generally referred to in the dialogue dataset. As such, if the user requests a restaurant that serves "indian" food, we wish to train a model that can search for entries whose "food" column contains "indian". Now, we describe how we deploy a model that fulfills these requirements.

Table 2: Fragment of database for dialogue system.

    NAME         PRICE RANGE   FOOD       AREA     ADDRESS                     POST CODE     PHONE
    ali baba     moderate      lebanese   centre   59 Hills Road City Centre   CB 2, 1 NT    01462 432565
    the nirala   moderate      indian     north    7 Milton Road Chesterton    CB 4, 1 UY    01223 360966

We build a model based on the hierarchical RNN model described in Serban et al. (2016), as in dialogues the generation of the response is not only dependent on the previous sentence, but on all sentences leading to the response. We assume that a dialogue alternates between a machine and a user. An illustration of the model is shown in Figure 2.

Figure 2: Hierarchical RNN Seq2Seq model.

Consider a dialogue with T turns. The utterances from the user are denoted as X = {x_i}_{i=1}^T, where x_i is the i-th utterance, whereas the utterances from the machine are denoted as Y = {y_i}_{i=1}^T. Let x_{i,v} denote the v-th token in the i-th utterance from the user, and y_{i,v} the v-th token in the i-th utterance from the machine. Finally, |x_i| and |y_i| denote the number of tokens in the user and machine utterances, respectively. The dialogue sequence starts with a machine utterance, {y_1, x_1, y_2, x_2, ..., y_T, x_T}. We would like to model the utterances from the machine:

$$p(y_1, y_2, \dots, y_T \mid x_1, x_2, \dots, x_T) = \prod_i p(y_i \mid y_{<i}, x_{<i}) = \prod_{i,v} p(y_{i,v} \mid y_{i,<v}, y_{<i}, x_{<i}), \qquad (2)$$

where y_{<i} denotes all the utterances before i, and y_{i,<v} denotes the first v − 1 tokens in the i-th utterance of the machine. A neural model is employed to predict p(y_{i,v} | y_{i,<v}, y_{<i}, x_{<i}), which operates as follows:
For simplicity, we shall refer to this as u,, which can be seen as the hierarchical encoding of the previous i utterances.\nSeq2Seq Decoder: As for decoding, in order to generate each utterance yi, we can feed u-1 into the decoder LSTM as the initial state s;,o = ui1 and decode each token in yi. Thus, we can express the decoder as:\n- LSTMp(WEYi,v-1, Si,v- - softmax(Wsy .W\nwhere the desired probability p(yi. i, x<i) is expressed by py\nAttention based decoder: We can also incorporate the attention mechanism in our hierarchical model. An attention model builds a representation d by averaging over a set of vectors p. We define. the attention function as a = ATTN(p, q), where a is a probability distribution over the set of vectors. p, conditioned on any input representation q. A full description of this operation is described in (Bah. danau et al., 2014). Thus, for each generated token yi,v, we compute the attentions ai,v, conditioned. : 01 K = [h-y | be the number of tokens in previous turn. Thus, we obtain the attention probabilities. over all previous tokens ai, as ATTN(st,, h,1). Then, the weighted sum is computed over these from previous turn. The resulting vector d,., is used to obtain the probability of the following word.\nU - LSTMp(|WEYi,v-1, di i.v = xy li-1,k kEK = softmax(W[s.,, d\nFigure 3: Table based decoder\nWe now extend the attention model in order to allow the attention to be computed over a table allowing the model to condition the generation on a database..\nWe denote a table with R rows and C columns as {fr.c}, r E [1, R], c E [1, C], where fr.c is the cell in row r and column c. The attribute of each column is denoted as sc, where c is the c-th attribute fr.c and sc are one-hot vector.\nTable Encoding: To encode the table, we build an attribute vector gc for each column. For each cell fr.c of the table, we concatenate it with the corresponding attribute gc and then feed it through a one-layer MLP as follows: gc = WEsc and then er,c = tanh(W[We fr,c, gc]).\nTable Attention: The diagram for table attention is shown in Figure 3a. The attention over cells. in the table is conditioned on a given vector q, similarly to the attention model for sequences ATTN(p, q). However, rather than a sequence p, we now operate over a table f. Our attentior. model computes a attribute attention followed by row attention of the table. We first use the atten tion mechanism on the attributes to find out which attribute the user asks about. Suppose a usei says cheap, then we should focus on the price attribute. After we get the attention probabil ity pa = ATTN({ gc}, q), over the attribute, we calculate the weighted representation for each rov er = c perc conditioned on pa. Then e, has the price information of each row. We further use. attention mechanism on er and get the probability p* = ATTN({er}, q) over the rows. Then restau rants with cheap price will be picked. Then, using the probabilities p, we compute the weightec average over the all rows ec = r per,c, which is used in the decoder. The detailed process is:.\nPr = ATTN({er},q) ec = Vc\nThis is embedded in the decoder by replacing the conditioned state q as the current decoder state at each step. The detailed diagram of table attention is shown in Figure 3a!."}, {"section_index": "4", "section_name": "2.1.3 INCORPORATING TABLE POINTER NETWORKS", "section_text": "We now describe the mechanism used to refer to specific database entries during decoding. 
"}, {"section_index": "4", "section_name": "2.1.3 INCORPORATING TABLE POINTER NETWORKS", "section_text": "We now describe the mechanism used to refer to specific database entries during decoding. At each timestep, the model needs to decide whether to generate the next token from an entry of the database or from the word softmax. This is performed as follows.

Pointer Switch: We use z_{i,v} ∈ {0, 1} to denote the decision of whether to copy one cell from the table. We compute this probability as follows:

$$p(z_{i,v} \mid s_{i,v}) = \mathrm{sigmoid}(W [s_{i,v}, d_{i,v}]).$$

Objective: As we treat z_{i,v} as a latent variable, we wish to maximize the marginal probability of the sequence y over all possible values of z_{i,v}. Thus, our objective function is defined as:

$$p(y_{i,v} \mid s_{i,v}) = p^{vocab}(y_{i,v} \mid s_{i,v})\, p(0 \mid s_{i,v}) + p^{copy}(y_{i,v} \mid s_{i,v})\, p(1 \mid s_{i,v}).$$

Thus, if z_{i,v} = 1, the next token y_{i,v} is generated from the database, whereas if z_{i,v} = 0, the following token is generated from the softmax. We shall now describe how we generate tokens from the database.

Table Pointer: If z_{i,v} = 1, the token is generated from the table. The detailed process of calculating the probability distribution over the table is shown in Figure 3b. This is similar to the attention mechanism, except that we perform a column attention to compute the probabilities of copying from each column after Equation 5. More formally:

$$p^c = \mathrm{ATTN}(\{\tilde{e}_c\}, q), \qquad p^{copy} = p^r \otimes p^c,$$

where p^c is a probability distribution over columns, whereas p^r is a probability distribution over rows. In order to compute a matrix with the probability of copying each cell, we simply compute the outer product p^{copy} = p^r ⊗ p^c.

The model can also be trained in a fully supervised fashion, if z_{i,v} is observed. In such cases, we simply maximize the likelihood of p(z_{i,v} | s_{i,v}) based on the observations, rather than using the marginal probability over z_{i,v}.

Figure 4: Recipe pointer.

Next, we consider the task of recipe generation conditioned on an ingredient list. In this task, we must generate the recipe from a list of ingredients. Table 3 illustrates the ingredient list and recipe for Spinach and Banana Power Smoothie. We can see that the ingredients soy milk, spinach leaves, and banana occur in the recipe.

Let the ingredients of a recipe be X = {x_i}_{i=1}^T, where each ingredient contains L_i tokens, x_i = {x_{i,j}}_{j=1}^{L_i}. We encode each ingredient with an LSTM:

$$h_{i,j} = \mathrm{LSTM}_E(W_E\, x_{i,j},\, h_{i,j-1}) \quad \forall i.$$

Then, we sum the resulting state of each ingredient to obtain the starting LSTM state of the decoder. Once again we use an attention based decoder:

$$s_v = \mathrm{LSTM}_D([W_E\, y_{v-1}, d_{v-1}],\, s_{v-1}), \qquad p^{copy} = \mathrm{ATTN}(\{h_{i,j}\}, s_v), \qquad d_v = \sum_{i,j} p^{copy}_{i,j}\, h_{i,j},$$
$$p(z_v \mid s_v) = \mathrm{sigmoid}(W [s_v, d_v]), \qquad p^{vocab}(y_v \mid s_v) = \mathrm{softmax}(W [s_v, d_v]).$$

Similar to the previous task, the decision to copy from the ingredient list or to generate a new word from the softmax is performed using a switch, denoted as p(z_v | s_v). The probability of copying each of the words in the ingredients is given by p^{copy}, and we train with a marginal likelihood analogous to the one employed in the previous task.
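The switch-gated mixture shared by the table and recipe models can be sketched as follows; this is an illustrative NumPy sketch, not the authors' code, and the example values below are placeholders.

```python
import numpy as np

def mix_copy_and_vocab(p_vocab, p_copy, p_switch):
    # p_vocab: vocabulary distribution; p_copy: copy distribution over table
    # cells or ingredient tokens. The switch gates the two sources, so the
    # concatenated result is one distribution over the joint outcome space.
    return np.concatenate([(1.0 - p_switch) * p_vocab, p_switch * p_copy])

# For the table model, p_copy is the outer product of row and column attentions.
p_row, p_col = np.array([0.7, 0.3]), np.array([0.9, 0.1])
p_copy_table = np.outer(p_row, p_col).ravel()   # p^copy = p^r (outer) p^c
mixed = mix_copy_and_vocab(np.full(10, 0.1), p_copy_table, p_switch=0.4)
```

When z is observed, one instead trains the switch and the chosen component directly; when z is latent, the marginal above is maximized.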
Finally, we build a language model that uses coreference links to point to previous words. Before generating a word, we first make the decision on whether it is an entity mention. If so, we decide which entity this mention belongs to, and then we generate the word based on that entity. Denote the document as X = {x_i}_{i=1}^L and the entities as E = {e_i}_{i=1}^N, where each entity e_i has M_i mentions. The hidden state of each token is h_i = LSTM(W_E x_i, h_{i−1}). We use a set h^e = {h^e_0, h^e_1, ..., h^e_M} to keep track of the entity states, where h^e_j is the state of entity j.

Figure 5: Coreference based language model; example taken from Wiseman et al. (2016): "um and [I]1 think that is what's — Go ahead [Linda]2. Well, and thanks goes to [you]1 and to [the media]3 to help [us]4... So [our]4 hat is off to all of [you]5..."

Word generation: At each time step, before generating the next word, we predict whether the word is an entity mention:

$$p(x_i \mid c_i) = \begin{cases} p(x_i \mid h_{i-1})\, p(z_i \mid h_{i-1}, h^e), & \text{if } z_i = 0 \\ p(x_i \mid v_i, h_{i-1}, h^e)\, p^{coref}(v_i \mid h_{i-1}, h^e)\, p(z_i \mid h_{i-1}, h^e), & \text{if } z_i = 1, \end{cases}$$

where z_i denotes whether the next word is an entity and, if yes, v_i denotes which entity the next word corefers to. If the next word is an entity mention, then p(x_i | v_i, h_{i−1}, h^e) = softmax(W_1 tanh(W_2 [h^e_{v_i}, h_{i−1}])); otherwise p(x_i | h_{i−1}) = softmax(W_1 h_{i−1}).

Entity state update: We update the entity state h^e at each time step. In the beginning, h^e = {h^e_0}, where h^e_0 denotes the state of a virtual empty entity and is a learnable variable. If z_i = 1 and v_i = 0, this indicates that the next word is a new entity mention, so in the next step we append h_i to h^e, i.e. h^e = {h^e, h_i}; if v_i > 0, then we update the corresponding entity state with the new hidden state, h^e[v_i] = h_i. Another way to update the entity state would be to use an LSTM to encode the mention states and produce the new entity state; here we use the latest entity mention state as the new entity state for simplicity. The detailed update process is shown in Figure 5.
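The entity-state bookkeeping just described can be sketched in a few lines. This mirrors the update process in Figure 5 and is illustrative, not released code; `h_entities[0]` plays the role of the virtual empty entity state.

```python
def update_entity_state(h_entities, h_token, z, v):
    # z: whether the current token is an entity mention; v: which entity it
    # corefers to (0 means a brand-new entity, per the convention above).
    if z == 1 and v == 0:
        h_entities.append(h_token)   # new entity: push its state
    elif z == 1:
        h_entities[v] = h_token      # existing entity: refresh its state
    return h_entities                # z == 0: non-mention, nothing changes
```

Keeping only the latest mention state per entity is the simplest choice; an LSTM over mention states, as noted above, is a natural extension.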
Dialogue: We use the DSTC2 data set. We only extracted the dialogue transcripts from the data set. There are about 3,200 dialogues in total. Since this is a small data set, we use 5-fold cross validation and report the average result over the 5 partitions. There may be multiple tokens in each table cell; for example, in Table 2 the name, address, post code and phone number have multiple tokens, and we replace them with one special token. For the name, address, post code and phone number of the j-th row, we replace the tokens in each cell with _NAME_j, _ADDR_j, _POSTCODE_j, _PHONE_j. If a table cell is empty, we replace it with an empty token _EMPTY. We do a string match in the transcript and replace the corresponding tokens in transcripts from the table with the special tokens. Each dialogue on average has 8 turns (16 sentences). We use a vocabulary size of 900, including about 400 table tokens and 500 words.

Recipes: We crawl all recipes from www.allrecipes.com. There are about 31,000 recipes in total, and every recipe has an ingredient list and a corresponding recipe. We exclude the recipes that have fewer than 10 tokens or more than 500 tokens; those recipes are about 0.1% of the data set. On average each recipe has 118 tokens and 9 ingredients. We randomly shuffle the whole data set and take 80% for training and 10% each for validation and test. We use a vocabulary size of 10,000 in the model.

Coref LM: We use the Xinhua News data set from Gigaword Fifth Edition and sample 100,000 documents from it with lengths in the range from 100 to 500. Each document has on average 234 tokens, so there are 23 million tokens in total. We use a tool to annotate all the entity mentions and use the annotation in training. We take 80% as training and 10% each as validation and test. We ignore the entities that have only one mention, and for the mentions that have multiple tokens, we take the token that is most frequent across all the mentions of that entity. After the preprocessing, tokens that are entity mentions make up about 10% of all tokens. We use a vocabulary size of 50,000 in the model.

"}, {"section_index": "5", "section_name": "4.1 MODEL TRAINING AND EVALUATION", "section_text": "We train all models with simple stochastic gradient descent with gradient clipping. We use a one-layer LSTM for all RNN components. Hyper-parameters are selected using grid search based on the validation set. We use dropout after the input embedding and LSTM output. The learning rate is selected from [0.1, 0.2, 0.5, 1], the maximum gradient norm is selected from [1, 2, 5, 10], and the dropout ratio is selected from [0.2, 0.3, 0.5]. The batch size and LSTM dimension size are slightly different for different tasks so as to make the model fit into memory. The number of epochs to train is different for each task, and we drop the learning rate after reaching a given number of epochs. We report the per-word perplexity for all tasks; specifically, we report the perplexity of all words, of words that can be generated from a reference, and of non-reference words. For recipe generation, we also generate the recipe using a beam size of 10 and evaluate the generated recipe with BLEU.

Table 4: Dialogue perplexity results. (All means all tokens, table means tokens from the table, table oov denotes table tokens that do not appear in the training set, word means non-table tokens.) "sentence attn" denotes that we use an attention mechanism over tokens from the past turn. Table pointer and table latent differ in that for table pointer we provide a supervised signal on when to generate a table token, while in table latent it is a latent decision.

    model            all          table        table oov           word
    seq2seq          1.35±0.01    4.98±0.38    1.99E7±7.75E6       1.23±0.01
    table attn       1.37±0.01    5.09±0.64    7.91E7±1.39E8       1.24±0.01
    table pointer    1.33±0.01    3.99±0.36    1360±2600           1.23±0.01
    table latent     1.36±0.01    4.99±0.20    3.78E7±6.08E7       1.24±0.01
    + sentence attn
    seq2seq          1.28±0.01    3.31±0.21    2.83E9±4.69E9       1.19±0.01
    table attn       1.28±0.01    3.17±0.21    1.67E7±9.5E6        1.20±0.01
    table pointer    1.27±0.01    2.99±0.19    82.86±110           1.20±0.01
    table latent     1.28±0.01    3.26±0.25    1.27E7±1.41E7       1.20±0.01

Table 5: Recipe results, evaluated in perplexity and BLEU score. "ing" denotes tokens from the recipe that appear in the ingredients.

                       val                           test
    model    ppl all  ppl ing  ppl word  BLEU     ppl all  ppl ing  ppl word  BLEU
    seq2seq  5.60     11.26    5.00      14.07    5.52     11.26    4.91      14.39
    attn     5.25     6.86     5.03      14.84    5.19     6.92     4.95      15.15
    pointer  5.15     5.86     5.04      15.06    5.11     6.04     4.98      15.29
    latent   5.02     5.10     5.01      14.87    4.97     5.19     4.94      15.41

Table 6: Coreference based LM. "pointer + init" means we initialize the model with the LM weights.

                         val                      test
    model            all     entity  word     all     entity  word
    lm               33.08   44.52   32.04    33.08   43.86   32.10
    pointer          32.57   32.07   32.62    32.62   32.07   32.69
    pointer + init   30.43   28.56   30.63    30.42   28.56   30.66

Recently, there has been great progress in modeling language with neural networks, including language modeling (Mikolov et al., 2010; Jozefowicz et al., 2016), machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), question answering (Hermann et al., 2015), etc. Based on the success of seq2seq models, neural networks have been applied to modeling chit-chat dialogue (Li et al., 2016; Vinyals & Le, 2015; Sordoni et al., 2015; Serban et al., 2016; Shang et al., 2015) and task oriented dialogue (Wen et al., 2015; Bordes & Weston, 2016; Williams & Zweig, 2016; Wen et al., 2016).
Most of the chit-chat neural dialogue models simply apply seq2seq models. For task oriented dialogues, most approaches embed the seq2seq model in traditional dialogue systems, in which the table query part is not differentiable, while our model queries the database directly. Recipe generation was proposed in Kiddon et al. (2016). Their model extends previous work on attention models (Allamanis et al., 2016) to checklists, whereas our work models explicit references to those checklists. Context dependent language models (Mikolov et al., 2010; Ji et al., 2015; Wang & Cho, 2015) have been proposed to capture long term dependencies in text. There is also a large body of work on coreference resolution (Haghighi & Klein, 2010; Wiseman et al., 2016); to the best of our knowledge, we are the first to combine coreference with language modeling. Much effort has been invested in embedding a copying mechanism in neural models (Gulcehre et al., 2016; Gu et al., 2016; Ling et al., 2016). In general, a gating mechanism is employed to combine the softmax over observed words and a pointer network (Vinyals et al., 2015). These gates can be trained either by marginalizing over both outcomes, or using heuristics (e.g. copy low frequency words). Our models are similar to models proposed in Ahn et al. (2016) and Merity et al. (2016), where the generation of each word can be conditioned on a particular entry in knowledge lists and previous words. In our work, we describe a model with broader applications, allowing us to condition on databases, lists and dynamic lists.

The results for dialogue, recipe generation and the coreference language model are shown in Tables 4, 5, and 6, respectively. We can see from Table 4 that models conditioned on the table perform better at predicting table tokens in general. Table pointer has the lowest perplexity for tokens in the table. Since table tokens appear rarely in the dialogue, the overall perplexities do not differ much and the non-table token perplexities are similar. With the attention mechanism over the table, the perplexity of table tokens improves over the basic seq2seq model, but not as much as directly pointing to cells in the table. As expected, using sentence attention improves significantly over models without sentence attention. Surprisingly, table latent performs much worse than table pointer. We also measure the perplexity of table tokens that appear only in the test set. For models other than table pointer, because these tokens never appear in the training set, the perplexity is quite high, while table pointer can predict these tokens much more accurately. The recipe results in Table 5 in general follow the findings from the dialogue task, but the latent model performs better than the pointer model, since tokens in the recipe that match the ingredients do not necessarily come from the ingredients; imposing a supervised signal then gives wrong information to the model and hence makes the result worse, whereas with a latent decision the model learns when to copy and when to generate from the vocabulary. The coref LM results are shown in Table 6. We find that the coref based LM performs much better on the entity perplexities, but is a little worse on non-entity words. We found this to be an optimization problem; perhaps the model is stuck in a local optimum.
So we initialize the pointer model with the weights learned from the LM; the pointer model then performs better than the LM in both entity perplexity and non-entity word perplexity.

We introduce reference-aware language models which explicitly model the decision of where to generate the token from at each step. Our model can also learn this decision by treating it as a latent variable. We demonstrate on three tasks — table based dialogue modeling, recipe generation and coreference based LM — that our model performs better than attention based models, which do not incorporate this decision explicitly. There are several directions to explore further based on our framework. The current evaluation method is based on perplexity and BLEU. In task oriented dialogues, we can also try human evaluation to see if the model can answer users' queries accurately. It is also interesting to use reinforcement learning to learn the actions at each step.

"}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Sungjin Ahn, Heeyoul Choi, Tanel Parnamaa, and Yoshua Bengio. A neural knowledge language model. CoRR, abs/1608.00318, 2016.

Antoine Bordes and Jason Weston. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.

Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. Document context language models. arXiv preprint arXiv:1511.03962, 2015.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393, 2016. URL http://arxiv.org/abs/1603.06393.

Aria Haghighi and Dan Klein. Coreference resolution in a modular, entity-centered model. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 385-393. Association for Computational Linguistics, 2010.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. In Proc. ACL, 2016.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. In Proc. NAACL, 2015.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proc. NIPS, 2015.

Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016.

Jason D Williams and Geoffrey Zweig. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016.

Sam Wiseman, Alexander M Rush, and Stuart M Shieber. Learning global features for coreference resolution. arXiv preprint arXiv:1604.03035, 2016.
Steve Young, Milica Gasic, Blaise Thomson, and Jason D Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179, 2013.

"}, {"section_index": "7", "section_name": "A VISUALIZATION", "section_text": "(a) Dialogue script:

    M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type. How may I help you?
    U: an expensive restaurant that serves tuscan food
    M: Sorry there is no tuscan restaurant in the expensive price range
    U: french food
    M: cote is a great restaurant serving french food and it is in the expensive price range
    U: may i have the phone number
    M: The phone number of cote is 01223 311053.
    U: thank you good bye

(b) Attention heat map: cote is a great restaurant serving french food and it is in the expensive price range.

(c) Attention heat map: The phone number of cote is 01223 311053.

(a) Dialogue script:

    M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type. How may I help you?
    U: cheap restaurant
    M: What kind of food would you like?
    U: in the west part of town
    M: la margherita is a nice restaurant in the west of town in the cheap price range
    U: address
    M: Sure, la margherita is on 15 Magdalene Street City Centre
    U: thank you
    M: la margherita is a nice restaurant in the west of town in the cheap price range
    U: good bye

(b) Attention heat map: la margherita is a nice restaurant in the west of town in the cheap price range.

(c) Attention heat map: Sure, la margherita is on 15 Magdalene Street City Centre.

Figure 6: Recipe heat map example 1. The ingredient tokens appear on the left while the recipe tokens appear on the top. The first row is p(z_v|s_v).

Figure 7: Recipe heat map example 2."}]
HyxQzBceg
[{"section_index": "0", "section_name": "DEEP VARIATIONAL INFORMATION BOTTLENECK", "section_text": "Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy\nalemi,iansf,jvdillon,kpmurphy}@google.com\nWe present a variational approximation to the information bottleneck of Tishby et al.(1999). This variational approach allows us to parameterize the informa- tion bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method \"Deep Variational Information Bottleneck\"', or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Given the data processing inequality, and the invariance of the mutual information to reparameteriza tions, if this was our only objective we could always ensure a maximally informative representation by taking the identity encoding of our data (Z = X), but this is not a useful representation of our data. Instead we would like to find the best representation we can obtain subject to a constraint on its complexity. A natural and useful constraint to apply is on the mutual information between our encoding and the original data, I(X, Z) < Ic, where Ic is the information constraint. This suggests the objective:\nmax1(Z, Y;0) s.t. 1(X,Z;0) Ic 0\nR1B(0) = 1(Z,Y;0) - 31(Z,X;0)\nThe IB principle is appealing, since it defines what we mean by a good representation, in terms of the fundamental tradeoff between having a concise representation and one with good predictive power (Tishby & Zaslavsky2015a). The main drawback of the IB principle is that computing mutual. information is, in general, computationally challenging. There are two notable exceptions: the first"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We adopt an information theoretic view of deep networks. We regard the internal representation of some intermediate layer as a stochastic encoding Z of the input source X, defined by a parametric encoder p(z|x; 0)|1[Our goal is to learn an encoding that is maximally informative about our target Y, measured by the mutual information between our encoding and the target I(Z, Y; 0), where\np(z,y|O) I(Z,Y;0) = dx dy p(z,y\\0) log p(z|0)p(y|0)\nHere our goal is to learn an encoding Z that is maximally expressive about Y while being maximally compressive about X, where > 0 controls the tradeoff|3|This approach is known as the informa- tion bottleneck (IB), and was first proposed in|Tishby et al.[(1999). Intuitively, the first term in R1 B encourages Z to be predictive of Y; the second term encourages Z to \"forget\" X. Essentially it forces Z to act like a minimal sufficient statistic of X for predicting Y.\n1 In this work, X, Y, Z are random variables, x, y, z and x, y, z are instances of random variables, and F(.; 0) and f(.; 0) are functionals or functions parameterized by 0. 2 Note that in the present discussion, Y is the ground truth label which is independent of our parameters so p(y|0) = p(y). 3 Note that, in our notation, large results in a highly compressed representation. In some works, the IB principle is formulated as the minimization of I(Z, X) - I(Z, Y), in which case large corresponds to high\nis when X, Y and Z are all discrete, as in Tishby et al.(1999); this can be used to cluster discret data, such as words. 
The second case is when X, Y and Z are all jointly Gaussian (Chechik et al 2005). However, these assumptions both severely constrain the class of learnable models.\nIn this paper, we propose to use variational inference to construct a lower bound on the IB objective. in Equation[3] We call the resulting method VIB (variational information bottleneck). By using th reparameterization trick (Kingma & Welling2014), we can use Monte Carlo sampling to get ar unbiased estimate of the gradient, and hence we can optimize the objective using stochastic gradien descent. This allows us to use deep neural networks to parameterize our distributions, and thus tc. handle high-dimensional, continuous data, such as images, avoiding the previous restrictions to the. discrete or Gaussian cases.\nWe also show, by a series of experiments, that stochastic neural networks, fit using our VIB method are robust to overfitting, since VIB finds a representation Z which ignores as many details of the input X as possible. In addition, they are more robust to adversarial inputs than deterministic models which are fit using (penalized) maximum likelihood estimation. Intuitively this is because each input image gets mapped to a distribution rather than a unique Z, so it is more difficult to pass small idiosyncratic perturbations through the latent bottleneck.\nThe idea of using information theoretic objectives for deep neural networks was pointed out ir Tishby & Zaslavsky(2015b). However, they did not include any experimental results, since theii. approach for optimizing the IB objective relied on the iterative Blahut Arimoto algorithm, which is. infeasible to apply to deep neural networks..\nVariational inference is a natural way to approximate the problem. Variational bounds on mutual. information have previously been explored in Agakov(2004), though not in conjunction with the. information bottleneck objective.Mohamed & Rezende (2015) also explore variational bounds on mutual information, and apply them to deep neural networks, but in the context of reinforcement. learning. We recently discovered Chalk et al.[(2016), who independently developed the same varia-. tional lower bound on the IB objective as us. However, they apply it to sparse coding problems, and. use the kernel trick to achieve nonlinear mappings, whereas we apply it to deep neural networks. which are computationally more efficient. In addition, we are able to handle large datasets by using. stochastic gradient descent, whereas they use batch variational EM..\nN 1 [H(p(y|yn),p(y|xn)) -BH(p(y|xn)] N n=1\nwhere H(p,q) = -yp(y) logq(y) is the cross entropy, H(p) = H(p,p) is the entropy, p(y[yn) = dyn (y) is a one-hot encoding of the label yn, and N is the number of training exam- ples. (Note that setting = 0 corresponds to the usual maximum likelihood estimate.) In (Pereyra et al.]2016) they show that CP performs better than the simpler technique of label smoothing, in which we replace the zeros in the one-hot encoding of the labels by e > 0, and then renormalize so that the distribution still sums to one. We will compare our VIB method to both the confidence penalty method and label smoothing in Section4.1\nIn the unsupervised learning literature, our work is closely related to the work in|Kingma & Welling. (2014) on variational autoencoders. In fact, their method is a special case of an unsupervised version of the VIB, but with the parameter fixed at 1.0, as we explain in AppendixB] The VAE objective. 
but with different values of , was also explored inHiggins et al.(2016), but from a different. perspective.\nThe method of|Wang et al.(2016b) proposes a latent variable generative model of both x and y;. their variational lower bound is closely related to ours, with the following differences. First, we do\nIn the supervised learning literature, our work is related to the recently proposed confidence penalty. (entropy regularization) method of (Pereyra et al.]2016). In this work, they fit a deterministic. network by optimizing an objective that combines the usual cross entropy loss with an extra term which penalizes models for having low entropy predictive distributions. In more detail, their cost. function has the form\nnot have a likelihood term for x, since we are in the discriminative setting. Second, they fix = 1 since they do not consider compression.\nFinally, the variational fair autoencoder of Louizos et al.[(2016) shares with our paper the idea of ignoring parts of the input. However, in their approach, the user must specify which aspects of the input (the so-called \"sensitive'' parts) to ignore, whereas in our method, we can discover irrelevant parts of the input automatically."}, {"section_index": "3", "section_name": "3 METHOD", "section_text": "Following standard practice in the IB literature, we assume that the joint distribution p(X, Y, Z factors as follows:\np(X,Y,Z) = p(Z|X,Y)p(YX)p(X) =p(Z|X)p(Y]X)p(X\nRecall that the IB objective has the form I(Z, Y) - 3I(Z, X). We will examine each of these expressions in turn. Let us start with I(Z, Y). Writing it out in full, this becomes\nU. I(Z,Y dy dz p(y, z) log dy dz p(y, z) log\nyx)pzx)px lx n dx p(y\\x)p(x[z) py|z p(z\nKL[p(Y|Z),q(Y|Z)] 0 = dy p(y|z) logp(y|z) > dy p(y|z) log q(y|z) z\nq(y\\z) I(Z,Y) dy dz p(y, z) log. dy dz p(y, z) log q(y[z) dy p(y) log p(y dy dz p(y, z) log q(y|z) + H(Y) .\nNotice that the entropy of our labels H(Y) is independent of our optimization procedure and so car be ignored.\nI(Z,Y) dx dy dz p(x)p(y|x)p(z|x) log q(y|z)\nThis only requires samples from both our joint data distribution as well as samples from our stochas.. tic encoder, while it requires we have access to a tractable variational approximation in q(y[z)\np(z|x I(Z,X) dz dx p(x, z) log. dz dx p(x, z) log p(z|x) dz p(z) logp(z) pz\ne., we assume p(Z|X, Y) = p(Z|X), corresponding to the Markov chain Y > X > Z. This estriction means that our representation Z cannot depend directly on the labels Y. (This opens he door to unsupervised representation learning, which we will discuss in Appendix [B]) Besides he structure in the joint data distribution p(X, Y), the only content at this point is our model for he stochastic encoder p(Z|X), all other distributions are fully determined by these and the Markov hain constraint.\nSince this is intractable in our case, let q(y[z) be a variational approximation to p(y[z). This is our decoder, which we will take to be another neural network with its own set of parameters. Using the fact that the Kullback Leibler divergence is always positive, we have.\nFocusing on the first term in Equation11 we can rewrite p(y, z) as p(y, z) = J dx p(x, y, z) = f dx p(x)p(y|x)p(z|x) (leveraging our Markov assumption), which gives us a new lower bound on the first term of our objective:\nCombining both of these bounds we have that\nSuppose we use an encoder of the form p(z[x) = N(z|f(x), f(x)), where fe is an MLP which outputs both the K-dimensional mean of z as well as the K K covariance matrix . 
Then we can use the reparameterization trick (Kingma & Welling2014) to write p(z|x)dz = p(e)de, where z = f(x, e) is a deterministic function of x and the Gaussian random variable e. This formulation has the important advantage that the noise term is independent of the parameters of the model, so it is easy to take gradients.\nAs in Kingma & Welling(2014), this formulation allows us to directly backpropagate through a single sample of our stochastic code and ensure that our gradient is an unbiased estimate of the true expected gradient4\nIn this section, we present various experimental results, comparing the behavior of standard deter ministic networks to stochastic neural networks trained by optimizing the VIB objective"}, {"section_index": "4", "section_name": "4.1 BEHAVIOR ON MNIST", "section_text": "We start with experiments on unmodified MNIST (i.e. no data augmentation). In order to pick a model with some \"headroom' to improve, we decided to use the same architecture as in the (Pereyra et al.[2016) paper, namely an MLP with fully connected layers of the form 784 - 1024 - 1024 - 10, and ReLu activations. (Since we are not exploiting spatial information, this correpsonds to the \"permutation invariant' version of MNIST.) The performance of this baseline is 1.38% error. (Pereyra et al.[2016) were able to improve this to 1.17% using their regularization technique. We were able to improve this to 1.13% using our technique, as we explain below.\nIn our method, the stochastic encoder has the form p(z[x) = N(z|f(x), f(x)), where fe is ai MLP of the form 784 - 1024 - 1024 - 2K, where K is the size of the bottleneck. The first K outputs from fe encode , the remaining K outputs encode (after a softplus transform).\nIn general, while it is fully defined, computing the marginal distribution of Z, p(z) : I dx p(z|x)p(x), might be difficult. So let r(z) be a variational approximation to this marginal Since KL[p(Z),r(Z)] 0 => f dzp(z) logp(z) f dz p(z) logr(z), we have the following upper bound:\nI(Z,X) dx dz p(x)p(z|x) log\nI(Z,Y) -I(Z,X) dx dy dz p(x)p(y|x)p(z|x) log q(y|z dx dz p(x)p(z|x) log\nN 1 L ~ dz p(z|xn) logq(yn[z) - p(z[xn) log N n=1\nAssuming our choice of p(z[x) and r(z) allows computation of an analytic Kullback-Leibler di-. vergence, we can put everything together to get the following objective function, which we try to minimize:\nN JI B Ee~p(e) [-logq(yn|f(xn,E))]+ BKL[p(Z|xn),r(Z)] N n=1\n4 Even if our choice of encoding distribution and variational prior do not admit an analytic KL, we could similarly reparameterize through a sample of the divergence (Kingma & Welling) 2014 Blundel1 et a1.12015)\nThe decoder is a simple logistic regression model of the form q(y[z) = S(y|fd(z)), where S(a) latent code to the logits of the C = 10 classes. (In later sections, we consider more comple decoders, but here we wanted to show the benefits of VIB in a simple setting.)\nFinally, we treat r(z) as a fixed K-dimensional spherical Gaussian, r(z) = N(z|0, I)\nWe compare our method to the baseline MLP. We calso consider the following deterministic limit of our model, when = 0. In this case, we obtain the following objective function:"}, {"section_index": "5", "section_name": "4.1.1 HIGHER DIMENSIONAL EMBEDDING", "section_text": "To demonstrate that our VIB method can achieve competitive classification results, we comparec against a deterministic MLP trained with various forms of regularization. We use a K = 256. dimensional bottleneck and a diagonal Gaussian for p(z|x). 
The networks were trained using Ten. sorFlow for 200 epochs using the Adam optimizer (Kingma & Ba]2015) with a learning rate o1 0.0001. Full hyperparameter details can be found in Appendix|A.\nThe results are shown in Table[1 we see that we can slightly outperform other forms of regulariza tion that have been proposed in the literature while using the same network for each. Of course, the performance varies depending on . These results are not state of the art, nor is our main focus of our work to suggest that VIB is the best regularization method by itself, which would require much more experimentation. However, using the same architecture for each experiment and comparing to VIB as the only source of regularization suggests VIB works as a decent regularizer in and of itself. Figure[1(a) plots the train and test error vs , averaged over 5 trials (with error bars) for the case where we use a single Monte Carlo sample of z when predicting, and also for the case where we average over 12 posterior samples (i.e., we use p(y|x) = s=1 q(y|z) for z ~ p(z|x), where S = 12). In our own investigations, a dozen samples seemed to be sufficient to capture any additional benefit the stochastic evaluations had to offer in this experimen[5\nWe see several interesting properties in Figure[1(a). First, we notice that the error rate shoots up once rises above the critical value of ~ 10-2. This corresponds to a setting where the mutual information between X and Z is less than log,(10) bits, so the model can no longer represent the fact that there are 10 different classes. Second, we notice that, for small values of , the test error\n5 A dozen samples wasn't chosen for any particular reason, except the old addage that a dozen samples are sufficient, as mirrored in David MacKay's book (MacKay. 2003). They proved sufficient in this case..\nModel error Baseline 1.38% Dropout 1.34% Dropout (Pereyra et al.]2016) 1.40% Confidence Penalty 1.36% Confidence Penalty (Pereyra et al. 12016 1.17% Label Smoothing 1.40% Label Smoothing (Pereyra et al. 2016 1.23% VIB (B = 10 1.13%\nTable 1: Test set misclassification rate on permutation-invariant MNIST using K = 256. We com- pare our method (VIB) to an equivalent deterministic model using various forms of regularization. The discrepancy between our results for confidence penalty and label smoothing and the numbers reported in (Pereyra et al.|2016) are due to slightly different hyperparameters.\nN 1 J1B0 = - Ez~N(ft(xn),f2(xn))[logS(yn|fd(z)] N n=1\nWhen -> 0, we observe the VIB optimization process tends to make f(x) -> 0, so the network. becomes nearly deterministic. In our experiments we also train an explicitly deterministic model that has the same form as the stochastic model, except that we just use z = f(x) as the hidden. encoding, and drop the Gaussian layer..\nis higher than the training error, which indicates that we are overfitting. This is because the network learns to be more deterministic, forcing ~ 0, thus reducing the benefits of regularization. Third. we notice that for intermediate values of , Monte Carlo averaging helps. Interestingly, the region with the best performance roughly corresponds to where the added benefit from stochastic averaging. goes away, suggesting an avenue by which one could try to optimize using purely statistics on the. training set without a validation set. We have not extensively studied this possibility yet..\nIn Figure[1(c), we plot the IB curve, i.e., we plot I(Z, Y) vs I(Z, X) as we vary . As we allow. 
more information from the input through to the bottleneck (by lowering 3), we increase the mutua information between our embedding and the label on the training set, but not necessarily on the tes. set, as is evident from the plot.\nIn Figure[1(d) we plot the second term in our objective, the upper bound on the mutual information between the images X and our stochastic encoding Z, which in our case is simply the relative entropy between our encoding and the fixed isotropic unit Gaussian prior. Notice that the y-axis is a logarithmic one. This demonstrates that our best results (when is between 10-3 and 10-2) occur where the mutual information between the stochastic encoding and the images is on the order of 10 to 100 bits.\n0.020 0.05 0.015 0.04 error 0.010 Crnor 0.03 test 1 shot eval test 1 shot eval 0.02 0.005 test avg eval test avg eval train 1 shot eval 0.01 - train 1 shot eval train avg eval train avg eval 0.000 0.00 10-9 10-8 10-7 10-6 10-5 10-4 10-3 10-2 10-1 100 101 10-9 10-8 10-7 10-6 10-5 10-4 10-3 10-2 10-1 100 101 (a) (b) 103 3.3 train train + test 102 test 3.2 101 3.1 'z)I 'z)I 100 3.0 10-1 2.9 10-2 2.8 10-3 101 102 103 104 10-9 10-8 10-7 10-6 10-5 10-4 10-3 10-2 10-1 100 101 I(Z,X) (c) (d)\nFigure 1: Results of VIB model on MNIST. (a) Error rate vs for K = 256 on train and test set \"1 shot eval' means a single posterior sample of z, \"avg eval' means 12 Monte Carlo samples. The. spike in the error rate at ~ 10-2 corresponds to a model that is too highly regularized. Plotted. values are the average over 5 independent training runs at each 3. Error bars show the standard deviation in the results. (b) Same as (a), but for K = 2. Performance is much worse, since we pass through a very narrow bottleneck. (c) I(Z, Y) vs I(Z, X) as we vary for K = 256. We see that. increasing I(Z, X) helps training set performance, but can result in overfitting. (d) I(Z, X) vs . for K = 256. We see that for a good value of , such as 10-2, we only need to store about 10 bits. of information about the input."}, {"section_index": "6", "section_name": "4.1.2 TWO DIMENSIONAL EMBEDDING", "section_text": "To better understand the behavior of our method, we refit our model to MNIST using a K = 2 dimensional bottleneck, but using a full covariance Gaussian. (The neural net predicts the mean anc the Cholesky decomposition of the covariance matrix.) Figure[1(b) shows that, not surprisingly, the classification performance is worse (note the different scaled axes), but the overall trends are the\nsame as in the K = 256 dimensional case. The IB curve (not shown) also has a similar shape tc before, except now the gap between training and testing is even larger..\nFigure[2|provides a visualization of what the network is doing. We plot the posteriors p(z[x) as a 2c Gaussian ellipse (representing the 95% confidence region) for 1000 images from the test set. Colors correspond to the true class labels. In the background of each plot is the entropy of the variationa. classifier q(y[z) evaluated at that point..\n15 10 10 15 15 10 10 15\n, errmc = 3.18%, (b) = 10-1, errmc = 3.44%, (c) = 10, errmc = 33.82% (a) = 10-3 3 24% 432% 62.81% err err\nFigure 2: Visualizing embeddings of 1000 test images in two dimensions. We plot the 95% confi dence interval of the Gaussian embedding p(z[x) = N(, ) as an ellipse. The images are colored according to their true class label. The background greyscale image denotes the entropy of the vari ational classifier evaluated at each two dimensional location. 
As becomes larger, we forget more about the input and the embeddings start to overlap to such a degree that the classes become indis tinguishable. We also report the test error using a single sample, err1, and using 12 Monte Carlo. samples, errmc. For \"good\" values of , a single sample suffices..\nWe see several interesting properties. First, as increases (so we pass less information through). the embedding covariances increase in relation to the distance between samples, and the classe start to overlap. Second, once passes a critical value, the encoding \"collapses\"', and essentiall all the class information is lost. Third, there is a fair amount of uncertainty in the class predition (q(y[z)) in the areas between the class embeddings. Fourth, for intermediate values of (say 10-. in Figure[2(b)), predictive performance is still good, even though there is a lot of uncertainty abou. where any individual image will map to in comparison to other images in the same class. This mean it would be difficult for an outside agent to infer which particular instance the model is representing. a property which we will explore more in the following sections..\nSince the initial work by Szegedy et al.[(2013) and Goodfellow et al.[(2014), many different adver saries have been proposed. Most attacks fall into three broad categories: optimization-based attack. (Szegedy et al.|2013 Carlini & Wagner2016f|Moosavi-Dezfooli et al.2 2016 Papernot et al.]2015 Robinson & Graham2015, Sabour et al.2016), which directly run an optimizer such as L-BFGs or ADAM (Kingma & Ba2015) on image pixels to find a minimal perturbation that changes the model's classification; single-step gradient-based attacks (Goodfellow et al.]2014) Kurakin et al. 2016, Huang et al.|2015), which choose a gradient direction of the image pixels at some loss anc then take a single step in that direction; and iterative gradient-based attacks (Kurakin et al.2016\nSzegedy et al.[(2013) was the first work to show that deep neural networks (and other kinds of classifiers) can be easily \"fooled'' into making mistakes by changing their inputs by imperceptibly small amounts. In this section, we will show how training with the VIB objective makes models significantly more robust to such adversarial examples.\nMany adversaries can be formalized as either untargeted or targeted variants. An untargeted ad. versary can be defined as A(X, M) -> X', where A(.) is the adversarial function, X is the input image, X' is the adversarial example, and M is the target model. A is considered successful if. M(X) M(X'). Recently,Moosavi-Dezfooli et al.(2016) showed how to create a \"universal'. adversarial perturbation & that can be added to any image X in order to make M(X + ) M(X). for a particular target model.\nIn this work, we focus on the Fast Gradient Sign (FGs) method proposed in Goodfellow et al.. (2014) and the L2 optimization method proposed in Carlini & Wagner(2016). FGS is a standard baseline attack that takes a single step in the gradient direction to generate the adversarial example. As originally described, FGS generates untargeted adversarial examples. On MNIST, Goodfellow. et al.[(2014) reported that FGS could generate adversarial examples that fooled a maxout network approximately 90% of the time with e = 0.25, where e is the magnitude of the perturbation at each. pixel. The L2 optimization method has been shown to generate adversarial examples with smaller. 
perturbations than any other method published to date, which were capable of fooling the target. network 100% of the time. We consider both targeted attacks and untargeted attacks for the L2. optimization method8"}, {"section_index": "7", "section_name": "4.2.2 ADVERSARIAL ROBUSTNESS", "section_text": "There are multiple definitions of adversarial robustness in the literature. The most basic, which we shall use, is accuracy on adversarially perturbed versions of the test set. called adversarial examples\nIt is also important to have a measure of the magnitude of the adversarial perturbation. Since ad versaries are defined relative to human perception, the ideal measure would explicitly correspond to how easily a human observer would notice the perturbation. In lieu of such a measure. it is commor to compute the size of the perturbation using Lo, L1, L2, and Loo norms (Szegedy et al.]2013 Goodfellow et al.[ 2014] Carlini & Wagner2016] Sabour et al.f 2016). In particular, the Lo norr measures the number of perturbed pixels, the L2 norm measures the Euclidean distance between X and X'. and the I. norm measures thelarges change to any pixel\nWe used the same model architectures as in Section4.1 using a K = 256 bottleneck. The archi tectures included a deterministic (base) model trained by MLE; a deterministic model trained wit dropout (the dropout rate was chosen on the validation set); and a stochastic model trained with VII for various values of\nFor the VIB models, we use 12 posterior samples of Z to compute the class label distribution p(y[x) This helps ensure that the adversaries can get a consistent gradient when constructing the perturba tion, and that they can get a consistent evaluation when checking if the perturbation was successful\n6 There are also other adversaries that don't fall as cleanly into those categories, such as \"fooling im- ages\" from[Nguyen et al.[(2014), which remove the human perceptual constraint, generating regular geometric patterns or noise patterns that networks confidently classify as natural images; and the idea of generating ad- versaries by stochastic search for images near the decision boundary of multiple networks from Baluja et al (2015).\nA targeted adversary can be defined as A(X, M, l) -> X', where l is an additional target label, and A is only considered successful if M(X') = l|/ Targeted attacks usually require larger magnitude perturbations, since the adversary cannot just \"nudge'' the input across the nearest decision boundary but instead must force it into a desired decision region.\nSabour et al.(2016) proposes a variant of the targeted attack, A(Xs, M, XT, k) -> X's, where Xs is the source image, XT is a target image, and k is a target layer in the model M. A produces X's by minimizing the difference in activations of M at layer k between XT and X's. The end result of this attack for a classification network is still that M(X's) yields a target label implicitly specified by XT in a successful attack.\nCarlini & Wagner (2016) shared their code with us, which allowed us to perform the attack with exactly the same parameters they used for their paper, including the maximum number of iterations and maximum C value (see their paper for details).\n(i.e., it reduces the chance that the adversary \"gets lucky' in its perturbation due to an untypical. sample). We also ran the VIB models in \"mean mode', where the os are forced to be 0. This had nc. 
noticeable impact on the results, so all reported results are for stochastic evaluation with 12 samples"}, {"section_index": "8", "section_name": "4.2.4 MNIST RESULTS AND DISCUSSION", "section_text": "We selected the first 1O zeros in the MNIST test set, and use the L2 optimization adversary of|Carlini & Wagner(2016) to try to perturb those zeros into ones!|Some sample results are shown in Figure 3 We see that the deterministic models are easily fooled by making small perturbations, but for the VIB models with reasonably large , the adversary often fails to find an attack (indicated by the green borders) within the permitted number of iterations. Furthermore, when an attack is succesful. it needs to be much larger for the VIB models. To quantify this, Figure4|plots the magnitude of the perturbation (relative to that of the deterministic and dropout models) needed for a successful attack as a function of . As increases, the Lo norm of the perturbation decreases, but both L2 and Lo. norms increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation.\nFigure5|plots the accuracy on FGS adversarial examples of the first 1000 images from the MNIST test set as a function of . Each point in the plot corresponds to 3 separate executions of three different models trained with the same value of . All models tested achieve over 98.4% accuracy on the unperturbed MNIST test set, so there is no appreciable measurement distortion due to underlying model accuracy.\nFigure|6[plots the accuracy on L2 optimization adversarial examples of the first 1000 images from the MNIST test set as a function of 3. The same sets of three models per were tested three times as with the FGS adversarial examples..\nWe generated both untargeted and targeted adversarial examples for Figure 6 For targeting, we generate a random target label different from the source label in order to avoid biasing the result with unevenly explored source/target pairs. We see that for a reasonably broad range of values the VIB models have significantly better accuracy on the adversarial examples than the deterministic models, which have an accuracy of O% (the L2 optimization attack is very effective on traditiona model architectures).\nFigure [6also reveals a surprising level of adversarial robustness even when -> 0. This can be explained by the theoretical framework of Fawzi et al.(2016). Their work proves that quadratic classifiers (e.g., x' Ax, symmetric A) have a greater capacity for adversarial robustness than linear classifiers. As we show in Appendix C] our Gaussian/softmax encoder/decoder is approximately quadratic for all oo."}, {"section_index": "9", "section_name": "4.2.5 IMAGENET RESULTS AND DISCUSSION", "section_text": "VIB improved classification accuracy and adversarial robustness for toy datasets like MNIST. We now investigate if VIB offers similar advantages for ImageNet, a more challenging natural image classification. Recall that ImageNet has approximately 1M images spanning 1K classes. We pre process images such that they are 299x299 pixels."}, {"section_index": "10", "section_name": "Architecture", "section_text": "We make use of publicly available, pretrained checkpoints10of Inception Resnet V2 (Szegedy et al.. 2016) on ImageNet (Deng et al.|2009). The checkpoint obtains 80.4% classification accuracy on the. ImageNet validation set. 
Using the checkpoint, we transformed the original training set by applying the pretrained network to each image and extracting the representation at the penultimate layer This new image representation has 1536 dimensions. The higher layers of the network continue to. classify this representation with 80.4% accuracy; conditioned on this extraction the classification\n9 We chose this pair of labels since intuitively zeros and ones are the digits that are least similar in terms of human perception, so if the adversary can change a zero into a one without much human-noticeable perturba- tion, it is unlikely that the model has learned a representation similar to what humans learn\nFigure 3: The adversary is trying to force each O to be classified as a 1. Successful attacks have a red background. Unsuccessful attacks have a green background. In the case that the label is changed to an incorrect label different from the target label (i.e., the classifier outputs something other than O or 1), the background is purple. The first column is the original image. The second column is. adversarial examples targeting our deterministic baseline model. The third column is adversarial examples targeting our dropout model. The remaining columns are adversarial examples targeting our VIB models for different 3..\n3.0 Deterministic Model L* Dropout Model L* 2.0 Targeted L2 Optimization (0->1):L0 111 Targeted L2 Optimization (0->1):L0 Targeted L2 Optimization (0->1):L2 -- Targeted L2 Optimization (0->1):L2 Targeted L2 Optimization (0->1):Loo 1.8 Targeted L2 Optimization (0->1):Lx 2.5 * *1.6 2.0 qnodou 1.4 Jq/*1 IIV 1.2 1.5 1.0 1.0P 0.8 0.6 10-11 10-10 10-9 10-8 10-7 10-6 10-5 104 10-3 10-2 10-11 10-10 10-9 10-8 10-7 10-6 10-5 104 10-3 10-2 (a) (b)\nFigure 4: (a) Relative magnitude of the adversarial perturbation, measured using Lo, L2, and La norms, for the images in Figure|3|as a function of . (We normalize all values by the correspondin, norm of the perturbation against the base model.) As increases, Lo decreases, but both L2 and Lx. increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation. (b) Same as (a), but with the dropout model as the baseline. Dropout is more robust to the adversarial perturbations than the base deterministic model but still performs much worse than the VIB model as increases..\nOrig. Det. Dropout 3 = 0 = 10-10 3 = 10-8 3 = 10-6 =10-4 =10-3=10-2\nDeterministic Model L* 2.0 Dropout Model L* Targeted L2 Optimization (0->1):L0 + Targeted L2 Optimization (0->1):L0 :- Targeted L2 Optimization (0->1):L2 - - Targeted L2 Optimization (0->1):L2 Targeted L2 Optimization (0->1):Lx 1.8 Targeted L2 Optimization (0->1):Loo *1.6 1.4 1.0 0.8 0.6 10-10 10-9 108 107 10-6 105 104 10-3 10-2 10-11 10-10 10-9 10-8 10-7 106 10-5 104 10-3 B (a) (b)\nDeterministic Model. Dropout Model + FGS, epsilon=0.350 + FGS, epsilon=0.350 + FGS, epsilon=0.400 5 + FGS, epsilon=0.400 10 + FGS, epsilon=0.450 + FGS, epsilon=0.450 + FGS, epsilon=0.500 + FGS, epsilon=0.500 8 3 6 2 4 1 2 108 107 10-6 10-5 104 10-3 10-2 101 10-8 10-7 10-6 10-5 10-4 10-3 10-2 10-1 (a) (b)\nFigure 5: Classification accuracy of VIB classifiers, divided by accuracy of baseline classifiers, or FGS-generated adversarial examples as a function of . Higher is better, and the baseline is always at 1.0. For the FGS adversarial examples, when = 0 (not shown), the VIB model's performance is almost identical to when = 10-8. 
(a) FGS accuracy normalized by the base deterministic model performance. The base deterministic model's accuracy on the adversarial examples ranges from about 1% when e = 0.5 to about 5% when e = 0.35. (b) Same as (a), but with the dropout mode. as the baseline. The dropout model is more robust than the base model, but less robust than VIB particularly for stronger adversaries (i.e., larger values of e). The dropout model's accuracy on the adversarial examples ranges from about 5% when e = 0.5 to about 16% when e = 0.35. As in the other results, relative performance is more dramatic as increases, which seems to indicate tha the VIB models are learning to ignore more of the perturbations caused by the FGS method, even though they were not trained on any adversarial examples.\n0.7 0.6 0.5 0.4 0.3 0.2 0.1 Deterministic and Dropout Models (Targeted and Untargeted) Targeted L2 Optimization Untargeted L2 Optimization 0.0 10-11 10-10 10-9 10-8 10-7 10-6 10-5 104 10-3 10-2 101 B\nFigure 6: Classification accuracy (from O to 1) on L2 adversarial examples (of all classes) as a function of . The blue line is for targeted attacks, and the green line is for untargeted attacks (which are easier to resist). In this case, = 10-11 has performance indistinguishable from = 0. The deterministic model and dropout model both have a classification accuracy of O% in both the targeted and untargeted attack scenarios, indicated by the horizontal red dashed line at the bottom of the plot. This is the same accuracy on adversarial examples from this adversary reported in Carlini & Wagner(2016) on a convolutional network trained on MNIST.\n221 22 418 222222 236 236 222 (a) (b) 2 221 221 222|222 222|222 981 981 222|222 222|222 22 22222 21222 222222 222/222 236 27 222 c d\nFigure 7: The results of our ImageNet targeted L2 optimization attack. In all cases we target a. new label of 222 (\"soccer ball''). Figure (a) shows the 30 images from the first 40 images in the. ImageNet validation set that the VIB network classifies correctly. The class label is shown in green. on each image. The predicted label and targeted label are shown in red. Figure (b) shows adversarial. examples of the same images generated by attacking our VIB network with = 0.01. While all. of the attacks change the classification of the image, in 13 out of 30 examples the attack fails to. hit the intended target class (\"soccer ball'). Pink crosses denote cases where the attack failed to. force the model to misclassify the image as a soccer ball. Figure (c) shows the same result but. for our deterministic baseline operating on the whitened precomputed features. The attack always. succceeds. Figure (d) is the same but for the original full Inception ResNet V2 network without. modification. The attack always succceeds. There are slight variations in the set of adversarial. examples shown for each network because we limited the adversarial search to correctly classified images. In the case of the deterministic baseline and original Inception ResNet V2 network, the. perturbations are hardly noticable in the perturbed images, but in many instances, the perturbations. for the VIB network can be percieved.\nFigure 8: Shown are the absolute differences between the original and final perturbed images for. all three networks. The left block shows the perturbations created while targeting the VIB network.. The middle block shows the perturbations needed for the deterministic baseline using precomputed whitened features. 
The right block shows the perturbations created for the unmodified Inception ResNet V2 network. The contrast has been increased by the same amount in all three columns to. emphasize the difference in the magnitude of the perturbations. The VIB network required much. larger perturbations to confuse the classifier, and even then did not achieve the targeted class in 13. of those cases.\nmodel is simply logistic regression. To further speed training, we whitened the 1536 dimensional representation.\nUnder this transformation, the experiment regime is identical to the permutation invariant MNIST task. We therefore used a similar model architecture. Inputs are passed through two fully connected layers, each with 1024 units. Next, data is fed to a stochastic encoding layer; this layer is charac- terized by a spherical Gaussian with 1024 learned means and standard deviations. The output of the stochastic layer is fed to the variational classifier-itself a logistic regression, for simplicity. All other hyperparameters and training choices are identical to those used in MNIST, more details in Appendix A"}, {"section_index": "11", "section_name": "Classification", "section_text": "We see the same favorable VIB classification performance in ImageNet as in MNIST. By varying. 3, the estimated mutual information between encoding and image (I(Z, X)) varies as well. At large values of accuracy suffers, but at intermediate values we obtain improved performance over both a deterministic baseline and a = 0 regime. In all cases our accuracy is somewhat lower than the original 80.4% accuracy. This may be a consequence of inadequate training time or suboptimal. hyperparameters.\nOverall the best accuracy we achieved was using = O.01. Under this setting we saw an accu racy of 80.12%-nearly the same as the state-of-the-art unmodified network- but with substantially smaller information footprint, only I(X, Z) ~ 45 bits. This is a surprisingly small amount of infor mation; = 0 implies over 10,000 bits yet only reaches an accuracy of 78.87%. The deterministic baseline, which was the same network, but without the VIB loss and a 1024 fully connected lin ear layer instead of the stochastic embedding similarly only achieved 78.75% accuracy. We stress that regressions from the achievable 80.4% are likely due to suboptimal hyperparameters settings or inadequate training.\nConsidering a continuum of and a deterministic baseline, the best classification accuracy was achieved with a = 0.01 E (0, 1). In other words, VIB offered accuracy benefit yet using a mere ~ 45 bits of information from each image"}, {"section_index": "12", "section_name": "Adversarial Robustness", "section_text": "We next show that the VIB-trained network improves resistance to adversarial attack.\nFigure|8|shows the absolute pixel differences between the perturbed and unperturbed images for the examples in Figure[7] We see that the VIB network requires much larger perturbations in order to fool the classifier, as quantified in Table2\nMetric Determ IRv2 VIB(0.01) Sucessful target 1.0 1.0 0.567 L2 6.45 14.43 43.27 Lo 0.18 0.44 0.92\nTable 2: Quantitative results showing how the different Inception Resnet V2-based architectures. (described in Section 4.2.5) respond to targeted L2 adversarial examples. Determ is the deterministic architecture, IRv2 is the unmodified Inception Resnet V2 architecture, and VIB(0.01) is the VIB architecture with = O.01. 
Successful target is the fraction of adversarial examples that caused the architecture to classify as the target class (soccer ball). Lower is better. L2 and Loo are the. average L distances between the original images and the adversarial examples. Larger values mean the adversary had to make a larger perturbation to change the class.."}, {"section_index": "13", "section_name": "5 FUTURE DIRECTIONS", "section_text": "There are many possible directions for future work, including: putting the VIB objective at multipl or every layer of a network; testing on real images; using richer parametric marginal approxima tions, rather than assuming r(z) = N(0, I); exploring the connections to differential privacy (se e.g.,[Wang et al.(2016a);Cuff & Yu(2016)); and investigating open universe classification problem (see e.g.,Bendale & Boult|(2015)). In addition, we would like to explore applications to sequence prediction, where X denotes the past of the sequence and Y the future, while Z is the current repre sentation of the network. This form of the information bottleneck is known as predictive informatior (Bialek et al.2001| Palmer et al.||2015).\nDavid Barber Felix Agakov. The IM algorithm: a variational approach to information maximizatior In NIPS, volume 16, 2004.\nShumeet Baluja, Michele Covell, and Rahul Sukthankar. The virtues of peer pressure: A simple method for discovering high-value mistakes. In Intl. Conf. Computer Analysis of Images and Patterns, 2015.\nAbhijit Bendale and Terrance Boult. Towards open world recognition. In CVPR, 2015\n11 The attacks still often cause the VIB model to misclassify the image, but not to the targeted label. This is a form of \"partial' robustness, in that an attacker will have a harder time hitting the target class, but can stil disrupt correct function of the network.\nWe focus on the Carlini targeted L2 attack (see Section4.2.1). We show results for the VIB-trained network and a deterministic baseline (both on top of precomputed features), as well as for the origi- nal pretrained Inception ResNet V2 network itself. The VIB network is more robust to the targeted L2 optimization attack in both magnitude of perturbation and frequency of successful attack\nFigure7|shows some example images which were all misclassified as \"soccer balls\"' by the deter- ministic models; by contrast, with the VIB model, only 17 out of 30 of the attacks succeeded in. being mislabeled as the target label|11 We find that the VIB model can resist about 43.3% of the. attacks, but the deterministic models always fail (i.e., always misclassify into the targeted label)\nMartin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.\nRyan P. Browne and Paul D. McNicholas. Multivariate sharp quadratic bounds via -strong con vexity and the fenchel connection. Electronic Journal of Statistics, 9, 2015.\nMatthew Chalk, Olivier Marre, and Gasper Tkacik. Relevant sparse codes with variational informa tion bottleneck. In NIPS, 2016\nJia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009 IEEE Conference on, pp. 248-255. IEEE, 2009.\nAlhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers from adversarial to random noise. 
In NIPS, 2016.\nXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AI/Statistics, volume 9, pp. 249-256, 2010.\nRuitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvari. Learning with a strong adver sary. CoRR, abs/1511.03034, 2015.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015\nDiederik P Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.\nDavid JC MacKay. Information theory, inference and learning algorithms. Cambridge university press, 2003.\nShakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsi cally motivated reinforcement learning. In NIPS. pp. 2125-2133. 2015\nSeyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. Arxiv, 2016.\nSeyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In CVPR, 2016.\nNicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. Arxiv. 2016.\nIrina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017. URL|https : //openreview. net/pdf? id=Sy2 f zU9g1\nStephanie E Palmer, Olivier Marre, Michael J Berry, and William Bialek. Predictive information ir a sensory population. PNAS, 112(22):6908-6913, 2015.\nSara Sabour, Yanshuai Cao, Fartash Faghri, and David J Fleet. Adversarial manipulation of deep representations. In ICLR, 2016.\nNoam Slonim, Gurinder Singh Atwal, Gasper Tkacik, and William Bialek. Information-based clus tering. PNAS, 102(51):18297-18302, 2005\nWeina Wang, Lei Ying, and Junshan Zhang. On the relation between identifiability, differentia privacy and Mutual-Information privacy. IEEE Trans. Inf. Theory, 62:5018-5029, 2016a.\nWeiran Wang, Honglak Lee, and Karen Livescu. Deep variational canonical correlation analysis arXiv [cs.LG].11 October 2016b. URL https://arxiv.org/abs/1610.03454\nBoris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging SIAM Journal on Control and Optimization, 30(4):838-855, 1992"}, {"section_index": "14", "section_name": "HYPERPARAMETERS AND ARCHITECTURE DETAILS FOR EXPERIMENTS", "section_text": "All of the networks for this paper were trained using TensorFlow (Abadi et al.] 2016). All weights. were initialized using the default TensorFlow Xavier initialization scheme (Glorot & Bengio|2010 using the averaging fan scaling factor on uniform noise. All biases were initialized to zero. The Adam optimizer (Kingma & Ba|2015) was used with initial learning rate of 10-4, (1 = 0.5, 2 = 0.999) and exponential decay, decaying the learning rate by a factor of 0.97 every 2 epochs. The. networks were all trained for 200 epochs total. For the MNIST experiments, a batch size of 100. was used, and the full 60,000 training and validation set was used for training, and the 10,000 test. images for test results. The input images were scaled to have values between -1 and 1 before fed to. the network.\nAll runs maintained an exponential weighted average of the parameters during the training run;. these averaged parameters were used at test time. This is in the style of Polyak averagingPolyak &. Juditsky(1992), with a decay constant of 0.999. Our estimate of mutual informations were measured in bits. 
For the VIB experiments in all sections, no other form of regularization was used.\n= log(1 + exp(x - 5.0)\nFor the 1024 dimensional Imagenet embeddings of Section4.2.5] a sigma bias of 0.57 was used to keep the initial standard deviations near 1 originally, and a batch size of 200 was used\nmaxI(Z,X) - I(Z,i)\nI(Z,X) = dx dz p(x, z) log px H(x)+ dx p(x|z) log p(x|2 dx p(x|z) log q(x|z) dx dz p(x|z) log q(x[z)\nHere we have dropped the entropy in our data H(X) because it is out of our control and we have used the nonnegativity of the Kullbach-Leibler divergence to replace our intractable p(x[z) with a variational decoder q(x|z).\nFor the 256 dimensional gaussian embeddings of Section4.1.1] a linear layer of size 512 was used. to create the 256 mean values and standard deviations for the embedding. The standard deviations were made to be positive by a softplus transformation with a bias of -5.0 to have them initially be. Small.\nFor the 2 dimensional gaussian embeddings of Section4.1.2] a linear layer was used with 2+4 = 6 outputs, the first two of which were used for the means, and the other 4 were reshaped to a 2 2 matrix, the center was transformed according to a softplus with a bias of -5.0, and the off diagonal components were multiplied by 10-2, while the upper triangular element was dropped to form the Cholesky decomposition of the covariance matrix.\nHere the aim is to take our data X and maximize the mutual information contained in some encoding Z, while restricting how much information we allow our representation to contain about the identity of each data element in our sample (i). We will form a bound much like we did in the main text. For the first term, we form a variational decoder g(x[z) and take a bound:\nz) log (21) dx p(x|z) log p(x|z (22) p(x[z) log q(x[z) (23) z p(x|z) log q(x|z) (24)\nTurning our attention to the second term. note that.\ndx p(z|x)p(x|i) dx p(z[x)(x-x) =p(z|x)\nSo that we can bound our second term from above\nWhere we have replaced the intractable marginal p(z) with a variational marginal r(z)\nPutting these two bounds together we have that our unsupervised information bottleneck objective takes the form\nI(Z,X)-I(Z,i) < lz p(z[x) log q(x[z) - `KL[p(Z|xi),r(Z)]\nIt is interesting that while this objective takes the same mathematical form as that of a Variational Autoencoder, the interpretation of the objective is very different. In the VAE, the model starts life as. a generative model with a defined prior p(z) and stochastic decoder p(x[z) as part of the model, and the encoder q(z[x) is created to serve as a variational approximation to the true posterior p(z[x) = p(x|z)p(z)/p(x). In the VIB approach, the model is originally just the stochastic encoder p(z[x), and the decoder q(x[z) is the variational approximation to the true p(x[z) = p(z[x)p(x)/p(z) and r(z) is the variational approximation to the marginal p(z) = I dx p(x)p(z[x). This difference in. interpretation makes natural suggestions for novel directions for improvement..\nThis precise setup, albeit with a different motivation was recently explored in|Higgins et al.(2016) where they demonstrated that by changing the weight of the variational autoencoders regularization term, there were able to achieve latent representations that were more capable when it came ot zero- shot learning and understanding 'objectness'\"'. In that work, they motivated their choice to change the relative weightings of the terms in the objective by appealing to notions in neuroscience. 
Here we demonstrate that appealing to the information bottleneck objective gives a principled motivation and could open the door to better understanding the optimal choice of and more tools for accessing the importance and tradeoff of both terms.\nConsider the special case when the bottleneck Z is a multivariate Normal, i.e., z|x ~ N(x, x where , is a K K positive definite matrix. The parameters x, , can be constructed from a deep neural network, e.g.,\nPx = Y1:K(x chol(x) = diag(log(1 + exp(7K+1:2K))) + subtril(72K+1:K(K+3)/2\nwhere y(x) E RK(K+3)/2 is the network output of input x.\no(z[i) I(Z,i)= dz p(z|i)p(i) log 2 1 p(z|xi) dz p(z|xi) log N p(z) 1 p(z|Xi dz p(z|xi) log N\nAnd this takes the form of a variational autoencoder (Kingma & Welling]2014), except with the second KL divergence term having an arbitrary weight .\nBeyond the connection to existing variational autoencoder techniques, we note that the unsupervised information bottleneck objective suggests new directions to explore, including targetting the exact marginal p(z) in the regularization term, as well as the opportunity to explore tighter bounds on the first I(Z, X) term that may not require explicit variational reconstruction.\nThis setup (which is identical to our experiments) induces a classifier which is bounded by a quadratic function, which is interesting because the theoretical framework|Fawzi et al. (2016) proves that quadratic classifiers have greater capacity for adversarial robustness than linear functions.\nWe now derive an approximate bound using second order Taylor series expansion (TSE). The bounc. can be made proper via Browne & McNicholas(2015). However, using the TSE is sufficient tc sketch the derivation.\nJensen's inequality implies that the negative log-likelihood soft-max is upper bounded by\nThe second order Taylor series expansion (TSE) of lse is given by\nIse(x + 0) ~ lIse(x) + 8T S(x) + 1sT diag(S(x)) - S(x)S(x) S\nTaking the expectation of the TSE at the mean yields\nThe second-moment was calculated by noting\nPutting this altogether, we conclude\nlog E[S(W Z)[x,x] -E[logS(W Z)[x,x =-W x + E[Ise(W Z)[x,x] Wx + E[Ise(Z)|W x,Wx]\nEn(0,w,wt)[Ise(Wx + 0)] ~ lse(Wx) + En(0,w,wt)[6T ]S(Wx)+ +1 EN(0,WE,wT)[6Tdiag(S(Wx))-S(Wx)S(Wx)T = lse(Wx) +1tr(WxWTdiag(S(Wx)) -S(Wx)S(Wx)T lse(Wx) + tr(WxWT diag(S(Wx))) - 1S(Wx)WxWTS(Wx) =lse(Wx) +2VS(Wx)'WxWTS(Wx) -3S(Wx)T WExWT S(Wx\nE[X'BX]=Etr(XX'B)=tr(E[XX'1B)=tr(B)\nE[S(W Z)[x,Ex] Z S(W x) exp -2VS(Wx)'WExWTS(Wx)+JS(Wx)TWxWT S(Wx)\nAs indicated, rather than approximate the Ise via TSE, we can make a sharp, quadratic upper bound via|Browne & McNicholas(2015). However this merely changes the S(W x) scaling in the expo nential; the result is still log-quadratic."}]
HycUbvcge
[{"section_index": "0", "section_name": "DEEP GENERALIZED CANONICAL CORRELA ANALYSIS", "section_text": "Adrian Benton, Huda Khayrallah, Biman Gujral Drew Reisinger, Sheng Zhang, Raman Arora\nadrian', huda*,bgujral1*, reisinger', zsheng2*, arora' *@jhu.edu, '@cogsci.jhu.edu, '@cs.jhu.edu.\nWe present Deep Generalized Canonical Correlation Analysis (DGCCA) - a. method for learning nonlinear transformations of arbitrarily many views of data. such that the resulting transformations are maximally informative of each other.. While methods for nonlinear two-view representation learning (Deep CCA, (An-. drew et al.]2013)) and linear many-view representation learning (Generalized. CCA (Horst|[1961)) exist, DGCCA is the first CCA-style multiview representation learning technique that combines the flexibility of nonlinear (deep) representation. learning with the statistical power of incorporating information from many inde-. pendent sources, or views. We present the DGCCA formulation as well as an. efficient stochastic optimization algorithm for solving it. We learn DGCCA repre-. sentations on two distinct datasets for three downstream tasks: phonetic transcrip. tion from acoustic and articulatory measurements, and recommending hashtags. and friends on a dataset of Twitter users. We find that DGCCA representations. soundly beat existing methods at phonetic transcription and hashtag recommenda. tion, and in general perform no worse than standard linear many-view techniques"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Multiview representation learning refers to settings where one has access to many \"views\" of data at train time. Views often correspond to different modalities or independent information about ex- amples: a scene represented as a series of audio and image frames, a social media user characterized by the messages they post and who they friend, or a speech utterance and the configuration of the speaker's tongue. Multiview techniques learn a representation of data that captures the sources of variation common to all views.\nMultiview representation techniques are attractive for intuitive reasons. A representation that is abl. to explain many views of the data is more likely to capture meaningful variation than a representatior. that is a good fit for only one of the views. They are also attractive for the theoretical reasons. Fo. example,Anandkumar et al.(2014) show that certain classes of latent variable models, such a. Hidden Markov Models, Gaussian Mixture Models, and Latent Dirichlet Allocation models, can b optimally learned with multiview spectral techniques. Representations learned from many view. will generalize better than one, since the learned representations are forced to accurately captur. variation in all views at the same time (Sridharan & Kakade|2008) - each view acts as a regularize. constraining the possible representations that can be learned. These methods are often based or. canonical correlation analysis (CCA), a classical statisical technique proposed byHotelling(1936\nIn spite of encouraging theoretical guarantees, multiview learning techniques cannot freely mode nonlinear relationships between arbitrarily many views. 
Either they are able to model variatioi across many views, but can only learn linear mappings to the shared space (Horst1961), or they simply cannot be applied to data with more than two views using existing techniques based on kerne CCA (Hardoon et al.]2004) and deep CCA (Andrew et al.]2013)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Here we present Deep Generalized Canonical Correlation Analysis (DGCCA). Unlike previous correlation-based multiview techniques, DGCCA learns a shared representation from data with ar bitrarily many views and simultaneously learns nonlinear mappings from each view to this shared space. The only (mild) constraint is that these nonlinear mappings from views to shared space must be differentiable. Our main methodological contribution is the derivation of the gradient update for the Generalized Canonical Correlation Analysis (GCCA) objective (Horst1961). As a practical contribution, we have also released an implementation of DGCCA1\nWe also evaluate DGCCA-learned representations on two distinct datasets and three downstrean tasks: phonetic transcription from aligned speech and articulatory data, and Twitter hashtag anc friend recommendation from six text and network feature views. We find that downstream perfor mance of DGCCA representations is ultimately task-dependent. However, we find clear gains ir performance from DGCCA for tasks previously shown to benefit from representation learning or. more than two views, with up to 4% improvement in heldout accuracy for phonetic transcription.."}, {"section_index": "3", "section_name": "2 PRIOR WORK", "section_text": "u]E12U2 (ui, u2) corr(u] X1, u2 X2) = = argmax argmax U1 ERd1,u2ERd2 u1 ERd1,u2ERd2 11u1u 22U2 ui\nDeep CCA (DCCA) (Andrew et al.l2013) is an extension of CCA that addresses the first limitation by finding maximally linearly correlated non-linear transformations of two vectors. It does this by passing each of the input views through stacked non-linear representations and performing CCA on the outputs.\n(u*, u*, W*, W*) = argmax corr(u] f1(X1), u2 f2(X2) U1,U2\n'See https://bitbucket.org/adrianbenton/dgcca-py3 for implementation of DGCC along with data from the synthetic experiments.\nThe paper is organized as follows. We review prior work in Section |2] In Section [3|we describe DGCCA. Empirical results on a synthetic dataset, and three downstream tasks are presented in Section4 In Section 5] we describe the differences between DGCCA and other non-CCA-based multiview learning work and conclude with future directions in Section|6.\nSome of most successful techniques for multiview representation learning are based on canonical correlation analysis (Wang et al.]2015a b) and its extension to the nonlinear and many view settings, which we describe in this section. For other related multiview learning techniques, see Section|5\nCanonical correlation analysis (CCA) (Hotelling1936) is a statistical method that finds maximally correlated linear projections of two random vectors and is a fundamental multiview learning tech- nique. 
Given two input views, X1 E Rd1 and X2 E Rd2, with covariance matrices, 11 and 22, respectively, and cross-covariance matrix, 12, CCA finds directions that maximize the correlation between them:\n(u,u) = Uj Y12U2 argmax uTY11u1=u 22u2=1\nThis technique has two limitations that have led to significant extensions: First, it is limited tc learning representations that are linear transformations of the data in each view, and second, it can only leverage two input views.\nLet us use f1(X1) and f2(X2) to represent the network outputs. The weights, W1 and W2, of these. networks are trained through standard backpropagation to maximize the CCA objective\nAnother extension of CCA, which addresses the limitation on the number of views, is Generalized CCA (GCCA) (Horst!1961). It corresponds to solving the optimization problem in Equation (2) of finding a shared representation G of J different views, where N is the number of data points. d, is the dimensionality of the jth view, r is the dimensionality of the learned representation, and X, E Rd, x N is the data matrix for the jth view2\n||G-u, x|l minimize U;ERdjXr,GERrX N j=1 GGT =Ir subject to"}, {"section_index": "4", "section_name": "3 DEEP GENERALIZED CANONICAL CORRELATION ANALYSIS (DGCCA)", "section_text": "In this section, we present deep GCCA (DGCCA): a multiview representation learning technique that benefits from the expressive power of deep neural networks and can also leverage statistical strength from more than two views in data, unlike Deep CCA which is limited to only two views More fundamentally, deep CCA and deep GCCA have very different objectives and optimization problems, and it is not immediately clear how to extend deep CCA to more than two views.\nDGCCA learns a nonlinear map for each view in order to maximize the correlation between the learnt representations across views. In training, DGCCA passes the input vectors in each view through multiple layers of nonlinear transformations and backpropagates the gradient of the GCCA. objective with respect to network parameters to tune each view's network, as illustrated in Figure|1. The objective is to train networks that reduce the GCCA reconstruction error among their outputs. At test time, new data can be projected by feeding them through the learned network for each view..\nAW AW GCCA I 1 1 1 1 1 1 1 1 i4w\nAW 4W GCCA I 1 1 1 1 1 1 1 1 X 1 1 / AW 1\nFigure 1: A schematic of DGCCA with deep networks for J views\nSolving GCCA requires finding an eigendecomposition of an N N matrix, which scales quadrat ically with sample size and leads to memory constraints. Unlike CCA and DCCA, which only learn projections or transformations on each of the views, GCCA also learns a view-independent repre- sentation G that best reconstructs all of the view-specific representations simultaneously. The key limitation of GCCA is that it can only learn linear transformations of each view.\nWe now formally define the DGCCA problem. We consider J views in our data, and let X; E Rd, N denote the jth input matrix3The network for the jth view consists of K, layers. Assume,. for simplicity, that each layer in the jth view network has c; units with a final (output) layer of size. Oj. The output of the kth layer for the jth view is h = s(Whk-1), where s : R > R is a. nonlinear activation function and W? E Rc ck-1 is the weight matrix for the kth layer of the jth. view network. 
"}, {"section_index": "4", "section_name": "3 DEEP GENERALIZED CANONICAL CORRELATION ANALYSIS (DGCCA)", "section_text": "In this section, we present deep GCCA (DGCCA): a multiview representation learning technique that benefits from the expressive power of deep neural networks and can also leverage statistical strength from more than two views in data, unlike Deep CCA, which is limited to only two views. More fundamentally, deep CCA and deep GCCA have very different objectives and optimization problems, and it is not immediately clear how to extend deep CCA to more than two views.

DGCCA learns a nonlinear map for each view in order to maximize the correlation between the learnt representations across views. In training, DGCCA passes the input vectors in each view through multiple layers of nonlinear transformations and backpropagates the gradient of the GCCA objective with respect to network parameters to tune each view's network, as illustrated in Figure 1. The objective is to train networks that reduce the GCCA reconstruction error among their outputs. At test time, new data can be projected by feeding them through the learned network for each view.

Figure 1: A schematic of DGCCA with deep networks for J views.

We now formally define the DGCCA problem. We consider $J$ views in our data, and let $X_j \in \mathbb{R}^{d_j \times N}$ denote the $j$th input matrix. The network for the $j$th view consists of $K_j$ layers. Assume, for simplicity, that each layer in the $j$th view network has $c_j$ units with a final (output) layer of size $o_j$. The output of the $k$th layer for the $j$th view is $h_j^k = s(W_j^k h_j^{k-1})$, where $s : \mathbb{R} \to \mathbb{R}$ is a nonlinear activation function and $W_j^k \in \mathbb{R}^{c_k \times c_{k-1}}$ is the weight matrix for the $k$th layer of the $j$th view network. We denote the output of the final layer as $f_j(X_j)$. DGCCA can then be written as the following optimization problem:

$$\operatorname*{minimize}_{U_j \in \mathbb{R}^{o_j \times r},\; G \in \mathbb{R}^{r \times N}} \; \sum_{j=1}^{J} \|G - U_j^\top f_j(X_j)\|_F^2 \quad \text{subject to} \quad GG^\top = I_r \qquad (3)$$

where $G \in \mathbb{R}^{r \times N}$ is the shared representation we are interested in learning.

Optimization: We solve the DGCCA optimization problem using stochastic gradient descent (SGD) with mini-batches. In particular, we estimate the gradient of the DGCCA objective in Problem 3 on a mini-batch of samples that is mapped through the network, and use back-propagation to update the weight matrices. However, note that the DGCCA optimization problem is a constrained optimization problem. It is not immediately clear how to perform projected gradient descent with back-propagation. Instead, we characterize the objective function of the GCCA problem at an optimum, and compute its gradient with respect to the inputs to GCCA, i.e. with respect to the network outputs. These gradients are then back-propagated through the networks to update the weights.

Although the relationship between DGCCA and GCCA is analogous to the relationship between DCCA and CCA, the derivation of the GCCA objective gradient with respect to the network output layers is non-trivial. The main difficulty stems from the fact that there is no natural extension of the correlation objective to more than two random variables. Instead, we consider correlations between every pair of views, stack them in a $J \times J$ matrix, and maximize a certain matrix norm for that matrix. For GCCA, this suggests an optimization problem that maximizes the sum of correlations between a shared representation and each view. Since the objective as well as the constraints of the generalized CCA problem are very different from those of the CCA problem, it is not immediately obvious how to extend Deep CCA to Deep GCCA.

Next, we show a sketch of the gradient derivation; the full derivation is given in Appendix A. It is straightforward to show that the solution to the GCCA problem is given by solving an eigenvalue problem. In particular, define $C_{jj} = f_j(X_j) f_j(X_j)^\top \in \mathbb{R}^{o_j \times o_j}$, the scaled empirical covariance matrix of the $j$th network output, and $P_j = f_j(X_j)^\top C_{jj}^{-1} f_j(X_j)$, the corresponding projection matrix that whitens the data; note that $P_j$ is symmetric and idempotent. We define $M = \sum_{j=1}^{J} P_j$. Then the rows of $G$ are the top $r$ (orthonormal) eigenvectors of $M$, and $U_j = C_{jj}^{-1} f_j(X_j) G^\top$. Thus, at the minimum of the objective, we can rewrite the reconstruction error as follows:

$$\sum_{j=1}^{J} \|G - U_j^\top f_j(X_j)\|_F^2 = \sum_{j=1}^{J} \|G - G f_j(X_j)^\top C_{jj}^{-1} f_j(X_j)\|_F^2 = rJ - \operatorname{Tr}(G M G^\top)$$

Minimizing the GCCA objective (w.r.t. the weights of the neural networks) means maximizing $\operatorname{Tr}(G M G^\top)$, which is the sum of eigenvalues $L = \sum_{i=1}^{r} \lambda_i(M)$. Taking the derivative of $L$ with respect to each output layer $f_j(X_j)$, we have:

$$\frac{\partial L}{\partial f_j(X_j)} = 2 U_j G - 2 U_j U_j^\top f_j(X_j)$$

Thus, the gradient is the difference between the $r$-dimensional auxiliary representation $G$ embedded into the subspace spanned by the columns of $U_j$ (the first term) and the projection of the actual data in $f_j(X_j)$ onto the said subspace (the second term). Intuitively, if the auxiliary representation $G$ is far away from the view-specific representation $U_j^\top f_j(X_j)$, then the network weights should receive a large update. Computing the gradient descent update has time complexity $O(JNrd)$, where $d = \max(d_1, d_2, \ldots, d_J)$ is the largest dimensionality of the input views.

Note that here we used the particular choice of the regularization-free GCCA objective; the gradient above is exact at the GCCA optimum computed on the current network outputs.
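The per-view gradient above is easy to compute once $G$ and the $U_j$'s are in hand. The following NumPy sketch (our illustration, not the reference code) returns, for each view, the ascent direction on $L$; Algorithm 1 in Appendix B descends on the equivalent reconstruction-error form, which is the same direction with opposite sign.

```python
import numpy as np

def gcca_grads(outputs, r, eps=1e-8):
    """outputs: list of (o_j x N) mean-centered network outputs f_j(X_j).
    Returns G and the per-view gradients dL/df_j = 2 U_j G - 2 U_j U_j^T f_j.
    The eps ridge is our addition for numerical stability."""
    N = outputs[0].shape[1]
    M = np.zeros((N, N))
    Cinv_Y = []
    for Y in outputs:
        C = Y @ Y.T + eps * np.eye(Y.shape[0])  # covariance C_jj
        Z = np.linalg.solve(C, Y)               # C_jj^{-1} Y_j
        Cinv_Y.append(Z)
        M += Y.T @ Z                            # accumulate P_j into M
    G = np.linalg.eigh(M)[1][:, -r:].T          # top-r eigenvectors as rows
    grads = []
    for Y, Z in zip(outputs, Cinv_Y):
        U = Z @ G.T                             # U_j = C_jj^{-1} Y_j G^T
        grads.append(2 * U @ G - 2 * U @ (U.T @ Y))
    return G, grads
```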
"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we apply DGCCA to a small synthetic data set to show how it preserves the generative structure of data sampled from a multiview mixture model. The data we use for this experiment are plotted in Figure 2. Points that share the same color across different views are sampled from the same mixture component.

Figure 2: Synthetic data used in the Section 4.1 experiments. Importantly, in each view, there is no linear transformation of the data that separates the two mixture components, in the sense that the generative structure of the data could not be exploited by a linear model.

This point is reinforced by Figure 3(a), which shows the two-dimensional representation G learned by applying (linear) GCCA to the data in Figure 2. The learned representation completely loses the structure of the data. We can contrast the failure of GCCA to preserve structure with the result of applying DGCCA; in this case, the input neural networks had three hidden layers with ten units each, with weights randomly initialized. We plot the representation G learned by DGCCA in Figure 3(b). In this representation, the mixture components are easily separated by a linear classifier; in fact, the structure is largely preserved even after projection onto the first coordinate of G.

Figure 3: The matrix G learned from applying (a) linear GCCA or (b) DGCCA to the data in Figure 2.

It is also illustrative to consider the view-specific representations learned by DGCCA, that is, to consider the outputs of the neural networks that were trained to maximize the GCCA objective. We plot the representations in Figure 4. For each view, we have learned a nonlinear mapping that does remarkably well at making the mixture components linearly separable. Recall that absolutely no direct supervision was given about which mixture component each point was generated from. The only training signals available to the networks were the reconstruction errors between the network outputs and the learned representation G.

Figure 4: Outputs of the trained input neural networks in Section 4.1 applied to the data in Figure 2.
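The paper does not spell out the exact generative process for this synthetic set, so the following NumPy sketch is only a plausible reconstruction of the setup described: two mixture components arranged as concentric rings in each view, so that no linear projection of any single view separates them.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_view(labels, phase):
    """Wrap each component around a circle (radius encodes the component),
    so the classes are linearly inseparable in the raw view.
    This construction is our assumption, not the paper's exact process."""
    theta = rng.uniform(0, 2 * np.pi, size=labels.shape)
    radius = 1.0 + 0.5 * labels + 0.1 * rng.standard_normal(labels.shape)
    return np.stack([radius * np.cos(theta + phase),
                     radius * np.sin(theta + phase)])

labels = rng.integers(0, 2, size=400)            # two mixture components
views = [sample_view(labels, p) for p in (0.0, 1.0, 2.0)]  # J = 3 views
```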
In this section, we discuss experiments on the University of Wisconsin X-ray Microbeam Database (XRMB) (Westbury, 1994). XRMB contains acoustic and articulatory recordings as well as phonemic labels. We present phoneme classification results on the acoustic vectors projected using DCCA, GCCA, and DGCCA. We set acoustic and articulatory data as the two views and phoneme labels as the third view for GCCA and DGCCA. For classification, we run K-nearest neighbor classification (Cover & Hart, 1967) on the projected result.

We use the same train/tune/test split of the data as Arora & Livescu (2014). To limit experiment runtime, we use a subset of speakers for our experiments. We run a set of cross-speaker experiments using the male speaker JW11 for training and two splits of JW24 for tuning and testing. We also perform parameter tuning for the third view with 5-fold cross validation using a single speaker, JW11. For both experiments, we use acoustic and articulatory measurements as the two views in DCCA. Following the pre-processing in Andrew et al. (2013), we get 273- and 112-dimensional feature vectors for the first and second view respectively. Each speaker has ~50,000 frames. For the third view in GCCA and DGCCA, we use 39-dimensional one-hot vectors corresponding to the labels for each frame, following Arora & Livescu (2014).

We use a fixed network size and regularization for the first two views, each containing three hidden layers with sigmoid activation functions. Hidden layers for the acoustic view all had width 1024, and layers in the articulatory view all had width 512 units. L2 penalty constants of 0.0001 and 0.01 were used to train the acoustic and articulatory view networks, respectively. The output layer dimension of each network is set to 30 for DCCA and DGCCA. For the 5-fold speaker-dependent experiments, we performed a grid search over the network sizes in {128, 256, 512, 1024} and covariance matrix regularization in {10^-2, 10^-4, 10^-6, 10^-8} for the third view in each fold. We fix the hyperparameters for these experiments, optimizing the networks with minibatch stochastic gradient descent with a step size of 0.005, batch size of 2000, and no learning rate decay or momentum. The third view neural network had an L2 penalty of 0.0005.

"}, {"section_index": "6", "section_name": "4.2.3 RESULTS", "section_text": "As we show in Table 1, DGCCA improves upon both the linear multiview GCCA and the non-linear 2-view DCCA for both the cross-speaker and speaker-dependent cross-validated tasks.

In addition to accuracy, we examine the reconstruction error, i.e. the objective in Equation 3, obtained from the objective in GCCA and DGCCA.4 This sharp improvement in reconstruction error shows that a non-linear algorithm can better model the data.

4 For 2-view experiments, correlation is a common metric to compare performance. Since that metric is unavailable in a multiview setting, reconstruction error is the analogue.

In this experimental setup, DCCA under-performs the baseline of simply running KNN on the original acoustic view. Prior work considered the output of DCCA stacked onto the central frame of the original acoustic view (39 dimensions). This poor performance, in the absence of the original features, indicates that DCCA was not able to find a more informative projection than the original acoustic features based on correlation with the articulatory view within the first 30 dimensions.

Table 1: KNN phoneme classification performance.

                 CROSS-SPEAKER                SPEAKER-DEPENDENT
METHOD     DEV ACC  TEST ACC  REC ERROR   DEV ACC  TEST ACC  REC ERROR
MFCC        48.89    49.28       --        66.27    66.22       --
DCCA        45.40    46.06       --        65.88    65.81       --
GCCA        49.59    50.18      40.67      69.52    69.78      40.39
DGCCA       53.78    54.22      35.89      72.62    72.33      20.52

To highlight the improvements of DGCCA over GCCA, Figure 5 presents a subset of the confusion matrices on speaker-dependent test data. In particular, we observe large improvements in the classification of D, F, K, SH, V and Y. GCCA outperforms DGCCA for UH and DH. These matrices also highlight the common misclassifications that DGCCA improves upon. For instance, DGCCA rectifies the frequent misclassification of V as P, R and B by GCCA. In addition, the commonly incorrect classification of phonemes such as S and T is corrected by DGCCA, which enables better performance on other voiceless consonants such as F, K and SH. Vowels are classified with almost equal accuracy by both methods.

Figure 5: Phoneme confusion matrices for (a) GCCA and (b) DGCCA on speaker-dependent test data.

Linear multiview techniques are effective at recommending hashtags and friends for Twitter users
(Benton et al.|2016). In this experiment, six views of a Twitter user were constructed by applying. principal component analysis (PCA) to the bag-of-words representations of (1) tweets posted by the ego user, (2) other mentioned users, (3) their friends, and (4) their followers, as well as one-hot. encodings of the local (5) friend and (6) follower networks. We learn and evaluate DGCCA models on identical training, development, and test sets as|Benton et al.|(2016), and evaluate the DGCCA. representations on macro precision at 1000 (P@ 1000) and recall at 1000 (R @ 1000) for the hashtag and friend recommendation tasks described there..\nWe trained 40 different DGCCA model architectures, each with identical architectures across views where the width of the hidden and output layers, c1 and c, for each view are drawn uniformly from 10, 1000], and the auxiliary representation width r is drawn uniformly from 10, c2P] All networks used ReLUs as activation functions, and were optimized with Adam (Kingma & Ba2014) for 200 epochs6Networks were trained on 90% of 102,328 Twitter users, with 10% of users used as a tuning set to estimate heldout reconstruction error for model selection. We report development and test results for the best performing model on the downstream task development set. Learning rate was set to 10-4 with an L1 and L2 regularization constants of 0.01 and 0.001 for all weights7\nTable 2: Dev/test performance at Twitter friend and hashtag recommendation tasks\nFRIEND HASHTAG ALGORITHM P@1000 R @1000 P@1000 R @1000 PCA[TEXT+NET] 0.445/0.439 0.149/0.147 0.011/0.008 0.312/0.290 GCCA[TEXT] 0.244/0.249 0.080/0.081 0.012/0.009 0.351/0.326 GCCA[TEXT+NET] 0.271/0.276 0.088/0.089 0.012/0.010 0.359/0.334 DGCCA[TEXT+NET] 0.297/0.268 0.099/0.090 0.013/0.010 0.385/0.373 WGCCA[TEXT] 0.269/0.279 0.089/0.091 0.012/0.009 0.357/0.325 WGCCA[TEXT+NET] 0.376/0.364 0.123/0.120 0.013/0.009 0.360/0.346\nTable 2|displays the performance of DGCCA compared to PCA[text+net] (PCA applied to con catenation of view feature vectors), linear GCCA applied to the four text views, [text], and all\n5we chose to restrict ourselves to a single hidden layer with non-linear activation and identical architec- tures for each view, so as to avoid a fishing expedition. If DGCCA is appropriate for learning Twitter user representations, then a good architecture should require little exploration.\nviews, Jtext+net], along with a weighted GCCA variant (WGCCA). We learned PCA, GCCA, anc WGCCA representations of width r E {10, 20, 50, 100, 200, 300, 400, 500, 750, 1000}, and repor the best performing representations on the development set..\nThere are several points to note: First is that DGCCA outperforms linear methods at hashtag rec. ommendation by a wide margin in terms of recall. This is exciting because this task was shown to. benefit from incorporating more than just two views from Twitter users. These results suggest that. a nonlinear transformation of the input views can yield additional gains in performance. In addi-. tion, WGCCA models sweep over every possible weighting of views with weights in {0, 0.25, 1.0}. WGCCA has a distinct advantage in that the model is allowed to discriminatively weight views. to maximize downstream performance. The fact that DGCCA is able to outperform WGCCA at. hashtag recommendation is encouraging, since WGCCA has much more freedom to discard unin-. formative views, whereas the DGCCA objective forces networks to minimize reconstruction error equally across all views. 
As noted inBenton et al.(2016), only the friend network view was useful. for learning representations for friend recommendation (corroborated by performance of PCA ap-. plied to friend network view), so it is unsurprising that DGCCA when applied to all views cannot. compete with WGCCA representations learned on the single useful friend network view"}, {"section_index": "7", "section_name": "S OTHER MULTIVIEW LEARNING WORK", "section_text": "There has been strong work outside of CCA-related methods to combine nonlinear representation and learning from multiple views.Kumar et al.[(2011) elegantly outlines two main approaches these methods take to learn a joint representation from many views: either by 1) explicitly maximizing. pairwise similarity/correlation between views or by 2) alternately optimizing a shared, \"consen- sus\"' representation and view-specific transformations to maximize similarity. Models such as the siamese network proposed byMasci et al.(2014), fall in the former camp, minimizing the squared error between embeddings learned from each view, leading to a quadratic increase in the terms of. the loss function size as the number of views increase.Rajendran et al.(2015) extend Correlational. Neural Networks (Chandar et al.]2015) to many views and avoid this quadratic explosion in the. loss function by only computing correlation between each view embedding and the embedding of a \"pivot' view. Although this model may be appropriate for tasks such as multilingual image caption. ing, there are many datasets where there is no clear method of choosing a pivot view. The DGCCA. objective does not suffer from this quadratic increase w.r.t. the number of views, nor does it require. a privileged pivot view, since the shared representation is learned from the per-view representations.\nApproaches that estimate a \"consensus\"' representation, such as the multiview spectral clustering ap. proach in|Kumar et al.[(2011), typically do so by an alternating optimization scheme which depends on a strong initialization to avoid bad local optima. The GCCA objective our work builds on is par. ticularly attractive, since it admits a globally optimal solution for both the view-specific projections U1 ...UJ, and the shared representation G by singular value decomposition of a single matrix: a. sum of the per-view projection matrices. Local optima arise in the DGCCA objective only because. we are also learning nonlinear transformations of the input views. Nonlinear multiview methods. often avoid learning these nonlinear transformations by assuming that a kernel or graph Laplaciar. (e.g. in multiview clustering) is given (Kumar et al.|. 2011 Xiaowen 2014 Sharma et al.|2012).\nWe present DGCCA, a method for non-linear multiview representation learning from an arbitrary number of views. We show that DGCCA clearly outperforms prior work when using labels as a third view (Andrew et al.2013] Arora & Livescu2014] Wang et al.2015c), and can successfully exploit multiple views to learn user representations useful for downstream tasks such as hashtag recommendation for Twitter users. To date, CCA-style multiview learning techniques were either restricted to learning representations from no more than two views, or strictly linear transformations of the input views. This work overcomes these limitations.\n8The performance of WGCCA suffers compared to PCA because whitening the friend network data ignores. 
the fact that the spectrum of the friend network data decays quickly with a long tail: the first few principal components made up a large portion of the variance in the data, but it was also important to compare users based on other components."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Abhishek Kumar, Piyush Rai, and Hal Daume III. Co-regularized multi-view spectral clustering. In Advances in Neural Information Processing Systems (NIPS), 2011.

Abhishek Sharma, Abhishek Kumar, Hal Daume III, and David W. Jacobs. Generalized multiview analysis: A discriminative latent space. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2160-2167. IEEE, 2012.

Karthik Sridharan and Sham M. Kakade. An information theoretic framework for multi-view learning. In Proceedings of COLT, 2008.

Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. Unsupervised learning of acoustic features via deep canonical correlation analysis. In Proc. of the IEEE Int. Conf. Acoustics, Speech and Sig. Proc. (ICASSP'15), 2015a.

Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In Proc. of the 32nd Int. Conf. Machine Learning (ICML 2015), 2015b.

Weiran Wang, Raman Arora, Karen Livescu, and Nathan Srebro. Stochastic optimization for deep CCA via nonlinear orthogonal iterations. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control and Computing (ALLERTON), 2015c.

John R. Westbury. X-ray microbeam speech production database users handbook. Waisman Center on Mental Retardation & Human Development, University of Wisconsin, Madison, WI 53705-2280, 1994.

Dong Xiaowen. Multi-View Signal Processing and Learning on Graphs. PhD thesis, Ecole Polytechnique Federale de Lausanne, 2014.

"}, {"section_index": "9", "section_name": "APPENDIX A DERIVING THE GCCA OBJECTIVE GRADIENT", "section_text": "In order to train the neural networks in DGCCA, we need to compute the gradient of the GCCA objective with respect to any one of its input views. This gradient can then be backpropagated through the input networks to derive updates for the network weights.

Let $N$ be the number of data points and $J$ the number of views. Let $Y_j \in \mathbb{R}^{o_j \times N}$ be the matrix of activations of the neurons in the output layer of the $j$th network. Then, GCCA can be written as the following optimization problem, where $r$ is the dimensionality of the learned auxiliary representation:

$$\operatorname*{minimize}_{U_j,\, G} \; \sum_{j=1}^{J} \|G - U_j^\top Y_j\|_F^2 \quad \text{subject to} \quad GG^\top = I_r$$

It can be shown that the solution is found by solving a certain eigenvalue problem. In particular, define $C_{jj} = Y_j Y_j^\top \in \mathbb{R}^{o_j \times o_j}$ and $P_j = Y_j^\top C_{jj}^{-1} Y_j$ (note that $P_j$ is symmetric and idempotent), and $M = \sum_{j=1}^{J} P_j$ (since each $P_j$ is positive semidefinite, so is $M$). Then the rows of $G$ are the top $r$ (orthonormal) eigenvectors of $M$, and $U_j = C_{jj}^{-1} Y_j G^\top$. Thus, at the minimum of the objective, we can rewrite the reconstruction error as follows:

$$\sum_{j=1}^{J} \|G - U_j^\top Y_j\|_F^2 = \sum_{j=1}^{J} \|G - G Y_j^\top C_{jj}^{-1} Y_j\|_F^2 = \sum_{j=1}^{J} \|G (I_N - P_j)\|_F^2 = \sum_{j=1}^{J} \operatorname{Tr}\bigl[G (I_N - P_j) G^\top\bigr] = \sum_{j=1}^{J} \bigl[\operatorname{Tr}(I_r) - \operatorname{Tr}(G P_j G^\top)\bigr] = Jr - \operatorname{Tr}(G M G^\top)$$

where the fourth equality uses the fact that $I_N - P_j$ is symmetric and idempotent. Let $g_1, \ldots, g_N$ be the orthonormal eigenvectors of $M$ with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_N$. Since the $k$th row of $G$ is $g_k^\top$ for $k \le r$, the matrix product $G g_k = e_k$ for $k \le r$ and $G g_k = 0$ for $k > r$, so

$$G M G^\top = \sum_{k=1}^{N} \lambda_k (G g_k)(G g_k)^\top = \sum_{k=1}^{r} \lambda_k e_k e_k^\top$$

But this is just an $r \times r$ diagonal matrix containing the top $r$ eigenvalues of $M$, so we can write the GCCA objective as

$$Jr - \sum_{i=1}^{r} \lambda_i(M)$$

Thus, minimizing the GCCA objective (w.r.t. the weights of the neural nets) means maximizing the sum of eigenvalues $\sum_{i=1}^{r} \lambda_i(M)$, which we will henceforth denote by $L$.

By the chain rule, and using the fact that the derivative of a sum of the top $r$ (simple) eigenvalues of a symmetric matrix with respect to that matrix is $G^\top G$,

$$\frac{\partial L}{\partial (Y_j)_{ab}} = \sum_{c,d=1}^{N} \frac{\partial L}{\partial M_{cd}} \frac{\partial M_{cd}}{\partial (Y_j)_{ab}} = \sum_{c,d=1}^{N} (G^\top G)_{cd} \frac{\partial M_{cd}}{\partial (Y_j)_{ab}}$$

Since only $P_j$ depends on $Y_j$, we have $\partial M_{cd} / \partial (Y_j)_{ab} = \partial (P_j)_{cd} / \partial (Y_j)_{ab}$, where

$$(P_j)_{cd} = \sum_{k,l=1}^{o_j} (Y_j)_{kc} (C_{jj}^{-1})_{kl} (Y_j)_{ld}$$

Thus, by the product rule,

$$\frac{\partial (P_j)_{cd}}{\partial (Y_j)_{ab}} = \delta_{cb} (C_{jj}^{-1} Y_j)_{ad} + \delta_{db} (C_{jj}^{-1} Y_j)_{ac} + \sum_{k,l} (Y_j)_{kc} (Y_j)_{ld} \frac{\partial (C_{jj}^{-1})_{kl}}{\partial (Y_j)_{ab}}$$

The derivative in the last term can be computed using the chain rule:

$$\frac{\partial (C_{jj}^{-1})_{kl}}{\partial (Y_j)_{ab}} = \sum_{m,n} \frac{\partial (C_{jj}^{-1})_{kl}}{\partial (C_{jj})_{mn}} \frac{\partial (C_{jj})_{mn}}{\partial (Y_j)_{ab}} = -\sum_{m,n} (C_{jj}^{-1})_{km} (C_{jj}^{-1})_{nl} \bigl[\delta_{am} (Y_j)_{nb} + \delta_{an} (Y_j)_{mb}\bigr] = -(C_{jj}^{-1})_{ka} (C_{jj}^{-1} Y_j)_{lb} - (C_{jj}^{-1} Y_j)_{kb} (C_{jj}^{-1})_{al}$$

Substituting this into the expression for $\partial (P_j)_{cd} / \partial (Y_j)_{ab}$:

$$\frac{\partial (P_j)_{cd}}{\partial (Y_j)_{ab}} = \delta_{cb} (C_{jj}^{-1} Y_j)_{ad} + \delta_{db} (C_{jj}^{-1} Y_j)_{ac} - (P_j)_{cb} (C_{jj}^{-1} Y_j)_{ad} - (P_j)_{db} (C_{jj}^{-1} Y_j)_{ac} = (I_N - P_j)_{cb} (C_{jj}^{-1} Y_j)_{ad} + (I_N - P_j)_{db} (C_{jj}^{-1} Y_j)_{ac}$$

Finally, substituting this into our expression for $\partial L / \partial (Y_j)_{ab}$, we find that

$$\frac{\partial L}{\partial (Y_j)_{ab}} = \sum_{c,d} (G^\top G)_{cd} \bigl[(I_N - P_j)_{cb} (C_{jj}^{-1} Y_j)_{ad} + (I_N - P_j)_{db} (C_{jj}^{-1} Y_j)_{ac}\bigr] = 2 \bigl[C_{jj}^{-1} Y_j G^\top G (I_N - P_j)\bigr]_{ab}$$

But recall that $U_j = C_{jj}^{-1} Y_j G^\top$. Using this, the gradient simplifies as follows:

$$\frac{\partial L}{\partial Y_j} = 2 U_j G (I_N - P_j) = 2 U_j G - 2 U_j U_j^\top Y_j$$

Thus, the gradient is the difference between the $r$-dimensional auxiliary representation $G$ embedded into the subspace spanned by the columns of $U_j$ (the first term) and the projection of the network outputs in $Y_j = f_j(X_j)$ onto said subspace (the second term). Intuitively, if the auxiliary representation $G$ is far away from the view-specific representation $U_j^\top f_j(X_j)$, then the network weights should receive a large update.

"}, {"section_index": "10", "section_name": "APPENDIX B DGCCA OPTIMIZATION PSEUDOCODE", "section_text": "Algorithm 1 contains the pseudocode for the DGCCA optimization algorithm. In practice we use stochastic optimization with minibatches, following Wang et al. (2015c).

Algorithm 1: DGCCA optimization
Input: multiview data X_1, X_2, ..., X_J; number of iterations T; learning rate eta
Output: O_1, O_2, ..., O_J
Initialize weights W_1, W_2, ..., W_J
for iteration t = 1, 2, ..., T do
    for each view j = 1, 2, ..., J do
        O_j <- forward pass of X_j with weights W_j
        mean-center O_j
    end for
    U_1, ..., U_J, G <- gcca(O_1, ..., O_J)
    for each view j = 1, 2, ..., J do
        dF/dO_j <- U_j U_j^T O_j - U_j G
        grad W_j <- backprop(dF/dO_j, W_j)
        W_j <- W_j - eta * grad W_j
    end for
end for
for each view j = 1, 2, ..., J do
    O_j <- forward pass of X_j with weights W_j
    mean-center O_j
end for
U_1, ..., U_J, G <- gcca(O_1, ..., O_J)
for each view j = 1, ..., J do
    O_j <- U_j^T O_j
end for

"}, {"section_index": "11", "section_name": "APPENDIX C RECONSTRUCTION ERROR AND DOWNSTREAM PERFORMANCE", "section_text": "CCA methods are typically evaluated intrinsically by the amount of correlation captured, or reconstruction error. These measures are dependent on the width of the shared embeddings and view-specific output layers, and do not necessarily predict downstream performance. Although reconstruction error cannot solely be relied on for model selection for a downstream task, we found that it was useful as a signal to weed out very poor models. Figure 6 shows the reconstruction error against hashtag prediction Recall at 1000 for an initial grid search of DGCCA hyperparameters. Models with tuning reconstruction error greater than 10^3 can safely be ignored, while there is some variability in the performance of models achieving lower error.

Figure 6: Tuning reconstruction error against Recall at 1000 for the hashtag prediction task. Each point corresponds to a different setting of hyperparameters.

Since a DGCCA model with high reconstruction error suggests that the views do not agree with each other at all, it makes sense that the shared embedding will likely be noisy, whereas a relatively low reconstruction error suggests that the transformed views have converged to a stable solution.
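Algorithm 1 translates almost line for line into code. Below is a minimal full-batch NumPy sketch (our illustration; the released bitbucket implementation is the authoritative version). The `forward`, `backprop`, and `step` methods and the `gcca` routine are hypothetical placeholders: any autodiff framework provides the former, and the latter can be the linear GCCA sketch given in Section 2.

```python
import numpy as np

def train_dgcca(views, nets, r, T=200, lr=1e-4):
    """Full-batch sketch of Algorithm 1.
    views: list of d_j x N input matrices; nets: per-view networks with
    hypothetical forward(X), backprop(dF_dO), and step(grads, lr) methods."""
    for _ in range(T):
        outs = [net.forward(X) for X, net in zip(views, nets)]
        outs = [O - O.mean(axis=1, keepdims=True) for O in outs]  # mean-center
        G, Us = gcca(outs, r)                    # linear GCCA on the outputs
        for O, U, net in zip(outs, Us, nets):
            dF_dO = U @ (U.T @ O) - U @ G        # gradient of reconstruction error
            net.step(net.backprop(dF_dO), lr)    # W_j <- W_j - lr * grad
    # final pass: project each view into the shared space
    outs = [net.forward(X) for X, net in zip(views, nets)]
    outs = [O - O.mean(axis=1, keepdims=True) for O in outs]
    G, Us = gcca(outs, r)
    return [U.T @ O for U, O in zip(Us, outs)]
```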
rkYmiD9lg
[{"section_index": "0", "section_name": "EXPONENTIAL MACHINES", "section_text": "Alexander Novikoy1,2\nnovikov@bayesqroup.ru\ni.oseledets@skoltech.ru\n1National Research University Higher School of Economics, Moscow, Russia 2Institute of Numerical Mathematics, Moscow, Russia 3Moscow Institute of Physics and Technology, Moscow, Russia 4Skolkovo Institute of Science and Technology, Moscow, Russia\nModeling interactions between features improves the performance of machine learning solutions in many domains (e.g. recommender systems or sentiment analysis). In this paper, we introduce Exponential Machines (ExM), a predictor that models all interactions of every order. The key idea is to represent an exponentially large tensor of parameters in a factorized format called Tensor Train (TT). The Tensor Train format regularizes the model and lets you control the number of underlying parameters. To train the model, we develop a stochastic Riemannian optimization procedure, which allows us to fit tensors with 2160 entries. We show that the model achieves state-of-the-art performance on synthetic data with high order interactions and that it works on par with high-order factorization machines on a recommender system dataset MovieLens 100K."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "If the dictionary has d words, modeling pairwise interactions requires O(d2) parameters and will probably overfit to the data. Taking into account all interactions (all pairs, triplets, etc. of words requires impractical 2d parameters.\n[n this paper, we show a scalable way to account for all interactions. Our contributions are\nmikhail.trofimov@phystech.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Machine learning problems with categorical data require modeling interactions between the features to solve them. As an example, consider a sentiment analysis problem - detecting whether a review is positive or negative - and the following dataset: 'I liked it', 'I did not like it', 'I'm not sure'. Judging by the presence of the word 'like' or the word 'not' alone, it is hard to understand the tone of the review. But the presence of the pair of words 'not' and 'like' strongly indicates a negative opinion.\nWe propose a predictor that models all 2d interactions of d-dimensional data by representing. the exponentially large tensor of parameters in a compact multilinear format - Tenso Train (TT-format) (Sec.3). Factorizing the parameters into the TT-format leads to a bette. generalization, a linear with respect to d number of underlying parameters and inference. time (Sec.[5). The TT-format lets you control the number of underlying parameters througl. the TT-rank - a generalization of the matrix rank to tensors.. We develop a stochastic Riemannian optimization learning algorithm (Sec.6.1). In ou. experiments, it outperformed the stochastic gradient descent baseline (Sec.8.2) that is ofter. used for models parametrized by a tensor decomposition (see related works, Sec.9). We show that the linear model (e.g. logistic regression) is a special case of our model witl. the TT-rank equal 2 (Sec.8.3). We extend the model to handle interactions between functions of the features, not jus. between the features themselves (Sec.7)"}, {"section_index": "3", "section_name": "2 LINEAR MODEL", "section_text": "In this section, we describe a generalization of a class of machine learning algorithms - the linear. feature vector of f-th object, and y(f) is the corresponding target variable. 
Also fix a loss function $\ell(\widehat{y}, y) : \mathbb{R}^2 \to \mathbb{R}$, which takes as input the predicted value $\widehat{y}$ and the ground truth value $y$. We call a model linear if the prediction of the model depends on the features $\boldsymbol{x}$ only via the dot product between the features $\boldsymbol{x}$ and the $d$-dimensional vector of parameters $\boldsymbol{w}$:

$$\widehat{y}_{\text{linear}}(\boldsymbol{x}) = \langle \boldsymbol{x}, \boldsymbol{w} \rangle + b \qquad (1)$$

One of the approaches to learning the parameters $\boldsymbol{w}$ and $b$ of the model is to minimize the following loss:

$$\min_{\boldsymbol{w},\, b} \; \sum_{f=1}^{N} \ell\bigl(\langle \boldsymbol{x}^{(f)}, \boldsymbol{w} \rangle + b,\; y^{(f)}\bigr) + \frac{\lambda}{2} \|\boldsymbol{w}\|_2^2 \qquad (2)$$

where $\lambda$ is the regularization parameter. For the linear model we can choose any regularization term instead of L2, but later the choice of the regularization term will become important (see Sec. 6.1).

Several machine learning algorithms can be viewed as a special case of the linear model with an appropriate choice of the loss function $\ell(\widehat{y}, y)$: least squares regression (squared loss), Support Vector Machine (hinge loss), and logistic regression (logistic loss)."}, {"section_index": "4", "section_name": "3 OUR MODEL", "section_text": "Before introducing our model equation in the general case, consider a 3-dimensional example. The equation includes one term per each subset of features (each interaction):

$$\widehat{y}(\boldsymbol{x}) = W_{000} + W_{100} x_1 + W_{010} x_2 + W_{001} x_3 + W_{110} x_1 x_2 + W_{101} x_1 x_3 + W_{011} x_2 x_3 + W_{111} x_1 x_2 x_3 \qquad (3)$$

Note that all permutations of features in a term (e.g. $x_1 x_2$ and $x_2 x_1$) correspond to a single term and have exactly one associated weight (e.g. $W_{110}$).

In the general case, we enumerate the subsets of features with a binary vector $(i_1, \ldots, i_d)$, where $i_k = 1$ if the $k$th feature belongs to the subset. The model equation looks as follows:

$$\widehat{y}(\boldsymbol{x}) = \sum_{i_1=0}^{1} \cdots \sum_{i_d=0}^{1} W_{i_1 \ldots i_d} \prod_{k=1}^{d} x_k^{i_k} \qquad (4)$$

Here we assume that $0^0 = 1$. The model is parametrized by a $d$-dimensional tensor $\mathcal{W}$, which consists of $2^d$ elements. Note that there is no need for a separate bias term, since it is already included in the model as the weight tensor element $\mathcal{W}_{0 \ldots 0}$ (see the model equation example (3)).

The model equation (4) is linear with respect to the weight tensor $\mathcal{W}$. To emphasize this fact and simplify the notation, we rewrite the model equation (4) as a tensor dot product $\widehat{y}(\boldsymbol{x}) = \langle \mathcal{X}, \mathcal{W} \rangle$, where the tensor $\mathcal{X}$ is defined as follows:

$$\mathcal{X}_{i_1 \ldots i_d} = \prod_{k=1}^{d} x_k^{i_k} \qquad (5)$$

The key idea of our method is to compactly represent the exponentially large tensor of parameters $\mathcal{W}$ in the Tensor Train format (Oseledets, 2011).

A $d$-dimensional tensor $\mathcal{A}$ is said to be represented in the Tensor Train (TT) format (Oseledets, 2011) if each of its elements can be computed as the following product of $d - 2$ matrices and 2 vectors:

$$\mathcal{A}_{i_1 \ldots i_d} = G_1[i_1]\, G_2[i_2] \cdots G_d[i_d] \qquad (6)$$

where for any $k = 2, \ldots, d-1$ and for any value of $i_k$, $G_k[i_k]$ is an $r \times r$ matrix, $G_1[i_1]$ is a $1 \times r$ vector, and $G_d[i_d]$ is an $r \times 1$ vector (see Fig. 1). We refer to the collection of matrices $G_k$ corresponding to the same dimension $k$ (technically, a 3-dimensional array) as the $k$th TT-core, where $k = 1, \ldots, d$. The size $r$ of the slices $G_k[i_k]$ controls the trade-off between the representational power of the TT-format and the computational efficiency of working with the tensor. We call $r$ the TT-rank of the tensor $\mathcal{A}$.

Figure 1: An illustration of the TT-format for a $3 \times 4 \times 4 \times 3$ tensor $\mathcal{A}$ with TT-rank equal to 3, showing the element $\mathcal{A}_{2423}$ computed as the product $G_1[2]\, G_2[4]\, G_3[2]\, G_4[3]$.

An attractive property of the TT-format is the ability to perform algebraic operations on tensors without materializing them, i.e. by working with the TT-cores instead of the tensors themselves.
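As a concrete illustration of definition (6), here is a small NumPy sketch (ours, not the TT-Toolbox API) that evaluates one element of a tensor from its TT-cores, mirroring the product shown in Figure 1.

```python
import numpy as np

def tt_element(cores, index):
    """cores[k] has shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1;
    returns A[i_1, ..., i_d] = G_1[i_1] @ ... @ G_d[i_d]."""
    res = np.ones((1, 1))
    for core, i in zip(cores, index):
        res = res @ core[:, i, :]   # multiply by the slice G_k[i_k]
    return res[0, 0]

# toy example: random TT-cores of a 3x4x4x3 tensor with TT-rank 3
shapes = [(1, 3, 3), (3, 4, 3), (3, 4, 3), (3, 3, 1)]
cores = [np.random.randn(*s) for s in shapes]
value = tt_element(cores, (2, 3, 1, 2))  # zero-based indices
```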
The TT-format supports computing the norm of a tensor and the dot product between tensors; element-wise sum and element-wise product of two tensors (the result is a tensor in the TT-format with increased TT-rank), and some other operations (Oseledets2011)."}, {"section_index": "5", "section_name": "5 INFERENCE", "section_text": "In this section, we return to the model proposed in Sec.3 and show how to compute the model. equation (4) in linear time. To avoid the exponential complexity, we represent the weight tensor W. and the data tensor ' (5) in the TT-format. The TT-ranks of these tensors determine the efficiency of the scheme. During the learning, we initialize and optimize the tensor W in the TT-format and. explicitly control its TT-rank. The TT-rank of the tensor X always equals 1. Indeed, the following. TT-cores give the exact representation of the tensor\nGk[ik] = xk C k = 1.....d\nNow that we have a TT-representations of tensors W and X, we can compute the model response y(x) = (X', W) in the linear time with respect to the number of features d.\nTheorem 1. The model response y(x) can be computed in O(r-d), where r is the TT-rank of the weight tensor W.\nWe refer the reader to Appendix[A|where we propose an inference algorithm with O(r2 d) complexity and thus prove Theorem1\nwhere the loss is defined as follows\nN L(W)=e(x(),w)yf))+W|W= `W? ..id f=1 i1=0 id=0\nWe consider two approaches to solving problem (7). In a baseline approach, we optimize the objective L(W) with stochastic gradient descent applied to the underlying parameters of the TT-format of the tensor W.\nG1 G2 G3 G4 A2423 = X i1 = 2 i2 = 4 i3 = 2 i4\nG1 G2 G3 G4 A2423 = X X X i1 = 2 i2 = 4 i3 = 2 i4 = 3\nThe TT-rank of the weight tensor W is a hyper-parameter of our method and it controls the efficiency vs. flexibility trade-off. A small TT-rank regularizes the model and yields fast learning and inference but restricts the possible values of the tensor W. A large TT-rank allows any value of the tensor W and effectively leaves us with the full polynomial model without any advantages of the TT-format.\nLearning the parameters of the proposed model corresponds to minimizing the loss under the TT-rank constraint:\nA simple alternative to the baseline is to perform gradient descent with respect to the tensor W, that is subtract the gradient from the current estimate of W on each iteration. The TT-format indeed allows to subtract tensors, but this operation increases the TT-rank on each iteration, making this approach impractical.\nWe now describe how to implement each of the steps outlined above\nTT-rank(PTwM,(Z)) < 2TT-rank(W) = 2r\nN aL al W aw du f=1\nSince the resulting expression is a weighted sum of projections of individual data tensors (f). .We can project them in parallel. Since the TT-rank of each of them equals 1 (see Sec.5, all N projections cost O(dr2(r + N)) in total. The TT-rank of the projected gradient is less or equal to 2r regardless of the dataset size N.\nNote that here we used the particular choice of the regularization term. 
For terms other than L2 (e.g L1), the gradient may have arbitrary large TT-rank..\nSince we aim for big datasets, we use a stochastic version of the Riemannian gradient descent: or each iteration we sample a random mini-batch of objects from the dataset, compute the stochastic gradient for this mini-batch, make a step along the projection of the stochastic gradient, and retract back to the manifold (Alg.1).\nAn iteration of the stochastic Riemannian gradient descent consists of inference O(dr2 M), projection O(dr2(r + M)), and retraction O(dr3), which yields O(dr2 (r + M)) total computational complexity\nTo improve upon the baseline and avoid the TT-rank growth, we exploit the geometry of the set of tensors that satisfy the TT-rank constraint (7) to build a Riemannian optimization procedure (Sec.6.1) We experimentally show the advantage of this approach over the baseline in Sec.8.2\nforms a Riemannian manifold (Holtz et al.[2012). This observation allows us to use Riemannian optimization to solve problem (7). Riemannian gradient descent consists of the following steps which are repeated until convergence (see Fig.2[for an illustration):\n2. Follow along 9 with some step a (this operation increases the TT-rank).. 3. Retract the new point W - aG back to the manifold Mr. that is decrease its TT-rank to r\nubich et al. (2015) proposed an algorithm to project a TT-tensor Z on the tangent space of M, at a point W which consists of two steps: preprocess W in O(dr3) and project Z in O(dr2 TT-rank(Z)2).Lubich et al.(2015) also showed that the TT-rank of the projection is bounded by a constant that is independent of the TT-rank of the tensor Z:\nN aL dl T aw dy f=1"}, {"section_index": "6", "section_name": "6.2 INITIALIZATION", "section_text": "We found that a random initialization for the TT-tensor W sometimes freezes the convergence of optimization method (Sec.8.3j. We propose to initialize the optimization from the solution of the. corresponding linear model (1)\nThe following theorem shows how to initialize the weight tensor W from a linear model\nTheorem 2. For any d-dimensional vector w and a bias term b there exist a tensor W of TT-rank 2 such that for any d-dimensional vector x and the corresponding object-tensor X the dot products x, w) and(X, W) coincide.\nIn the general case, to model interactions between ng functions g1, . .. , gng of the features we redefine the obiect-tensor as follows.\nd k=1\nThe weight tensor W and the object-tensor X' are now consist of (ng + 1)d elements. After this change to the object-tensor X, learning and inference algorithms will stay unchanged compared to the original model (4).\nCategorical features. Our basic model handles categorical features xk E {1, . . ., K} by converting them into one-hot vectors xk,1, ... , xk,K. The downside of this approach is that it wastes the model capacity on modeling non-existing interactions between the one-hot vector elements xk,1, . .. , xk,K which correspond to the same categorical feature. Instead, we propose to use one TT-core per categorical feature and use the model extension technique with the following function\nif xk = ik Or ik = 0 Otherwise.\nFigure 2: An illustration of one step. of the Riemannian gradient descent The step-size a is assumed to be 1. 
for clarity of the figure.\nIn this section, we extend the proposed model to handle polynomials of any functions of the features As an example, consider the logarithms of the features in the 2-dimensional case:\nk=1 if ik = 0, if ik = 1, if ik = Ng\nThis allows us to cut the number of parameters per categorical feature from 2Kr2 to (K + 1)r without losing any representational power."}, {"section_index": "7", "section_name": "8 EXPERIMENTS", "section_text": "We release a Python implementation of the proposed algorithm and the code to reproduce the experiments' For the operations related to the TT-format, we used the TT-Toolbox?"}, {"section_index": "8", "section_name": "8.1 DATASETS", "section_text": "The datasets used in the experiments (see details in Appendix|C"}, {"section_index": "9", "section_name": "8.2 RIEMANNIAN OPTIMIZATION", "section_text": "In this experiment, we compared two approaches to training the model: Riemannian optimiza tion (Sec.6.1) vs. the baseline (Sec.[6). In this and later experiments we tuned the learning rate of both Riemannian and SGD optimizers with respect to the training loss after 100 iterations by the grid search with logarithmic grid.\nOn the Car and HIV datasets we turned off the regularization ( = 0) and used rank r = 4. We reporl that on the Car dataset Riemannian optimization (learning rate a = 40) converges faster and achieves. better final point than the baseline (learning rate a = 0.03) both in terms of the training and tesi losses (Fig.3a, 5h). On the HIV dataset Riemannian optimization (learning rate a = 800) converges to the value 10-4 around 20 times faster than the baseline (learning rate = 0.001, see Fig.3b), but. the model overfitts to the data (Fig.5b)..\nThe results on the synthetic dataset with high-order interactions confirm the superiority of the Riemannian approach over SGD - we failed to train the model at all with SGD (Fig.6)\nOn the MovieLens 1o0K dataset, we have only used SGD-type algorithms, because using the one-hot feature encoding is much slower than using the categorical version (see Sec.7), and we have yet to implement the support for categorical features for the Riemannian optimizer. On the bright side prototyping the categorical version of ExM in TensorFlow allowed us to use a GPU accelerator.."}, {"section_index": "10", "section_name": "8.3 INITIALIZATION", "section_text": "In this experiment, we compared random initialization with the initialization from the solution of the corresponding linear problem (Sec.|6.2). We explored two ways to randomly initialize a TT-tensor 1) filling its TT-cores with independent Gaussian noise; 2) initializing W to represent a linear model with random coefficients (sampled from a standard Gaussian). We report that on the Car dataset type-1 random initialization slowed the convergence compared to initialization from the linear model solution (Fig.3a), while on the HIV dataset the convergence was completely frozen (Fig.3b).\nTwo possible reasons for this effect are: a) the vanishing and exploding gradients problem (Bengio et al.l|1994) that arises when dealing with a product of a large number of factors (160 in the case of the HIV dataset); b) initializing the model in such a way that high-order terms dominate we may force the gradient-based optimization to focus on high-order terms, while it may be more stable to start with low-order terms instead. 
Type-2 initialization (a random linear model) indeed worked on par with the best linear initialization on the Car, HIV, and synthetic datasets (Fig.3p,[6).\nhttps://github.com/Bihaqo/exp-machines https://github.com/oseledets/ttpy\n1. UCI (Lichman, 2013) Car dataset is a classification problem with 1728 objects and 21 binary features (after one-hot encoding). We randomly splitted the data into 1382 training and 346 test objects and binarized the labels for simplicity. 2. UCI HIV dataset is a binary classification problem with 1625 objects and 160 features, which we randomly splitted into 1300 training and 325 test objects. 3. Synthetic data. We generated 100 000 train and 100 000 test objects with 30 features and set the ground truth target variable to a 6-degree polynomial of the features. 4. MovieLens 100K is a recommender system dataset with 943 users and 1682 movies (Harper & Konstan] 2015). We followed Blondel et al.(2016a) in preparing 2703 one-hot features And the problem into binary classification\n101 Cores GD 104 Cores GD Cores SGD 100 101 Cores SGD 100 100 Cores SGD 500 100 Cores SGD 500 Riemann GD 10 Riemann GD 10 Riemann 100 601) 10 Riemann 100 SSO O-0 Riemann 500 10 Riemann 500 10-2 bu! Riemann GD rand init 1 Riemann GD rand init 1. 10-3 10-5 Riemann GD rand init 2. 10-6 104 10-7 10-5 10-8 10-1 100 101 102 10-1 100 101 102 103 time (s) time (s) (a) Binarized Car dataset (b) HIV dataset\n101 Cores GD 102 Cores GD Cores SGD 100 101 Cores SGD 100 100 Cores SGD 500 100 Cores SGD 500 0 o Riemann GD 0 o Riemann GD 101 0 o Riemann 100 102 0 o Riemann 100 SSOJ 0 o Riemann 500 0 0 Riemann 500 10-2 - Riemann GD rand init 1 2 104 Riemann GD rand init 1 10-3 innn V V Riemann GD rand init 2 10-6 10-4 10-7 10-5 108 10-1 100 101 102 10-1 100 101 102 103 time (s) time (s)\nFigure 3: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-4 Exponential Machines. Numbers in the legend stand for the batch size. The methods marked with 'rand init' in the legend (square and triangle markers) were initialized from a random TT-tensor from two different distributions (see Sec. 8.3] all other methods were initialized from the solution of ordinary linear logistic regression. Type-2 random initialization is ommited from the Car dataset for the clarity of the figure\nTraining Inference Method Test AUC time (s) time (s) Log. reg. 0.50 0.4 0.0 RF 0.55 21.4 6.5 Neural Network 0.50 47.2 0.1 SVM RBF 0.50 2262.6 5380 SVM poly. 2 0.50 1152.6 4260 SVM poly. 6 0.56 4090.9 3774 2-nd order FM 0.50 638.2 0.5 6-th order FM 0.57 549 3 6-th order FM 0.86 6039 3 6-th order FM 0.96 38918 3 ExM rank 3 0.79 65 0.2 ExM rank 8 0.85 1831 1.3 ExM rank 16 0.96 48879 3.8\nTable 1: A comparison between models on synthetic data with high-order interactions (Sec.[8.4). We report the inference time on 100000 test objects in the last column"}, {"section_index": "11", "section_name": "8.4 COMPARISON TO OTHER APPROACHES", "section_text": "On the synthetic dataset with high-order interactions we campared Exponential Machines (the proposed method) with scikit-learn implementation (Pedregosa et al.2011) of logistic regression,. random forest, and kernel SVM; FastFM implementation (Bayer2015) of 2-nd order Factorization. Machines; our implementation of high-order Factorization Machines3] and a feed-forward neural. network implemented in TensorFlow (Abadi et al.]2015). We used 6-th order FM with the Adam. 
optimizer (Kingma & Ba 2014) for which we'd chosen the best rank (20) and learning rate (0.003) based on the training loss after the first 50 iterations. We tried several feed-forward neural networks. with ReLU activations and up to 4 fully-connected layers and 128 hidden units. We compared the. models based on the Area Under the Curve (AUC) metric since it is applicable to all methods and is. robust to unbalanced labels (Tbl.1).\nOn the MovieLens 1ooK dataset we used the categorical features representation described in Sec.7. Our model obtained 0.784 test AUC with the TT-rank equal 10 in 273 seconds on a Tesla K40 GPU. (the inference time is 0.3 seconds per 78800 test objects); our implentation of 3-rd order FM obtained 0.782; logistic regression obtained 0.782; and Blondel et al.(2016a) reported 0.786 with 3-rd order FM on the same data.\n0.785 0.784 0.783 AUC 0.782 ftest 0.781 0.780 0.779 0.778 O 5 10 15 20 25 TT-rank\nFigure 4: The influence of the TT-rank on the test AUC for the MovieLens 100K dataset.\nKernel SVM is a flexible non-linear predictor and, in particular, it can model interactions when used with the polynomial kernel (Boser et al.J 1992). As a downside, it scales at least quadratically with the dataset size (Bordes et al.f2005) and overfits on highly sparse data.\nWith this in mind, Rendle|(2010) developed Factorization Machine (FM), a general predictor that. models pairwise interactions. To overcome the problems of polynomial SVM, FM restricts the rank of the weight matrix, which leads to a linear number of parameters and generalizes better on sparse data. FM running time is linear with respect to the number of nonzero elements in the data, which. allows scaling to billions of training entries on sparse problems..\nA number of works used full-batch or stochastic Riemannian optimization for data processing tasks (Meyer et al.]2011) Tan et al.]2014] Xu & Ke2016] Zhang et al.]2016). The last work (Zhang et al. 2016) is especially interesting in the context of our method, since it improves the convergence rate of stochastic Riemannian gradient descent and is directly applicable to our learning procedure.\nIn a concurrent work, Stoudenmire & Schwab (2016) proposed a model that is similar to ours but relies on the trigonometric basis (cos(x), sin(x)) in contrast to polynomials (1, x) used in Exponential Machines (see Sec.7|for an explanation on how to change the basis). They also proposed a different learning procedure inspired by the DMRG algorithm (Schollwock] 2011), which allows to automatically choose the ranks of the model, but is hard to adapt to the stochastic regime. One of the possible ways to combine strengths of the DMRG and Riemannian approaches is to do a full DMRG sweep once in a few epochs of the stochastic Riemannian gradient descent to adjust the ranks.\nOther relevant works include the model that approximates the decision function with a multidimen. sional Fourier series whose coefficients lie in the TT-format (Wahls et al.|2014); and models that are similar to FM but include squares and other powers of the features: Tensor Machines (Yang & Gittens 2015) and Polynomial Networks (Livni et al.]2014). Tensor Machines also enjoy a theoretica. generalization bound. In another relevant work, Blondel et al.(2016b) boosted the efficiency of FM. and Polynomial Networks by casting their training as a low-rank tensor estimation problem, thu. making it multi-convex and allowing for efficient use of Alternative Least Squares types of algorithms. 
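For reference, the second-order FM prediction rule described in this paragraph can be sketched in a few lines using Rendle's O(dk) reformulation of the pairwise term (a generic illustration, not the FastFM API mentioned earlier):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order Factorization Machine (Rendle, 2010):
    y = w0 + <w, x> + sum_{i<j} <V[i], V[j]> x_i x_j,
    with the pairwise term computed in O(d k) via
    0.5 * (||V^T x||^2 - sum_i ||V[i]||^2 x_i^2)."""
    linear = w0 + w @ x
    s = V.T @ x                                        # (k,) vector
    pair = 0.5 * (s @ s - ((V ** 2).T @ (x ** 2)).sum())
    return linear + pair
```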
Note that Exponential Machines are inherently multi-convex.."}, {"section_index": "12", "section_name": "10 DISCUSSION", "section_text": "We presented a predictor that models all interactions of every order. To regularize the model and to make the learning and inference feasible, we represented the exponentially large tensor of parameters in the Tensor Train format. To train the model, we used Riemannian optimization in the stochastic regime and report that it outperforms a popular baseline based on the stochastic gradient descent However, the Riemannian learning algorithm does not support sparse data, so for dataset with hundreds of thousands of features we are forced to fall back on the baseline learning method. We found that training process is sensitive to initialization and proposed an initialization strategy based. on the solution of the corresponding linear problem. The solutions developed in this paper for the. stochastic Riemannian optimization may suit other machine learning models parametrized by tensors in the TT-format.\nThe TT-rank is one of the main hyperparameters of the proposed model. Two possible strategies can be used to choose it: grid-search or DMRG-like algorithms (see Sec.9). In our experiments we opted for the former and observed that the model is fairly robust to the choice of the TT-rank (see Fig.) but a too small TT-rank can hurt the accuracy (see Tbl.[1).\nFor high-order interactions FM uses CP-format (Caroll & Chang1970] Harshman]1970) to represent the tensor of parameters. The choice of the tensor factorization is the main difference between the high-order FM and Exponential Machines. The TT-format comes with two advantages over the CP-format: first, the TT-format allows for Riemannian optimization; second, the problem of finding the best TT-rank r approximation to a given tensor always has a solution and can be solved in polynomial time. We found Riemannian optimization superior to the SGD baseline (Sec.[6) that was used in several other models parametrized by a tensor factorization (Rendlel 2010f Lebedev et al. 2014][Novikov et al.] 2015). Note that CP-format also allows for Riemannian optimization, but only for 2-order tensors (and thereafter 2-order FM)."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http: //tensorf1ow. org/ Software available from tensorflow.org.\nI. Bayer. Fastfm: a library for factorization machines. arXiv preprint arXiv:1505.00641, 2015\nM. Blondel, A. Fujino, N. Ueda, and M. Ishihata. Higher-order factorization machines. 2016a\nM. Lichman. UCI machine learning repository, 2013\nF. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten-. hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. 
Journal of Machine Learning Research, 12:2825-2830, 2011.

R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems 27 (NIPS), 2014.

C. Lubich, I. V. Oseledets, and B. Vandereycken. Time integration of tensor trains. SIAM Journal on Numerical Analysis, pp. 917-941, 2015.

G. Meyer, S. Bonnabel, and R. Sepulchre. Regression on fixed-rank positive semidefinite matrices: a Riemannian approach. The Journal of Machine Learning Research, pp. 593-625, 2011.

A. Novikov, D. Podoprikhin, A. Osokin, and D. Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems 28 (NIPS), 2015.

I. V. Oseledets. Tensor-Train decomposition. SIAM J. Scientific Computing, 33(5):2295-2317, 2011.

M. Tan, I. W. Tsang, L. Wang, B. Vandereycken, and S. J. Pan. Riemannian pursuit for big matrix recovery. 2014.

H. Zhang, S. J. Reddi, and S. Sra. Fast stochastic optimization on Riemannian manifolds. arXiv preprint arXiv:1605.07147, 2016.

"}, {"section_index": "14", "section_name": "PROOF OF THEOREM 1", "section_text": "Theorem 1 states that the inference complexity of the proposed algorithm is $O(r^2 d)$, where $r$ is the TT-rank of the weight tensor $\mathcal{W}$. In this section, we propose an algorithm that achieves the stated complexity and thus prove the theorem.

Proof. Let us rewrite the definition of the model response (4) assuming that the weight tensor $\mathcal{W}$ is represented in the TT-format (6):

$$\widehat{y}(\boldsymbol{x}) = \sum_{i_1, \ldots, i_d} \mathcal{W}_{i_1 \ldots i_d} \prod_{k=1}^{d} x_k^{i_k} = \sum_{i_1, \ldots, i_d} G_1[i_1] \cdots G_d[i_d] \prod_{k=1}^{d} x_k^{i_k}$$

Let us group the factors that depend on the same index $i_k$ and define

$$A_k = \sum_{i_k=0}^{1} x_k^{i_k} G_k[i_k] = G_k[0] + x_k G_k[1]$$

Then the model response factorizes into a product of these matrices:

$$\widehat{y}(\boldsymbol{x}) = \sum_{i_1, \ldots, i_d} x_1^{i_1} G_1[i_1] \cdots x_d^{i_d} G_d[i_d] = \Bigl(\sum_{i_1=0}^{1} x_1^{i_1} G_1[i_1]\Bigr) \cdots \Bigl(\sum_{i_d=0}^{1} x_d^{i_d} G_d[i_d]\Bigr) = A_1 A_2 \cdots A_d$$

where $A_1$ is a $1 \times r$ vector, $A_d$ is an $r \times 1$ vector, and the remaining $A_k$ are $r \times r$ matrices. The final value $\widehat{y}(\boldsymbol{x})$ can be computed from the matrices $A_k$ via $d - 1$ matrix-by-vector multiplications and 1 vector-by-vector multiplication, which yields $O(r^2 d)$ complexity.

Note that the proof is constructive and corresponds to an implementation of the inference algorithm.

Figure 5: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-4 Exponential Machines. Numbers in the legend stand for the batch size. The methods marked with 'rand init' in the legend (square and triangle markers) were initialized from a random TT-tensor from two different distributions; all other methods were initialized from the solution of ordinary linear logistic regression. Panels: (a) binarized Car dataset, (b) HIV dataset. See details in Sec. 8.2 and 8.3.
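Since the proof is constructive, it is worth spelling out as code. The following NumPy sketch (our illustration, not the released implementation) computes the model response in O(r^2 d) exactly as above.

```python
import numpy as np

def exm_predict(cores, x):
    """cores[k]: (r_{k-1}, 2, r_k) TT-cores of W with r_0 = r_d = 1;
    x: d-dimensional feature vector.
    Computes y(x) = A_1 A_2 ... A_d with A_k = G_k[0] + x_k G_k[1],
    keeping only a running 1 x r row vector."""
    res = np.ones((1, 1))
    for core, xk in zip(cores, x):
        A = core[:, 0, :] + xk * core[:, 1, :]
        res = res @ A            # one matrix-by-vector multiplication
    return res[0, 0]
```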
For any d-dimensional vector w and a bias term b there exist a tensor W of TT-rank ' such that for any d-dimensional vector x and the corresponding object-tensor X the dot product (x, w) and(X, W) coincide.\nTo proof the theorem, in the rest of this section we show that the tensor W from Theorem |2 representable in the TT-format with the following TT-cores\nG1[0]= 1 G1[1]=[ 0 W1 b Wd Ga[0] = Ga[1] = 1 0\nG1[0] = [ 1 G1[1] = 0 W1 b Wd Ga[0] = Ga[1] = 1 0 V2<k<d-1 1 0 0 Wk Gk[0] = Gk[1] = 0 1 0 0\nand thus the TT-rank of the tensor W equals 2\n0 1, if q=1i 1 lg = 0 0 0 1 if q=1iq2, G1[i1]... Gp[ip] =- 0 Wk if q=1iq=1, and ik = 1..\nProof. We prove the lemma by induction. Indeed, for p = 1 the statement of the lemma becomes\nif i1 = 0, G1[i1] W1 ifi1= 1,\n[f in = 1, then there are 3 options.\n101 Cores GD Cores GD Cores SgD 100 Cores SgD 100 10 Cores SGD 500 Cores SGD 500 100 (lol iol) ssos 0 Riemann GD o Riemann GD 0 Riemann 100 o Riemann 100 10 0 0 Riemann 500 100 0 Riemann 500 testsss - Riemann GD rand init 1 Riemann GD rand init 1 test 10-2 V Riemann GD rand init 2 10-3 10-1 10-4 10-1 100 101 102 10-1 100 101 102 103 time (s) time (s) (a) Binarized Car dataset (b) HIV dataset\nb, if 0, if 1...ig= G1[i1]...Gd-1[id-1]Ga[id] = Wk, if and ik = 1.\nThe elements of the obtained tensor W that correspond to interactions of order > 2 equal to zero; th weight that corresponds to xk equals to wk; and the bias term Wo...o = b..\nThe TT-rank of the obtained tensor e qual 2 since its TT-cores are of size 2 2\n0.70 Riemann SGD 2000, LR 0.05 0.90 Riemann SGD 2000, LR 0.05 0.68 Riemann SGD 1000, LR 0.05 0.85 Riemann SGD 1000, LR 0.05 0.66 Riemann SGD 500, LR 0.05 0.80 Riemann SGD 500, LR 0.05 Riemann SGD 1000, LR 0.02 0.75 Riemann SGD 1000, LR 0.02 Riemann SGD 1000, LR 0.1 0.70 Riemann SGD 1000, LR 0.1 0.62 SGD 128, LR 0.005 SGD 128, LR 0.005 Riemann SGD 1000, LR 0.05, rand init 2. 0.65 Riemann SGD 1000, LR 0.05, rand init 2 0.60 0.58 0.55 0.56 0.50 0.45 101 102 103 101 102 103 time (s) time (s) (a) Training set. (b) Test set\nFigure 6: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-3 Exponential Machines on the synthetic dataset with high order interactions. The first number in each legend enrty stands for the batch size The method marked with 'rand init' in the legend (triangle markers) was initialized from a random linear model, all other methods were initialized from the solution of ordinary linear logistic regression See details in Sec.8.2and 8.3\nG1[i1]...Gp[ip] = 0 Wk 1Gp[1] = 0 0 1\n1. UCI (Lichman, 2013) Car dataset is a classification problem with 1728 objects and 21 binary features (after one-hot encoding). We randomly splitted the data into 1382 training and 346 test objects. For simplicity, we binarized the labels: we picked the first class (unacc') and made a one-versus-rest binary classification problem from the original Car dataset. 2. UCI (Lichman,2013) HIV dataset is a binary classification problem with 1625 objects and 160 features, which we randomly splitted into 1300 training and 325 test objects.. 3. Synthetic data. We generated 100 000 train and 100 000 test objects with 30 features.. Each entry of the data matrix X was independently sampled from {-1, +1} with equal probabilities 0.5. We also uniformly sampled 20 subsets of features (interactions) of order 6: ..., j20 ~ U{1, ..., 30}. We set the ground truth target variable to a. of the interactions from the uniform distribution: E1, ... 
, ε20 ∼ U(−1, 1).
4. MovieLens 100K. MovieLens 100K is a recommender system dataset with 943 users and 1682 movies (Harper & Konstan, 2015). We followed Blondel et al. (2016a) in preparing the features and in turning the problem into binary classification. For users, we converted age (rounded to decades), living area (the first digit of the zipcode), gender and occupation into a binary indicator vector using one-hot encoding. For movies, we used the release year (rounded to decades) and genres, also one-hot encoded. This process yielded 49 + 29 = 78 additional one-hot features for each user-movie pair (943 + 1682 + 78 features in total). Original ratings were binarized using 5 as a threshold. This results in 21200 positive samples, half of which were used for training (with an equal amount of sampled negative examples) and the rest were used for testing."}]
Sy6iJDqlx
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "One of the goals of Artificial Intelligence (AI) is to build autonomous agents that can learn an. adapt to new environments. Reinforcement Learning (RL) is a key technique for achieving sucl. adaptability. The goal of RL algorithms is to learn an optimal policy for choosing actions tha. maximize some notion of long term performance. Transferring knowledge gained from tasks solve. earlier to solve a new target task can help, either in terms of speeding up the learning process o. in terms of achieving a better solution, among other performance measures. When applied to RI. transfer could be accomplished in many ways (seeTaylor & Stone(2009]2011) for a very go0c survey of the field). One could use the value function from the source task as an initial estimate i1 the target task to cut down exploration [Sorg & Singh (2009)]. Alternatively one could use policie. from the source task(s) in the target task. This can take one of two forms - (i) the derived policie. can be used as initial exploratory trajectories [Atkeson & Schaal|(1997); Niekum et al.(2013)] ii the target task and (ii) the derived policy could be used to define macro-actions which may then be. used by the agent in solving the target task [Mannor et al.(2004);Brunskill & Li (2014)]."}, {"section_index": "1", "section_name": "ATTEND, ADAPT AND TRANSFER: ATTENTIVE DEEP ARCHITECTURE FOR ADAPTIVE TRANSFER FROM MULTIPLE SOURCES IN THE SAME DOMAIN", "section_text": "Arayind S. Lakshminarayanan\nAravind shminarayanan Indian Institute of Technology Madras aravindsrinivas@gmail.com\nprasanna.p@cs.mcgill.ca"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The key contribution in the architecture is a deep attention network, that decides which solutions tc attend to, for a given input state. The network learns solutions as a function of current state thereby aiding the agent in adopting different solutions for different parts of the state space in the target task\nTo this end, we propose A2T: Attend, Adapt and Transfer, an Attentive Deep Architecture for Adap tive Transfer, that avoids negative transfer while performing selective transfer from multiple source tasks in the same domain. In addition to the tennis example, A2T is a fairly generic framework tha can be used to selectively transfer different skills available from different experts as appropriate tc the situation. For instance, a household robot can appropriately use skills from different expert for different household chores. This would require the skill to transfer manipulation skills acros objects, tasks and robotic actuators. With a well developed attention mechanism, the most appropri ate and helpful combination of object-skill-controller can be identified for aiding the learning on a related new task. Further, A2T is generic enough to effect transfer of either action policies or action value functions, as the case may be. We also adapt different algorithms in reinforcement learning as appropriate for the different settings and empirically demonstrate that the A2T is effective fo transfer learning for each setting."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "As mentioned earlier, transfer learning approaches could deal with transferring policies or valu functions. For example, Banerjee & Stone[(2007) describe a method for transferring value function. by constructing a Game tree. 
Similarly, Sorg & Singh (2009) use the value function from a source task as the initial estimate of the value function in the target task..\nAnother method to achieve transfer is to reuse policies derived in the source task(s) in the target. task. Probabilistic Policy Reuse as discussed in Fernandez & Veloso(2006) maintains a library of. policies and selects a policy based on a similarity metric, or a random policy, or a max-policy from. the knowledge obtained. This is different from the proposed approach in that the proposed approach\nWhile transfer in RL has been much explored, there are two crucial issues that have not been ad- equately addressed in the literature. The first is negative transfer, which occurs when the transfer. esults in a performance that is worse when compared to learning from scratch in the target task This severely limits the applicability of many transfer techniques only to cases for which some mea-. sure of relatedness between source and target tasks can be guaranteed beforehand. This brings us. to the second problem with transfer, which is the issue of identifying an appropriate source task. from which to transfer. In some scenarios, different source tasks might be relevant and useful for. different parts of the state space of the target task. As a real world analogy, consider multiple players. experts) who are good at different aspects of a game (say, tennis). For example, Player 1 is good at. laying backhand shots while Player 2 is good at playing forehand shots. Consider the case of a new. olayer (agent) who wants to learn tennis by selectively learning from these two experts. We handle. such a situation in our architecture by allowing the agent to learn how to pick and use solutions from. multiple and different source tasks while solving a target task, selectively applicable for different. parts of the state space. We call this selective transfer. Our agent can transfer knowledge from. Player 1 when required to play backhand shots and Player 2 for playing forehand shots. Further,. et us consider consider the situation that both Player 1 and Player 2 are bad at playing drop shots.. Apart from the source tasks, we maintain a base network that learns from scratch on the target task.. The agent can pick and use the solution of the base network when solving the target task at the parts. f the state space where transferring from the source tasks is negative. Such a situation could arise. when the source task solutions are irrelevant for solving the target task over a specific portion of the. state space, or when the transferring from the source tasks is negative over a specific portion of the. state space (for example, transferring the bad drop shot abilities of Players 1 and 2). This situation. also entails the first problem of avoiding negative transfer. Our framework allows an agent to avoid. ransferring from both Players 1 and 2 while learning to play drop shots, and rather acquire the drop. shot skill by learning to use the base network. The architecture is trained such that the base network. uses not just the experience obtained through the usage of its solutions in the target task, but the. overall experience acquired using the combined knowledge of the source tasks and itself. This en-. ables the base network solutions to get closer to the behavior of the overall architecture (which uses. the source task solutions as well). This makes it easier for the base network to assist the architecture. 
The ideas of negative and selective transfer have been discussed earlier in the literature. For example, Lazaric & Restelli (2011) address the issue of negative transfer in transferring samples for a related task in a multi-task setting. Konidaris et al. (2012) discuss the idea of exploiting shared common features across related tasks. They learn a shaping function that can be used in later tasks.

The two recent works that are very relevant to the proposed architecture are discussed in Parisotto et al. (2015) and Rusu et al. (2016). Parisotto et al. (2015) explore transfer learning in RL across Atari games by trying to learn a multi-task network over the source tasks available and directly fine-tuning the learned multi-task network on the target task. However, fine-tuning as a transfer paradigm cannot address the issue of negative transfer, which they do observe in many of their experiments. Rusu et al. (2016) try to address the negative transfer issue by proposing a sequential learning mechanism where the filters of the network being learned for an ongoing task depend, through lateral connections, on the lower-level filters of the networks already learned for previous tasks. The idea is to ensure that dependencies that characterize similarity across tasks can be learned through these lateral connections. Even though they do observe better transfer results than direct fine-tuning, they are still not able to avoid negative transfer in some of their experiments."}, {"section_index": "4", "section_name": "PROPOSED ARCHITECTURE", "section_text": "Let there be N source tasks and let K1, K2, . . . , KN be the solutions of these source tasks 1, . . . , N, respectively.
Let KT be the solution that we learn in the target task T. Source tasks refer to tasks that we have already learnt to perform and target task refers to the task that we are interested in learning now. These solutions could be, for example, policies or state-action values. Here the source tasks should be in the same domain as the target task, having the same state and action spaces. We propose a setting where KT is learned as a function of K1, . . . , KN, KB, where KB is the solution of a base network which starts learning from scratch while acting on the target task. In this work, we use a convex combination of the solutions to obtain KT:

$$K_T(s) = \sum_{i=1}^{N} w_{i,s} K_i(s) + w_{N+1,s} K_B(s) \qquad (1)$$

$$\sum_{i=1}^{N+1} w_{i,s} = 1, \qquad w_{i,s} \in [0, 1] \qquad (2)$$

w_{i,s} is the weight given to the ith solution at state s.

The agent uses KT to act in the target task. Figure 1a shows the proposed architecture. While the source task solutions K1, . . . , KN remain fixed, the base network solutions are learnt and hence KB can change over time. There is a central network which learns the weights (w_{i,s}, i ∈ 1, 2, . . . , N+1), given the input state s. We refer to this network as the attention network. The [0, 1] weights determine the attention each solution gets, allowing the agent to selectively accept or reject the different solutions, depending on the input state. We adopt a soft-attention mechanism whereby more than one weight can be non-zero [Bahdanau et al. (2014)], as opposed to a hard-attention mechanism [Mnih et al. (2014)] where we are forced to have only one non-zero weight:

$$w_{i,s} = \frac{\exp(e_{i,s})}{\sum_{j=1}^{N+1} \exp(e_{j,s})}, \qquad i \in \{1, 2, \ldots, N+1\} \qquad (3)$$

$$(e_{1,s}, e_{2,s}, \ldots, e_{N+1,s}) = f(s; \theta_a) \qquad (4)$$

Here, f(s; θa) is a deep neural network (the attention network), which could consist of convolution layers and fully connected layers depending on the representation of the input. It is parametrised by θa and takes as input a state s and outputs a vector of length N + 1, which gives the attention scores for the N + 1 solutions at state s. Eq. (3) normalises this score to get the weights that follow Eq. (2).

Figure 1: (a) A2T architecture. The dotted arrows represent the path of backpropagation. (b) Actor-Critic using A2T.

Depending on the feedback obtained from the environment upon following KT, the attention network's parameters θa are updated to improve performance.

If the ith source task solution is useful at state s, then w_{i,s} is set to a high value by the attention network. Working at the granularity of states allows the attention network to attend to different source tasks for different parts of the state space of the target task, thus giving it the ability to perform selective transfer. For parts of the state space in the target task where the source task solutions cause negative transfer or are not relevant, the attention network learns to give a high weight to the base network solution (which can be learnt and improved), thus avoiding negative transfer.

Even though the agent follows KT, we update the parameters of the base network that produces KB as if the action taken by the agent was based only on KB. Due to this special way of updating KB, apart from the experience obtained through the unique and individual contribution of KB to KT in parts of the state space where the source task solutions are not relevant, KB also uses the valuable experience obtained by using KT, which uses the solutions of the source tasks as well.

This also means that, if there is a source task whose solution Ki is useful for the target task in some parts of its state space, then KB tries to replicate Ki in those parts of the state space. In practice, the source task solutions, though useful, might need to be modified to suit the target task perfectly. The base network takes care of the modifications required to make the useful source task solutions perfect for the target task. The special way of training the base network assists the architecture in achieving this faster. Note that the agent could follow/use Ki through KT even when KB does not attain its replication in the corresponding parts of the state space. This allows for good performance of the agent in the earlier stages of training itself, when a useful source task is available and identified.

As mentioned earlier, the source task solutions K1, . . . , KN remain fixed. Updating these source tasks' parameters would cause a significant amount of unlearning in the source task solutions and result in a weaker transfer, which we observed empirically. This also enables the use of source task solutions as long as we have their outputs alone, irrespective of how and where they come from.

Since the attention is soft, our model has the flexibility to combine multiple solutions. The use of deep neural networks allows the model to work even for large, complex RL problems. The deep attention network allows the agent to learn complex selection functions, without worrying about representation issues a priori. To summarise, for a given state, A2T learns to attend to specific solutions and adapts this attention over different states, hence attaining useful transfer. A2T is general and can be used for transfer of solutions such as policies and values.
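As a concrete illustration, here is a small NumPy sketch (ours, not the authors' code) of how Eqs. (1)-(4) combine the fixed source solutions with the base solution via soft attention; `attention_scores` stands in for the deep attention network f(s; θa):

```python
import numpy as np

def a2t_combine(state, source_solutions, base_solution, attention_scores):
    """Soft-attention combination of N source solutions and a base solution."""
    # K_i(s) for the N source tasks plus K_B(s) for the base network.
    solutions = [k(state) for k in source_solutions] + [base_solution(state)]
    e = attention_scores(state)            # (N+1,) attention scores, Eq. (4)
    w = np.exp(e - e.max())                # numerically stable softmax, Eq. (3)
    w = w / w.sum()                        # weights are in [0,1] and sum to 1, Eq. (2)
    k_target = sum(w_i * k_i for w_i, k_i in zip(w, solutions))  # Eq. (1)
    return k_target, w
```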
"}, {"section_index": "5", "section_name": "3.1 POLICY TRANSFER", "section_text": "The solutions that we transfer here are the source task policies, taking advantage of which we learn a policy for the target task. Thus, we have K1, . . . , KN, KB, KT → π1, . . . , πN, πB, πT. Here π represents a stochastic policy, a probability distribution over all the actions. The agent acts in the target task by sampling actions from the probability distribution πT. The target task policy πT is obtained as described in Eq. (1) and Eq. (2). The attention network that produces the weights for the different solutions is trained by the feedback obtained after taking an action following πT. The base network that produces πB is trained as if the sampled action came from πB (though it originally came from πT), the implications of which were discussed in the previous section. When the attention network's weight for the policy πB is high, the mixture policy πT is dominated by πB, and the base network learning is nearly on-policy. In the other cases, πB undergoes off-policy learning. But if we look closely, even in the latter case, since πB moves towards πT, it tries to be nearly on-policy all the time. Empirically, we observe that πB converges. This architecture for policy transfer can be used alongside any algorithm that has an explicit representation of the policy. Here we describe two instantiations of A2T for policy transfer, one for direct policy search using the REINFORCE algorithm and another in the Actor-Critic setup."}, {"section_index": "6", "section_name": "3.1.1 POLICY TRANSFER IN REINFORCE ALGORITHMS USING A2T:", "section_text": "REINFORCE algorithms [Williams (1992)] can be used for direct policy search by making weight adjustments in a direction that lies along the gradient of the expected reinforcement. The full architecture is the same as the one shown in Fig. 1a, with K → π. We do direct policy search, and the parameters are updated using REINFORCE. Let the attention network be parametrized by θa and the base network which outputs πB be parametrized by θb. The updates are given by:

$$\theta_a \leftarrow \theta_a + \alpha_{\theta_a}(r - b) \sum_{t=1}^{M} \frac{\partial \log \pi_T(s_t, a_t)}{\partial \theta_a}$$

$$\theta_b \leftarrow \theta_b + \alpha_{\theta_b}(r - b) \sum_{t=1}^{M} \frac{\partial \log \pi_B(s_t, a_t)}{\partial \theta_b}$$

where α_{θa}, α_{θb} are non-negative factors, r is the return obtained in the episode, b is some baseline and M is the length of the episode. a_t is the action sampled by the agent at state s_t following πT. Note that while πT(s_t, a_t) is used in the update of the attention network, πB(s_t, a_t) is used in the update of the base network.
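To make the mechanics concrete, here is a hedged Python sketch of these two REINFORCE updates (ours, not the authors' implementation); `grad_log_pi_T` and `grad_log_pi_B` are placeholders for the gradients of log πT with respect to θa and of log πB with respect to θb, e.g. from an autodiff library:

```python
def reinforce_a2t_update(theta_a, theta_b, episode, ret, baseline,
                         grad_log_pi_T, grad_log_pi_B,
                         alpha_a=1e-3, alpha_b=1e-3):
    """One episode of the A2T REINFORCE updates sketched above."""
    advantage = ret - baseline                 # (r - b) for the whole episode
    for s_t, a_t in episode:                   # sum over t = 1..M
        # The attention parameters follow the mixture policy pi_T ...
        theta_a += alpha_a * advantage * grad_log_pi_T(theta_a, s_t, a_t)
        # ... while the base network is updated as if a_t came from pi_B.
        theta_b += alpha_b * advantage * grad_log_pi_B(theta_b, s_t, a_t)
    return theta_a, theta_b
```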
Actor-Critic methods [Konda & Tsitsiklis (2000)] are Temporal Difference (TD) methods that have two separate components, viz., an actor and a critic. The actor proposes a policy whereas the critic estimates the value function to critique the actor's policy. The updates to the actor happen through the TD error, which is the one-step estimation error that helps in reinforcing an agent's behaviour.

We use A2T for the actor part of the Actor-Critic. The architecture is shown in Fig. 1b. The actor, A2T, is aware of all the previously learnt tasks and tries to use those solution policies for its benefit. The critic evaluates the action selection from πT on the basis of the performance on the target task. With the same notations as REINFORCE for s_t, a_t, θa, θb, α_{θa}, α_{θb}, πB, πT, let action a_t dictated by πT lead the agent to next state s_{t+1} with a reward r_{t+1}, let V(s_t) represent the value of state s_t, and let γ be the discount factor. Then the TD error and the update equations for the actor are as below:

$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)$$

$$\theta_a \leftarrow \theta_a + \alpha_{\theta_a} \delta_t \frac{\partial \log \pi_T(s_t, a_t)}{\partial \theta_a}$$

$$\theta_b \leftarrow \theta_b + \alpha_{\theta_b} \delta_t \frac{\partial \log \pi_B(s_t, a_t)}{\partial \theta_b}$$

Here, δt is the TD error. The state-value function V of the critic is learnt using TD learning.

In this case, the solutions being transferred are the source tasks' action-value functions, which we will call Q functions. Thus, K1, . . . , KN, KB, KT → Q1, . . . , QN, QB, QT. Let A represent the discrete action space for the tasks and Q(s) = {Q(s, a_i) ∀ a_i ∈ A}. The agent acts by using QT in the target task, which is obtained as described in Eq. (1) and Eq. (2). The attention network and the base network of A2T are updated as described in the architecture.

The state-action value function Q is used to guide the agent in selecting the optimal action a at a state s, where Q(s, a) is a measure of the long-term return obtained by taking action a at state s. One way to learn optimal policies for an agent is to estimate the optimal Q(s, a) for the task. Q-learning [Watkins & Dayan (1992)] is an off-policy Temporal Difference (TD) learning algorithm that does so. The Q-values are updated iteratively through the Bellman optimality equation [Puterman (1994)] with the rewards obtained from the task as below:

$$Q(s, a) \leftarrow \mathbb{E}\left[ r(s, a, s') + \gamma \max_{a'} Q(s', a') \right]$$

In high-dimensional state spaces, it is infeasible to update the Q-value for all possible state-action pairs. One way to address this issue is by approximating Q(s, a) through a parametrized function approximator Q(s, a; θ), thereby generalizing over states and actions by operating on higher-level features [Sutton & Barto (1998)]. The DQN [Mnih et al. (2015)] approximates the Q-value function with a deep neural network to be able to predict Q(s, a) over all actions a, for all states s.

The loss function used for learning a Deep Q Network is as below:

$$L(\theta) = \mathbb{E}_{s,a,r,s'}\left[ \left( y^{DQN} - Q(s, a; \theta) \right)^2 \right], \qquad y^{DQN} = r + \gamma \max_{a'} Q(s', a'; \theta^-)$$

Here, L represents the expected TD error corresponding to the current parameter estimate θ. θ⁻ represents the parameters of a separate target network, while θ represents the parameters of the online network. The usage of a target network is to improve the stability of the learning updates. The gradient descent step is shown below:

$$\nabla_\theta L(\theta) = \mathbb{E}_{s,a,r,s'}\left[ \left( y^{DQN} - Q(s, a; \theta) \right) \nabla_\theta Q(s, a; \theta) \right]$$

To avoid correlated updates from learning on the same transitions that the current network simulates, an experience replay [Lin (1993)] D (of fixed maximum capacity) is used, where the experiences are pooled in a FIFO fashion.
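For reference, a minimal sketch of this DQN objective (ours, not the authors' code); `q_online` and `q_target` stand in for Q(·; θ) and Q(·; θ⁻), and the episode-termination handling via a `done` flag is our own standard addition:

```python
import numpy as np

def dqn_loss(batch, q_online, q_target, gamma=0.99):
    """Squared TD error of a minibatch against the target y^DQN."""
    s, a, r, s_next, done = batch                    # arrays over the minibatch
    q_sa = q_online(s)[np.arange(len(a)), a]         # Q(s, a; theta)
    y = r + gamma * (1.0 - done) * q_target(s_next).max(axis=1)  # y^DQN
    td_error = y - q_sa                              # drives the gradient step
    return np.mean(td_error ** 2)
```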
We use DQN to learn our experts Q_i, i ∈ 1, 2, . . . , N on the source tasks. Q-learning is used to ensure QT(s) is driven to a good estimate of the Q function for the target task. Taking advantage of the off-policy nature of Q-learning, both QB and QT can be learned from the experiences gathered by an ε-greedy behavioral policy based on QT. Let the attention network that outputs w be parametrised by θa and the base network outputting QB be parametrised by θb. Let θa⁻ and θb⁻ represent the parameters of the respective target networks. Note that the usage of target here is to signify the parameters (θa⁻, θb⁻) used to calculate the target value in the Q-learning update and is different from its usage in the context of the target task. The update equations are:

$$L_{Q_T}(\theta_a, \theta_b) = \mathbb{E}_{s,a,r,s'}\left[ \left( y^{Q_T} - Q_T(s, a; \theta_a, \theta_b) \right)^2 \right]$$

$$y^{Q_T} = r + \gamma \max_{a'} Q_T(s', a'; \theta_a^-, \theta_b^-)$$

$$L_{Q_B}(\theta_b) = \mathbb{E}_{s,a,r,s'}\left[ \left( y^{Q_T} - Q_B(s, a; \theta_b) \right)^2 \right]$$

$$\nabla_{\theta_a} L_{Q_T} = \mathbb{E}\left[ \left( y^{Q_T} - Q_T(s, a) \right) \nabla_{\theta_a} Q_T(s, a) \right]$$

$$\nabla_{\theta_b} L_{Q_B} = \mathbb{E}\left[ \left( y^{Q_T} - Q_B(s, a) \right) \nabla_{\theta_b} Q_B(s, a) \right]$$

θa and θb are updated with the above gradients using RMSProp. Note that the Q-learning updates for both the attention network and the base network use the target value y^{QT} generated by QT. We use target networks for both QB and QT to stabilize the updates and reduce the non-stationarity, as in DQN training. The parameters of the target networks are periodically updated to those of the online networks.
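A compact sketch of these A2T value-transfer losses, under the same assumptions as the DQN sketch above (the callables stand in for the online and target networks; this is illustrative, not the authors' implementation): both the combined head QT and the base head QB regress toward the same target y^QT.

```python
import numpy as np

def a2t_q_losses(batch, q_T, q_T_target, q_B, gamma=0.99):
    """A2T value transfer: one shared target, two squared-error losses."""
    s, a, r, s_next, done = batch
    y = r + gamma * (1.0 - done) * q_T_target(s_next).max(axis=1)   # y^QT
    idx = np.arange(len(a))
    loss_T = np.mean((y - q_T(s)[idx, a]) ** 2)   # updates theta_a and theta_b
    loss_B = np.mean((y - q_B(s)[idx, a]) ** 2)   # updates theta_b only
    return loss_T, loss_B
```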
"}, {"section_index": "7", "section_name": "EXPERIMENTS AND DISCUSSION", "section_text": "We evaluate the performance of our architecture A2T on policy transfer using two simulated worlds, viz., chain world and puddle world, as described below. The main goal of these experiments is to test the consistency of the results with the algorithm motivation.

Chain world: Figure 2a shows the chain world where the goal of the agent is to go from one point in the chain (the starting state) to another point (the goal state) in the least number of steps. At each state the agent can choose to either move one position to the left or to the right. After reaching the goal state the agent gets a reward that is inversely proportional to the number of steps taken to reach the goal.

Puddle worlds: Figures 2b and 2c show the discrete version of the standard puddle world that is widely used in the Reinforcement Learning literature. In this world, the goal of the agent is to go from a specified start position to the goal position, maximising its return. At each state the agent can choose one of these four actions: move one position to the north, south, east or west. With 0.9 probability the agent moves in the chosen direction and with 0.1 probability it moves in a random direction irrespective of its choice of action. On reaching the goal state, the agent gets a reward of +10. On reaching other parts of the grid the agent gets different penalties as mentioned in the legend of the figures.

We evaluate the performance of our architecture on value transfer using the Arcade Learning Environment (ALE) platform [Bellemare et al. (2012)]. Atari 2600: ALE provides a simulator for Atari 2600 games. This is one of the most commonly used benchmarks for deep reinforcement learning algorithms [Mnih et al. (2015), Mnih et al. (2016), Parisotto et al. (2015), Rusu et al. (2016)]. We perform our adaptive transfer learning experiments on Atari 2600 games.

Figure 2: Different worlds for policy transfer experiments: (a) Chain World, (b) Puddle World 1, (c) Puddle World 2.

In this section, we consider the case when multiple partially favorable source tasks are available, such that each of them can assist the learning process for different parts of the state space of the target task. The objective here is to first show the effectiveness of the attention network in learning to focus only on the source task relevant to the state the agent encounters while trying to complete the target task, and then to evaluate the full architecture with an additional randomly initialised base network.

Figure 3: (a) The weights given by the attention network — selective transfer in REINFORCE; panels (i) and (ii) show the attention weights on L1 and L2 across states. (b) Selective transfer in Actor-Critic — average number of steps per episode versus episode number.

This is illustrated for the Policy Transfer setting using the chain world shown in Fig. 2a. Consider that the target task LT is to start in A or B with uniform probability and reach C in the least number of steps. Now, consider that two learned source tasks, viz., L1 and L2, are available. L1 is the source task where the agent has learned to reach the left end (A) starting from the right end (B). In contrast, L2 is the source task where the agent has learned to reach the right end (B) starting from the left end (A). Intuitively, it is clear that the target task should benefit from the policies learnt for tasks L1 and L2. We learn to solve the task LT using REINFORCE given the policies learned for L1 and L2. Figure 3a (i) shows the weights given by the attention network to the two source task policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore L1, and L2, for the left, and right, half of the state space of the target task, respectively. Next, we add the base network and evaluate the full architecture on this task. Figure 3a (ii) shows the weights given by the attention network to the different source policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore L1, and L2, for the left, and right, half of the state space of the target task, respectively. As the base network replicates πT over time, it has a high weight throughout the state space of the target task.

We also evaluate our architecture in the relatively more complex puddle world shown in Figure 2c. In this case, L1 is the task of moving from S1 to G1, and L2 is the task of moving from S2 to G1. In the target task LT, the agent has to learn to move to G1 starting from either S1 or S2, chosen with uniform probability. We learn the task LT using the Actor-Critic method, where the following are available: (i) the learned policy for L1, (ii) the learned policy for L2, and (iii) a randomly initialized policy network (the base network). Figure 3b shows the performance results. We observe that Actor-Critic using A2T is able to use the policies learned for L1 and L2 and performs better than a network learning from scratch without any knowledge of the source tasks.

We do a similar evaluation of the attention network, followed by our full architecture, for value transfer as well. We create partially useful source tasks through a modification of the Atari 2600 game Pong. We take inspiration from a real-world scenario in the sport of tennis, where one could imagine two different right-handed (or left) players with the first being an expert player on the
forehand but weak on the backhand, while the second is an expert player on the backhand but weak. on the forehand. For someone who is learning to play tennis with the same style (right/left) as the. experts, it is easy to follow the forehand expert player whenever he receives a ball on the forehand. and follow the backhand expert whenever he receives a ball on the backhand..\nWe try to simulate this scenario in Pong. The trick is to blur the part of the screen where we want. to force the agent to be weak at returning the ball. The blurring we use is to just black out all pixels in the specific region required. To make sure the blurring doesn't contrast with the background, we modify Pong to be played with a black background (pixel value O) instead of the existing gray (pixel. value 87). We construct two partially helpful source task experts L1 and L2. L1 is constructed by.\nExpert 1 Expert 2 03 0.07 0.9\nFigure 4: Visualisation of the attention weights in the Selective Transfer with Attention Network experiment: Green and Blue bars signify the attention probabilities for Expert-1 (L1) and Expert- 2 (L2) respectively. We see that in the first two snapshots, the ball is in the lower quadrant and as expected, the attention is high on Expert-1, while in the third and fourth snapshots, as the ball bounces back into the upper quadrant, the attention increases on Expert-2.\ntraining a DQN on Pong with the upper quadrant (the agent's side) blurred, while L2 is constructec by training a DQN with the lower quadrant (the agent's side) blurred. This essentially results ir the ball being invisible when it is in the upper quadrant for L1 and lower quadrant for L2. We therefore expect L1 to be useful in guiding to return balls on the lower quadrant, and L2 for the upper quadrant. The goal of the attention network is to learn suitable filters and parameters so that i will focus on the correct source task for a specific situation in the game. The source task experts L. and L2 scored an average of 9.2 and 8 respectively on Pong game play with black background. Witl an attention network to suitably weigh the value functions of L1 and L2, an average performance of 17.2 was recorded just after a single epoch (250,000 frames) of training. (The score in Pong is in the range of [-21, 21). This clearly shows that the attention mechanism has learned to take advantage of the experts adaptively. Fig.4 shows a visualisation of the attention weights for the same.\nWe then evaluate our full architecture (A2T) in this setting, i.e with an addition of DQN learn-. ing from scratch (base network) to the above set-. ting. The architecture can take advantage of the. knowledge of the source task experts selectively. early on during the training while using the ex-. pertise of the base network wherever required, to. perform well on the target task. Figure 5 sum- marizes the results, where it is clear that learn- . ing with both the partially useful experts is better. than learning with only one of them which in turn. is better than learning from scratch without any. additional knowledge..\nWe now define an experiment using the puddle world from Figure 2b|for policy transfer. The target task in our experiment is to maximize the return in reaching the goal state G1 starting from any one of the states S1, S2, S3, S4. We artificially construct an unfavorable source task by first learning to solve the above task and then negating the weights of the topmost layer of the actor network We then add a favorable task to the above setting. 
We artificially construct a favorable source task simply by learning to solve the target task and using the learned actor network. Figure 6 shows the results.

Figure 5: Selective Value Transfer.

We first consider the case when only one learned source task is available, such that its solution K1 (policy or value) can hamper the learning process of the new target task. We refer to such a source task as an unfavorable source task. In such a scenario, the attention network shown in Figure 1a should learn to assign a very low weight (ignore) to K1. We also consider a modification of this setting by adding another source task whose solution K2 is favorable to the target task. In such a scenario, the attention network should learn to assign a high weight (attend) to K2 while ignoring K1.

The target task for the value transfer experiment is to reach expert-level performance on Pong. We construct two kinds of unfavorable source tasks for this experiment. Inverse-Pong: a DQN on Pong trained with negated reward functions, that is, with R'(s, a) = −R(s, a), where R(s, a) is the reward provided by the ALE emulator for choosing action a at state s. Freeway: an expert DQN on another Atari 2600 game, Freeway, which has the same range of optimal value functions and the same action space as Pong. We empirically verified that the Freeway expert DQN leads to negative transfer when directly initialized and fine-tuned on Pong, which makes it a good proxy for a negative source task expert even though the target task Pong has a different state space.

We artificially construct a favorable source task by learning a DQN to achieve expertise on the target task (Pong) and use the learned network. Figure 7a compares the performance of the various scenarios when the unfavorable source task is Inverse-Pong, while Figure 7b offers a similar comparison with the negative expert being Freeway.

Figure 7: Avoiding negative transfer and transferring value from a favorable task (higher the better): (a) avoiding negative transfer (Pong) and transferring from a favorable task; (b) avoiding negative transfer (Freeway) and transferring from a favorable task. Specific training and architecture details are mentioned in the APPENDIX. The plots are averaged over two runs with different random seeds.

From all the above results, we can clearly see that A2T does not get hampered by the unfavorable source task: it learns to ignore it and performs competitively with a randomly initialized network learning on the target task without any expert available. Secondly, in the presence of an additional source task that is favorable, A2T learns to transfer useful knowledge from it while ignoring the unfavorable task, thereby reaching expertise on the target task much faster than in the other scenarios.

We present the evolution of attention weights for the experiment described in Section 4.2, where
we focus on the efficacy of the A2T framework in providing an agent the ability to avoid negative transfer and transfer from a favorable source task (a perfect expert). Figure 8 depicts the evolution of the attention weights (normalised in the range [0, 1]) during the training of the A2T framework. The corresponding experiment is the case where the target task is to solve Pong, while there are two source task experts, one being a perfect Pong-playing trained DQN (to serve as the positive expert), and the other being the Inverse-Pong DQN trained with negated reward functions (to serve as the negative expert). Additionally, there is also the base network that learns from scratch using the experience gathered by the attentively combined behavioral policy from the expert networks, the base network and itself.

Figure 6: Avoiding negative transfer and transferring policy from a favorable task (lower the better).

We train the framework for 30 epochs, and the plot illustrates the attention weights every second epoch. We clearly see from Figure 8 that there is no weird co-adaptation that happens in the training, and the attention on the negative expert is uniformly low throughout. Initially, the framework needs to collect some level of experience to figure out that the positive expert is optimal (or close to optimal). Till then, the attention is mostly on the base network, which is learning from scratch. The attention then shifts to the positive expert, which in turn provides more rewarding episodes and transition tuples to learn from. Finally, the attention drifts slowly back from the positive expert to the base network, after which the attention is roughly random in choosing between the execution of the positive expert and the base network. This is because the base network has acquired expertise comparable to the positive expert, which happens to be optimal for the target task. This visualization clearly shows that A2T is able to avoid the negative expert throughout and to use the positive expert appropriately, learning from the experience gathered to acquire sufficient expertise on the target task.

4.4 WHEN A PERFECT EXPERT IS NOT AVAILABLE AMONG THE SOURCE TASKS

In our experiments in the previous subsection dealing with the prevention of negative transfer and the use of a favorable source task, we consider the positive expert as a perfect (close to optimal) expert on the same task we treat as the target task. This raises the question of relying on the presence of a perfect expert as a positive expert. If we have such a situation, the obvious solution is to execute each of the experts on the target task and vote for them with probabilities proportional to the average performance of each.

However, A2T is generic and not intended to just do source task selection. We illustrate this with an additional baseline experiment, where the positive source task is an imperfect expert on the target task. In such a case, just having a weighted-average voting among the available source task networks based on their individual average rewards is upper bounded by the
performance of the best available positive expert, which happens to be an imperfect expert on the target task. Rather, the base network has to acquire new skills not present in the source tasks. We choose a partially trained network on Pong that scores an average of 8 (max: 21). Figure 9 clearly shows that the A2T framework with a partial Pong expert and a negative expert performs better than (i) learning from scratch and (ii) A2T with only one negative expert, and worse than A2T with one perfect positive expert and one negative expert. This is expected, because a partial expert cannot provide as much expert knowledge as a perfect expert, but it still provides some useful knowledge that speeds up the process of solving the target task. An important conclusion from this experiment is that the A2T framework is capable of discovering new skills not available among any of the experts when such skills are required for optimally solving the target task. To maintain consistency, we perform the same number of runs for averaging scores and experimented with both learning rates, picking the better-performing one (0.00025).

Figure 8: Evolution of attention weights with one positive and one negative expert. (Weights for the positive expert, the negative expert and the base network are shown every second epoch from 1 to 30.)

Figure 9: Partial Positive Expert Experiment."}, {"section_index": "8", "section_name": "5 CONCLUSION AND FUTURE WORK", "section_text": "While in this work we focused on transfer between tasks that share the same state and action spaces and are in the same domain, the use of deep networks opens up the possibility of going beyond this setting. For example, a deep neural network can be used to learn common representations [Parisotto et al. (2015)] for multiple tasks, thereby enabling transfer between related tasks that could possibly have different state-action spaces. A hierarchical attention over the lower-level filters across source task networks, while learning the filters for the target task network, is another natural extension to transfer across tasks with different state-action spaces. The setup from Progressive Neural Networks [Rusu et al. (2016)] could be borrowed for the filter transfer, while the A2T setup can be retained for the policy/value transfer. Exploring this setting for continuous control tasks, so as to transfer from modular controllers as well as avoid negative transfer, is also a potential direction for future research.

The nature of tasks considered in our experiments is naturally connected to Hierarchical Reinforcement Learning and Continual Learning. For instance, the blurring experiments inspired by tennis, based on experts for specific skills like forehand and backhand, could be considered as learning from sub-goals (program modules) like forehand and backhand to solve a more complex and broader task like tennis by invoking the relevant sub-goals (program modules). This structure could be very useful to build a household robot for general-purpose navigation and manipulation, whereby specific skills such as manipulation of different objects, navigating across different source-destination points, etc. could be invoked when necessary.
The attention network in the A2T framework is essentiall a soft meta-controller and hence presents itself as a powerful differentiable tool for Continual an Meta Learning. Meta-Controllers have typically been been designed with discrete decision struc. ture over high level subgoals. This paper presents an alternate differentiable meta-controller with a. soft-attention scheme. We believe this aspect can be exploited for differentiable meta-learning ar chitectures for hierarchical reinforcement learning. Over all, we believe that A2T is a novel way t. approach different problems like Transfer Learning, Meta-Learning and Hierarchical Reinforcemen. Learning and further refinements on top of this design can be a good direction to explore.."}, {"section_index": "9", "section_name": "ACKNOWLEDGEMENTS", "section_text": "Thanks to the anonymous reviewers of ICLR 2017 who have provided thoughtful remarks and helped us revise the paper. We would also like to thank Sherjil Ozair, John Schulman, Yoshua Bengio, Sarath Chandar, Caglar Gulchere and Charu Chauhan for useful feedback about the work..\nIn this paper we present a very general deep neural network architecture, A2T, for transfer learning. that avoids negative transfer while enabling selective transfer from multiple source tasks in the same domain. We show simple ways of using A2T for policy transfer and value transfer. We empirically. evaluate its performance with different algorithms, using simulated worlds and games, and show. that it indeed achieves its stated goals. Apart from transferring task solutions, A2T can also be used. for transferring other useful knowledge such as the model of the world.."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by join learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.\nBikramjit Banerjee and Peter Stone. General game learning using knowledge transfer. In In Th Othlnternationallo tifciallntelioence?OO7\nMarc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environ ment: An evaluation platform for general agents. arXiv preprint arXiv:1207.4708, 2012\nFernando Fernandez and Manuela Veloso. Probabilistic policy reuse in a reinforcement learning agent. In Proceedings of the fifth international joint conference on Autonomous agents and mul- tiagent systems, pp. 720-727. ACM, 2006.\nAlessandro Lazaric and Marcello Restelli. Transfer from multiple mdps. In Advances in Neura Information Processing Systems, pp. 1746-1754, 2011.\nLong-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.\nVolodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.\nScott Niekum. Sachin Chitta. Andrew G Barto. Bhaskara Marthi. and Sarah Osentoski. Incremental semantically grounded learning from demonstration. In Robotics: Science and Systems, volume 9. 2013.\nEmilio Parisotto, Jimmy Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfe reinforcement learning. CoRR, abs/1511.06342, 2015.\nMartin L Puterman. Markov decision processes: Discrete stochastic dynamic programming. 199\nKimberly Ferguson and Sridhar Mahadevan. Proto-transfer learning in markov decision processes using spectral methods. Computer Science Department Faculty Publication Series, pp. 151, 2006\nVijay Konda and John Tsitsiklis. 
Actor-critic algorithms. In SIAM Journal on Control and Opti mization, pp. 1008-1014. MIT Press, 2000.\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wier stra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint. arXiv:1312.5602, 2013.\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Belle. mare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015..\nAndrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. CoRR. abs/1606.04671, 2016.\nJonathan Sorg and Satinder Singh. Transfer via soft homomorphisms. In Proceedings of The 8tl International Conference on Autonomous Agents and Multiagent Systems- Volume 2, pp. 741-748 International Foundation for Autonomous Agents and Multiagent Systems, 2o09..\nRichard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press Cambridge. MA. USA. 1st edition. 1998. ISBN 0262193981\nMatthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey The Journal of Machine Learning Research, 10:1633-1685, 2009.\nChristopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3):279-292, 1992\nRonald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992"}, {"section_index": "11", "section_name": "APPENDIX A: DETAILS OF THE NETWORK ARCHITECTURE IN VALUE TRANSFER EXPERIMENTS", "section_text": "For the source task expert DQNs, we use the same architecture as [Mnih et al.(2015)] where the. input is 84 84 4 with 32 convolution filters, dimensions 8 8, stride 4 4 followed by 64. convolution filters with dimensions 4 4 and stride 2 2, again followed by 64 convolution filters. of size 3 3 and stride 1 1. This is then followed by a fully connected layer of 512 units and finally by a fully connected output layer with as many units as the number of actions in Pong (Freeway. which is 3. We use ReLU nonlinearity in all the hidden layers.\nSpecifically, the NIPS architecture of Mnih et al.(2013) takes in a batch of 84 84 4 inputs followed by 16 convolution filters of dimensions 8 8 with stride 4 4. 32 convolution filters with dimensions 4 4 and stride 2 2, a fully connected hidden layer of 256 units, followed by the output layer. For the Selective Transfer with Blurring experiments described in Section 4.1, we use the second option above. For the other experiments in Section 4.2 and the additional experiments in Appendix, we use the first option. The attention network has N + 1 outputs where N is the number of source tasks."}, {"section_index": "12", "section_name": "LEARNING RATE", "section_text": "In all our experiments, we trained the architecture using the learning rates, 0.0025 and 0.0005. In general, the lower learning rate provided more stable (less variance) training curves. While com- paring across algorithms, we picked the best performing learning rate out of the two (0.0025 and 0.0005) for each training curve."}, {"section_index": "13", "section_name": "APPENDIX C: BLURRING EXPERIMENTS ON PONG", "section_text": "The experts are trained with blurring (hiding the ball) and black background as illustrated in AP PENDIX A. 
Therefore, to compare the learning with that of a random network without any addi tional knowledge, we ran the baseline DQN on Pong with a black background too. Having a black background provides a rich contrast between the white ball and the black background, thereby mak ing training easier and faster, which is why the performance curves in that setting are different tc the other two settings reported for Inverse Pong and Freeway Negative transfer experiments where no blacking is done and Pong is played with a gray background. The blurring mechanism in Pong is illustrated in APPENDIX E.\nWith respect to the A2T framework architecture, we have experimented with two possible architec. tures:\nThe base and attention networks following the NIPS architecture of Mnih et al.(2013] except that the output layer is softmax for the attention network. The base and attention networks following the Nature architecture of Mnih et al.(2015) with a softmax output layer for the attention network.\nFor all our experiments in Value Transfer, we used RMSProp as in [Mnih et al.(2015)] for updating gradient. For Policy Transfer, since the tasks were simple, stochastic gradient descent was sufficient to provide stable updates. We also use reward clipping, target networks and experience replay for our value transfer experiments in exactly the same way (all hyper parameters retained) as [Mnih et al. [2015)]. A training epoch is 250,000 frames and for each training epoch, we evaluate the networks with a testing epoch that lasts 125,000 frames. We report the average score over the completed episodes for each testing epoch. The average scores obtained this way are averaged over 2 runs with different random seeds. In the testing epochs, we use e = 0.05 in the e-greedy policy.\nAPPENDIX E: BLURRING MECHANISM IN PONG - DETAILS\n(a) Ball in upper quad (b) Blurred up. oer quad (c) Ball in lower quad (d) Blurred lower quad"}, {"section_index": "14", "section_name": "APPENDIX D: BLURRING EXPERIMENTS ON BREAKOUT", "section_text": "Similar to our Blurring experiment on Pong, we additionally ran another experiment on the Atari 2600 game, Breakout, to validate the efficiency of our attention mechanism. We consider a setup with two experts L1 and L2 along with our attention network. The experts L1 and L2 were trainec by blurring the lower left and right quadrants of the breakout screen respectively. We don't have to make the background black like in the case of Pong because the background is already black in Breakout and direct blurring is sufficient to hiding the ball in the respective regions without any contrasts introduced. We blur only the lower part so as to make it easy for the agent to at least anticipate the ball based on the movement at the top. We empirically observed that blurring the top half (as well) makes it hard to learn any meaningful partially useful experts L1 and L2.\nThe goal of this experiment is to show that the attention network can learn suitable filters so as tc. dynamically adapt and learn to select the expert appropriate to the situation (game screen) in the. task. The expert L1 which was blurred on the left bottom half is bound to weak at returning balls or. that region while L2 is expected to be weak on the right. This is in the same vein as the forehand. backhand example in Tennis and its synthetic simulation for Pong by blurring the upper and lowei. quadrants. During game play, the attention mechanism is expected to ignore L2 when the ball is. 
on the bottom right half (while focusing on L1), and similarly ignore L1 (while focusing on L2) when the ball is on the bottom left half. We learn experts L1 and L2, which score 42.2 and 39.8, respectively. Using the attention mechanism to select the correct expert, we were able to achieve a score of 94.5 after training for 5 epochs. Each training epoch corresponds to 250,000 decision steps, while the scores are averaged over completed episodes run for 125,000 decision steps. This shows that the attention mechanism learns to select the suitable expert. Though the performance is limited by the weaknesses of the respective experts, our goal is to show that the attention paradigm is able to take advantage of both experts appropriately. This is evident from the scores achieved by the standalone experts and the attention mechanism. Additionally, we also present a visualization of the attention mechanism's weights assigned to the experts L1 and L2 during game play in APPENDIX G. The weights assigned are in agreement with what we expect in terms of selective attention. The blurring mechanism is visually illustrated in APPENDIX F.

Figure 10: The figures above explain the blurring mechanism for the selective transfer experiments on Pong. The background of the screen is made black. Let X (84 × 84) denote an array containing the pixels of the screen. The paddle controlled by the agent is the one on the right. We focus on the two quadrants X1 = X[: 42, 42 :] and X2 = X[42 :, 42 :] of the Pong screen relevant to the agent-controlled paddle. To simulate an expert that is weak at returning balls in the upper quadrant, the portion of X1 up to the horizontal location of the agent-paddle, i.e., X1[:, : 31], is blacked out, while similarly, for simulating weakness in the bottom quadrant, we blur the portion of X2 up to the agent-paddle's horizontal location, i.e., X2[:, : 31] = 0. Figures 10a and 10b illustrate the scenario of blurring the upper quadrant, before and after blurring; Figures 10c and 10d do the same for blurring the lower quadrant. Effectively, blurring this way with a black screen is equivalent to hiding the ball (white pixels) in the appropriate quadrant where weakness is to be simulated. Hence, Figures 10b and 10d show the mechanisms used while training a DQN on Pong to hide the ball in the respective quadrants, so as to create the partially useful experts which are analogous to the forehand-backhand experts in tennis. X[: a, : b] indicates the subarray of X with all rows up to row index a and all columns up to column index b.

APPENDIX F: BLURRING MECHANISM IN BREAKOUT - DETAILS

Figure 11: The figures above explain the blurring mechanism used for the selective transfer experiments on Breakout. The background of the screen is already black. Let X (84 × 84) denote an array containing the pixels of the screen. We focus on the two quadrants X1 = X[31 : 81, 4 : 42] and X2 = X[31 : 81, 42 : 80]. We perform blurring in each case by setting X1 = 0 or X2 = 0 for all pixels within them, for training L1 and L2 respectively. Effectively, this is equivalent to hiding the ball in the appropriate quadrants. Blurring X1 simulates weakness in the lower left quadrant, while blurring X2 simulates weakness in the lower right quadrant. We don't blur all the way down to the last row, to ensure that the paddle controlled by the agent is visible on the screen. We also don't black out the rectangular border with a width of 4 pixels surrounding the screen. Figures 11a and 11b illustrate the scenario of blurring the lower left quadrant, before and after blurring; Figures 11c and 11d do the same for blurring the lower right quadrant. [Figure 11 panels: (a) Ball in lower-left quad; (b) Blurred lower-left quad; (c) Ball in lower-right quad; (d) Blurred lower-right quad.]
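A small NumPy sketch of the two masking schemes described in the captions above (the slice boundaries are copied from the text; `frame` is assumed to be an 84 × 84 grayscale screen, and the function names are ours):

```python
import numpy as np

def blur_pong(frame, upper=True):
    """Hide the ball in one agent-side quadrant of a Pong frame."""
    f = frame.copy()
    if upper:                      # weak on the upper quadrant
        f[:42, 42:][:, :31] = 0    # X1[:, :31] = 0
    else:                          # weak on the lower quadrant
        f[42:, 42:][:, :31] = 0    # X2[:, :31] = 0
    return f

def blur_breakout(frame, left=True):
    """Hide the ball in one lower quadrant of a Breakout frame."""
    f = frame.copy()
    if left:
        f[31:81, 4:42] = 0         # X1 = 0: weak on the lower left
    else:
        f[31:81, 42:80] = 0        # X2 = 0: weak on the lower right
    return f
```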
Figures|11a|and|11b illustrate the scenarios of blurring the lower left quadrant before and after blurring; and similarly dc 11c and 11d|for blurring the lower right quadrant.\nAPPENDIX G: BLURRING ATTENTION VISUALIZATION ON BREAKOUT\nE5 3.15 ).2 (.53 Expert 1 Expert 2\nFigure 12: Visualisation of the attention weights in the Selective Transfer with Attention for Break. out: Green and Blue bars signify the attention probabilities for Expert-1 (L1) and Expert-2 (L2 respectively on a scale of [0, 1]. We see that in the first two snapshots, the ball is in the lower righ. quadrant and as expected, the attention is high on Expert-1, while in the third and fourth snapshots. the ball is in the lower right quadrant and hence the attention is high on Expert-2..\n(a) Ball in lower-left quad (b) Blurred lower-left quad (c) Ball in lower-right quad (d) Blurred lower-right quad\n20 15 Learning from scratch 15 A2T with fav. and unfav. task 10 Direct Transfer from unfavorable task 10 5 5 sCore score 0 0 -5 -5 10 -10 -15 15 -20 Normal Pong -20 Sparse Pong -25 -25 0 5 10 15 20 25 0 5 10 15 20 Epoch Epoch (a) Comparison of Sparse Pong to Normal Pong (b) A2T with a positive and negative expert\n10 5 score 0 AAeerage 5 -10 -15 -20 -25 D\nFigure 13: This experiment is a case study on a target task where the performance is limited by data availability. So far, we focused on experiments where the target task is to solve Pong (normal or. black background) for Value Transfer, and Puddle Worlds for Policy Transfer. In both these cases, a. randomly initialized value (or policy) network learning without the aid of any expert network is able. to solve the target task within a reasonable number of epochs (or iterations). We want to illustrate a. case where solving the target task in reasonable time is hard and the presence of a favorable source. task significantly impacts the speed of learning. To do so, we consider a variant of Pong as our target. task. In this variant, only a small probability p of transition tuples (s, a, r, s') with non-zero reward r. are added to the Replay Memory (and used for learning through random batch sampling). This way. the performance on the target task is limited by the availability of rewarding (positive or negative). transitions in the replay memory. This synthetically makes the target task of Pong a sparse reward. problem because the replay memory is largely filled with transition tuples that have zero reward. We. do not use any prioritized sampling so as to make sure the sparsity has a negative effect on learning. to solve the target task. We use a version of Pong with black background (as used in Section 4.1. for the Blurring experiments) for faster experimentation. p = 0.1 was used for the plots illustrated above. Figure[13a|clearly shows the difference between a normal Pong task without any synthetic sparsity and the new variant we introduce. The learning is much slower and is clearly limited by data. availability even after 20 epochs (20 million frames) due to reward sparsity. Figure|13b|describes a comparison between the A2T setting with one positive expert which expertly solves the targel. task and one negative expert, learning from scratch, and direct fine-tuning on a negative expert. We. clearly see the effect of having the positive expert in one of the source tasks speeding up the learning. process significantly when compared to learning from scratch, and also see that fine-tuning on top. of a negative expert severely limits learning even after 20 epochs of training. 
We also see that the. A2T framework is powerful to work in sparse reward settings and avoids negative transfer even in. such cases, while also clearly learning to benefit from the presence of a target task expert among. the source task networks. Importantly, this experiment demonstrates that transfer learning has a. significant effect on tasks which may be hard (infeasible to solve within a reasonable training time). without any expert available. Further, A2T is also beneficial for such (sparse reward) situations when. accessing the weights of an expert network is not possible, and only outputs of the expert (policy. or value-function) can be used. Such synthetic sparse variants of existing tasks is a good way to. explore future directions in the intersection of Inverse Reinforcement Learning and Reward-Based. Learning, with A2T providing a viable framework for off-policy and on-policy learning.."}]
S1jmAotxg
[{"section_index": "0", "section_name": "STICK-BREAKING VARIATIONAL AUTOENCODERS", "section_text": "Eric Nalisnick\nDepartment of Computer Science University of California, Irvine\nWe extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes. This development allows us to define a Stick-Breaking Variational Autoencoder (SB-VAE), a Bayesian nonparametric version of the variational autoencoder that has a latent representation with stochastic dimensionality. We experimentally demonstrate that the SB-VAE, and a semi supervised variant, learn highly discriminative latent representations that often outperform the Gaussian VAE's."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep generative models trained via Stochastic Gradient Variational Bayes (SGVB) (Kingma 8 Welling, 2014a; Rezende et al., 2014) efficiently couple the expressiveness of deep neural networks with the robustness to uncertainty of probabilistic latent variables. This combination has lead tc their success in tasks ranging from image generation (Gregor et al., 2015; Rezende et al., 2016) tc semi-supervised learning (Kingma et al., 2014; Maalge et al., 2016) to language modeling (Bowmar et al., 2016). Various extensions to SGVB have been proposed (Burda et al., 2016; Maalge et al 2016; Salimans et al., 2015), but one conspicuous absence is an extension to Bayesian nonparametric processes. Using SGVB to perform inference for nonparametric distributions is quite attractive. Fo instance, SGVB allows for a broad class of non-conjugate approximate posteriors and thus has the potential to expand Bayesian nonparametric models beyond the exponential family distributions tc which they are usually confined. Moreover, coupling nonparametric processes with neural networl inference models equips the networks with automatic model selection properties such as a self determined width, which we explore in this paper.\nWe make progress on this problem by first describing how to use SGVB for posterior inference for the weights of Stick-Breaking processes (Ishwaran & James, 2001). This is not a straightforward task as the Beta distribution, the natural choice for an approximate posterior, does not have the differentiable non-centered parametrization that SGVB requires. We bypass this obstacle by using the little-known Kumaraswamy distribution (Kumaraswamy, 1980).\nUsing the Kumaraswamy as an approximate posterior, we then reformulate two popular deep gen erative models-the Variational Autoencoder (Kingma & Welling. 2014a) and its semi-supervisec variant (model M2 proposed by Kingma et al. (2014))into their nonparametric analogs. These mod els perform automatic model selection via an infinite capacity hidden layer that employs as many stick segments (latent variables) as the data requires. We experimentally show that, for datasets of natural images, stick-breaking priors improve upon previously proposed deep generative models by having a latent representation that better preserves class boundaries and provides beneficial regularization for semi-supervised learning."}, {"section_index": "2", "section_name": "2 BACKGROUND", "section_text": "We begin by reviewing the relevant background material on Variational Autoencoders (Kingma & Welling, 2014a), Stochastic Gradient Variational Bayes (also known as Stochastic Backpropagation). 
(Kingma & Welling, 2014a; Rezende et al., 2014), and Stick-Breaking Processes (Ishwaran & James 2001).\nPadhraic Smyth\nDepartment of Computer Science University of California, Irvine"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "A Variational Autoencoder (VAE) is model comprised of two multilayer perceptrons: one acts as a density network (MacKay & Gibbs, 1999) mapping a latent variable z; to an observed datapoint x; and the other acts as an inference model (Salimans & Knowles, 2013) performing the reverse mapping. from x; to z. Together the two form a computational pipeline that resembles an unsupervised autoencoder (Hinton & Salakhutdinov, 2006). The generative process can be written mathematically aS\nwhere p(z) is the prior and pe(x z) is the density network with parameters 0. The approximate posterior of this generative process, call it qo(z|x;), is then parametrized by the inference network (with parameters $). In previous work (Kingma & Welling, 2014a; Rezende & Mohamed, 2015 Burda et al., 2016; Li & Turner, 2016), the prior p(z) and variational posterior have been marginally Gaussian.\nThe VAE's generative and variational parameters are estimated by Stochastic Gradient Variational Bayes (SGVB). SGVB is distinguished from classical variational Bayes by it's use of differentiable Monte Carlo (MC) expectations. To elaborate, consider SGVB's approximation of the usual evidence. lowerbound (ELBO) (Jordan et al., 1999):\n1 L(0,$;xi) = S s=1\nC S a a log pe(xq|Zi,s) = log pe(xi|Zi,s do dzi s=1 s=1\nIn other words, z must have a functional form that deterministically exposes the variational distri bution's parameters and allows the randomness to come from draws from some fixed distribution Location-scale representations and inverse cumulative distribution functions are two examples of DNCPs. For instance, the VAE's Gaussian latent variable (with diagonal covariance matrix) is represented as z, = + O e where e ~ N(0, 1)."}, {"section_index": "4", "section_name": "2.3 STICK-BREAKING PROCESSES", "section_text": "Lastly, we define stick-breaking processes with the ultimate goal of using their weights for the VAE's prior p(z). A random measure is referred to as a stick-breaking prior (SBP) (Ishwaran & James, 2001) if it is of the form G(-) = =1 k0g, where oc, is a discrete measure concentrated at (k ~ Go, a draw from the base distribution Go (Ishwaran & James, 2001). The s are random weights independent of Go, chosen such that 0 1, and k k = 1 almost surely. SBPs have been termed as such because of their constructive definition known as the stick-breaking process (Sethuraman, 1994). Mathematically, this definition implies that the weights can be drawn according to the following iterative procedure:\nVj if k = 1 TT k (1 -v) for k > 1\nwhere vk ~ Beta(a, ). When vk ~ Beta(1, ao), then we have the stick-breaking construction for the Dirichlet Process (Ferguson, 1973). In this case, the name for the joint distribution over the infinite sequence of stick-breaking weights is the Griffiths, Engen and McCloskey distribution with concentration parameter Qo (Pitman, 2002): (1, 2, ...) ~ GEM(ao).\nZi ~ p(Z), X; ~ pe(X|Zj\nfor S samples of z, and where K L is the Kullback-Leibler divergence. An essential requirement. of SGVB is that the latent variable be represented in a differentiable, non-centered parametrization (DNCP) (Kingma & Welling, 2014b); this is what allows the gradients to be taken through the MC. 
expectation, 1.e.:"}, {"section_index": "5", "section_name": "3.1 COMPOSITION OF GAMMA RANDOM VARIABLES", "section_text": "for ~ Uniform(0, 1), shape parameter a, and scale parameter b. This approximation becomes pooi as a increases, however, and Knowles recommends a finite difference approximation of the inverse CDF when a > 1.\nAnother candidate posterior is the little-known Kumaraswamy distribution (Kumaraswamy, 1980). It is a two-parameter continuous distribution also on the unit interval with a density function defined as\nKumaraswamy(x; a, b) = abxa-1 (1 - xa)\nfor x E (0,1) and a,b > 0. In fact, if a = 1 or b = 1 or both, the Kumaraswamy and Beta are equivalent, and for equivalent parameter settings, the Kumaraswamy resembles the Beta albeit with higher entropy. The DNCP we desire is the Kumaraswamy's closed-form inverse CDF. Samples can be drawn via the inverse transform:\nx ~ (1 -u6 where u ~ Uniform(0, 1)\nNot only does the Kumaraswamy make sampling easy, its KL-divergence from the Beta can be closely approximated in closed-form (for ELBO computation).\nwhere e ~ N(0, 1). In the Probit SBP, g() is taken to be the Gaussian CDF, and it is chosen as such for posterior sampling considerations. This choice is impractical for our purposes, however, since the Gaussian CDF does not have a closed form. Instead, we use the logistic function q(x) = 1/(1 + e-x"}, {"section_index": "6", "section_name": "4 STICK-BREAKING VARIATIONAL AUTOENCODERS", "section_text": "Given the discussion above, we now propose the following novel modification to the VAE. Instead of drawing the latent variables from a Gaussian distribution, we draw them from the GEM distribution making the hidden representation an infinite sequence of stick-breaking weights. We term this model a Stick-Breaking Variational Autoencoder (SB-VAE) and below detail the generative and inference processes implemented in the decoding and encoding models respectively.\nHaving covered the relevant background material, we now discuss the first contribution of this paper. using Stochastic Gradient Variational Bayes for the weights of a stick-breaking process. Inference for the random measure G() is an open problem that we leave to future work. We focus on performing. inference for just the series of stick-breaking weights, which we will refer to as GEM random. variables after their joint distribution..\nIn the original SGVB paper, Kingma & Welling (2014a) suggest representing the Beta distribution as a. composition of Gamma random variables by using the fact v ~ Beta(a, ) can be sampled by drawing Gamma variables x ~ Gamma(a, 1), y ~ Gamma(, 1) and composing them as v = x/(x + y) However, this representation still does not admit a DNCP as the Gamma distribution does not have one with respect to its shape parameter. Knowles (2015) suggests that when the shape parameter is near zero, the following asymptotic approximation of the inverse CDF is a suitable DNCP:.\nuaF(a)) b\nAnother promising parametrization is inspired by the Probit Stick-Breaking Process (Rodriguez & Dunson, 2011). In a two-step process, we can draw a Gaussian and then use a squashing function to map it on (0, 1):\nUk =g(k +OkE\nFigure 1: Subfigures (a) and (b) show the plate diagrams for the relevant latent variable models Solid lines denote the generative process and dashed lines the inference model. Subfigure (a) shows the finite dimensional case considered in (Kingma & Welling, 2014a), and (b) shows the infinite dimensional case of our concern. 
Subfigure (c) shows the feedforward architecture of the Stick Breaking Autoencoder, which is a neural-network-based parametrization of the graphical model in (b)."}, {"section_index": "7", "section_name": "4.1 GENERATIVE PROCESS", "section_text": "The generative process is nearly identical to previous VAE formulations. The crucial difference is that we draw the latent variable from a stochastic process, the GEM distribution. Mathematically, the hierarchical formulation is written as\nwhere ; is the vector of stick-breaking weights and ao is the concentration parameter of the GEM distribution. The likelihood model pe(x,[, ) is a density network iust as described in Section 2.1."}, {"section_index": "8", "section_name": "4.2 INFERENCE", "section_text": "The inference process-how to draw ; ~ qo(;[z)-requires modification from the standard VAE's in order to sample from the GEM's stick-breaking construction. Firstly, an inference network computes the parameters of K fraction distributions and samples values vi,k according to one of the parametrizations in Section 3. Next, a linear-time operation composes the stick segments from the sampled fractions:\nThe computation path is summarized in Figure 1 (c) with arrows denoting the direction of feedforward computation. The gray blocks represent any deterministic function that can be trained with gradient descent-i.e. one or more neural network layers. Optimization of the SB-VAE is done just as for the VAE, by optimizing Equation 2 w.r.t. and 0. The KL divergence term can be computed (or closely approximated) in closed-form for all three parametrizations under consideration; the Kumaraswamy-to-Beta KL divergence is given in the appendix.\nAn important detail is that the Kth fraction vi,K is always set to one to ensure the stick segments sum. to one. This truncation of the variational posterior does not imply that we are using a finite dimensional prior. As explained by Blei & Jordan (2o06), the truncation level is a variational parameter and not part of the prior model specification. Truncation-free posteriors have been proposed, but these methods use split-and-merge steps (Hughes et al., 2015) or collapsed Gibbs sampling, both of which are not applicable to the models we consider. Nonetheless, because SGVB imposes few limitations on the inference model, it is possible to have an untruncated posterior. We conducted exploratory experiments using a truncation-free posterior by adding extra variational parameters in an on-line fashion, initializing new weights if more than 1% of the stick remained unbroken. However, we found this made optimization slower without any increase in performance.\n(c) The Stick-Breaking Variational Autoencoder\n7i ~ GEM(Q0),Xi ~ pe(xi|i\nK-1 Ti=(i,1,Ti,2,...,Ti,K Vi,1,Vi,2(1-Vi,1),...,(1-Vi, j=1"}, {"section_index": "9", "section_name": "S SEMI-SUPERVISED MODEL", "section_text": "We also propose an analogous approach for the semi-supervised relative of the VAE, the M2 model. described by Kingma et al. (2014). A second latent variable y; is introduced that represents a class label. Its distribution is the categorical one: qo(yi|xi) = Cat(y|gy(x)) where gy is a non-linear. function of the inference network. Although y's distribution is written as independent of z, the two. share parameters within the inference network and thus act to regularize one another. We assume the same factorization of the posterior and use the same objectives as in the finite dimensional version (Kingma et al., 2014). 
Since yi is present for some but not all observations, semi-supervised DGMs. need to be trained with different objectives depending on whether the label is present or not. If the. label is present, following Kingma et al. (2014) we optimize.\nS J(0,$;Xi,Yi) logpe(xi|i,s,Yi)- K L(qo(i|xi)||p(i;Qo)) + log qo(Yi|X S\nwhere log go(y,|x,) is the log-likelihood of the label. And if the label is missing, we optimize\nS J(0,$;xi) = qg(Yj|xi) [logPe(xi|i,s,Yj)]+H[qo(y|Xi) S s=1 Yj\nwhere lH[qo(y[x,)is the entropy of y's variational distribution\nIn regards to the autoencoder implementations we describe, they are closely related to the existing work on representation learning with adaptive latent factors-i.e. where the number of latent dimensions grows as the data necessitates. The best known model of this kind is the infinite binary latent feature model defined by the Indian Buffet Process (Ghahramani & Griffiths, 2005): but its discrete latent variables prevent this model from admitting fully differentiable inference. Recent work that is much closer in spirit is the Infinite Restricted Boltzmann Machine (iRBM) (Cote & Larochelle, 2016), which has gradient-based learning, expands its capacity by adding hidden units and induces a similar ordering on latent factors. The most significant difference between our SB-VAF and the iRBM is that the latter's nonparametric behavior arises from a particular definition of the energy function of the Gibbs distribution, not from an infinite dimensional Bayesian prior. Lastly, our training procedure bears some semblance to Nested Dropout (Rippel et al., 2014), which removes all hidden units at an index lower than some threshold index. The SB-VAE can be seen as performing soft nested dropout since the latent variable values decrease as their index increases."}, {"section_index": "10", "section_name": "7 EXPERIMENTS", "section_text": "We analyze the behavior of the three parametrizations of the SB-VAE and examine how they compare to the Gaussian VAE. We do this by examining their ability to reconstruct the data (i.e. density estimation) and to preserve class structure. Following the original DGM papers (Kingma et al., 2014: Kingma & Welling, 2014a; Rezende et al., 2014), we performed unsupervised and semi-supervised\n'During preparation of this draft, the work of Ruiz et al. (2016) on the Generalized Reparametrizatior Gradient was released (on 10/7/16), which can be used for Beta random variables. We plan to compare thei technique to our proposed use of the Kumaraswamy in subsequent drafts.\nTo the best of our knowledge, neither SGVB nor any of the other recently proposed amortized VI methods (Kingma & Welling, 2014b; Rezende & Mohamed, 2015; Rezende et al., 2014; Tran et al. 2016) have been used in conjunction with BNP priors. There has been work on using nonparametric. posterior approximations-in particular, the Variational Gaussian Process (Tran et al., 2016)but in that work the variational distribution is nonparametric, not the generative model. Moreover, we are not aware of prior work that uses SGVB for Beta (or Beta-like) random variables'..\nEnror 200 250 1400 Gamma-SB VAE 180 Gamma-SB VAE Gamma-SB VAE 1200 Logit-SB VAE Logit-SB VAE Logit-SB VAE 160 200 1000 Kumar-SB VAE Kumar-SB VAE Kumar-SB VAE Gauss VAE 140 800 Gauss VAE Gauss VAE 120 150 600 (eretteee) 100 400 80 100 200 60 00 50 100150200 250 300 350 50 0 100 200300 400500600 700800 100 200 300 400 500 600 700 800 Training Epoch Training Epoch Training Epoch. 
(a) Frey Faces (b) MNIST (c) MNIST+rot\nFigure 2: Subfigure (a) shows test (expected) reconstruction error vs training epoch for the SB-VAF and Gauss VAE on the Frey Faces dataset, subfigure (b) shows the same quantities for the same models on the MNIST dataset, and subfigure (c) shows the same quantities for the same models or the MNIST+rot dataset.\ntasks on the following image datasets: Frey Faces2, MNIST, MNIST+rot, and Street View House Numbers3 (SVHN). MNIST+rot is a dataset we created by combining MNIST and rotated MNIST4 for the purpose of testing the latent representation under the conjecture that the rotated digits should use more latent variables than the non-rotated ones.."}, {"section_index": "11", "section_name": "7.1 UNSUPERVISED", "section_text": "We first performed unsupervised experiments testing each model's ability to recreate the data as. well as preserve class structure (without having access to labels). The inference and generative models both contained one hidden layer of 200 units for Frey Faces and 500 units for MNIST and MNIST+rot. For Frey Faces, the Gauss VAE had a 25 dimensional (factorized) distribution, and we set the truncation level of the SB-VAE also to K = 25, so the SB-VAE could use only as many latent variables as the Gauss VAE. For the MNIST datasets, the latent dimensionality/truncation-level was set at 50. Cross-validation chose Qo = 1 for Frey Faces and ao = 5 for both MNISTs..\nDensity Estimation. In order to show each model's optimization progress, Figure 2 (a), (b), and (c). report test expected reconstruction error (i.e. the first term in the ELBO) vs training progress (epochs). for Frey Faces, MNIST, and MNIST+rot respectively. Optimization proceeds much the same in both models except that the SB-VAE learns at a slightly slower pace for all parametrizations. This is not. too surprising since the recursive definition of the latent variables likely causes coupled gradients..\nWe compare the final converged models in Table 1, reporting the marginal likelihood of each model via the MC approximation log p(x) ~ log )s p(x;[Zi.s)p(Zi.s)/q(Zi,s) using 100 samples. The Gaussian VAE has a better likelihood than all stick-breaking implementations (~ 96 vs ~ 98) Between the stick-breaking parametrizations, the Kumaraswamy outperforms both the Gamma and Gauss-Logit on both datasets, which is not surprising given the others' flaws (i.e. the Gamma is approximate, the Gauss-Logit is restricted). Given this result, we used the Kumaraswamy parametriza-\n2Available at http://www.cs.nyu.edu/~roweis/data.html 3Available at http://ufldl.stanford.edu/housenumbers/ 4Available at http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/MnistVariations 5Theano implementations available at https: //github. com/enalisnick/stick-breaking. dgms\nComplete implementation and optimization details can be found in the appendix and code repository In all experiments, to best isolate the effects of Gaussian versus stick-breaking latent variables, the same architecture and optimization hyperparameters were used for each model. The only difference was in the prior: p(z) = N(0, 1) for Gaussian latent variables and p(v) = Beta(1, ao) (Dirichlet process) for stick-breaking latent variables. We cross-validated the concentration parameter over the range ao E {1, 3, 5, 8}. The Gaussian model's performance potentially could have been improved by cross validating its prior variance. 
However, the standard Normal prior is widely used as a default choice (Bowman et al., 2016; Gregor et al., 2015; Kingma et al., 2014; Kingma & Welling 2014a; Rezende et al., 2014; Salimans et al., 2015), and our goal is to experimentally demonstrate a stick-breaking prior is a competitive alternative.\nlogpxi Model MNIST MNIST+rot Gauss VAE 96.80 108.40 Kumar-SB VAE 98.01 112.33 Logit-SB VAE 99.48 114.09 Gamma-SB VAE 100.74 113.22\nTable 1: Marginal likelihood results (estimated) for Gaussian VAE and the three parametrizations of the Stick-Breaking VAE.\n50 091.30 5994 (b) Gauss VAE: MNIST Samples (a) SB-VAE: MNIST San mnles Drawn from Priot\nFigure 3: Subfigure (a) depicts samples from the SB-VAE trained on MNIST. We show the ordered. factored nature of the latent variables by sampling from Dirichlet's of increasing dimensionality Subfigure (b) depicts samples from the Gauss VAE trained on MNIST..\ntion for all subsequently reported experiments. Note that the likelihoods reported are worse than the ones reported by Burda et al. (2016) because our training set consisted of 50k examples whereas theirs contained 60k (training and validation)..\nWe also investigated whether the SB-VAE is using its adaptive capacity in the manner we expect, i.e the SB-VAE should use a larger latent dimensionality for the rotated images in MNIST+rot than i. does for the non-rotated ones. We examined if this is the case by tracking how many 'breaks' it tool the model to deconstruct 99% of the stick. On average, the rotated images in the training set were represented by 28.7 dimensions and the non-rotated by 27.4. Furthermore, the rotated images usec. more latent variables in eight out of ten classes. Although the difference is not as large as we were expecting, it is statistically significant. Moreover, the difference is made smaller by the non-rotated one digits, which use 32 dimensions on average, the most for any class. The non-rotated average. decreases to 26.3 when ones are excluded..\nFigure 3 (a) shows MNIST digits drawn from the SB-VAE by sampling from the prior-i.e. vk ~ Beta(1, 5), and Figure 3 (b) shows Gauss VAE samples for comparison. SB-VAE samples using all fifty dimensions of the truncated posterior are shown in the bottom block. Samples from Dirichlets constrained to a subset of the dimensions are shown in the two columns in order to test that the latent features are concentrating onto lower-dimensional simplices. This is indeed the case: adding a latent variable results in markedly different but still coherent samples. For instance, the second and third dimensions seem to capture the 7-class, the fourth and fifth the 6-class, and the eighth the 5-class. The seventh dimension seems to model notably thick digits.\nDiscriminative Qualities. The discriminative qualities of the models' latent spaces are assessed by running a k-Nearest Neighbors classifier on (sampled) MNIST latent variables. Results are shown in the table in Figure 4 (a). The SB-VAE exhibits conspicuously better performance than the Gauss VAE at all choices of k, which suggests that although the Gauss VAE converges to a better likelihood. the SB-VAE's latent space better captures class structure. We also report results for two Gaussian mixture VAEs: Dilokthanakul et al. (2016)'s Gaussian mixture Variational Autoencoder (GMVAE)\nk=3 k=5 k=10 SB-VAE 9.34 8.65 8.90 DLGMM 9.14 8.38 8.42 Gauss VAE 28.4 20.96 15.33 Raw Pixels 2.95 3.12 3.35 GMVAE6 8.96 MNIST Or kNNL on. 
Jlatentd\nFigure 4: Subfigure (a) shows results of a kNN classifier trained on the latent representations producec oy each model. Subfigures (b) and (c) show t-SNE projections of the latent representations learnec by the SB-VAE and Gauss VAE respectively.\nLatent Space Sparsity vs Decoder Sparsity 8 Latent Space Sparsity vs Decoder Sparsity 10 KLD from Prior Average Value Decoder Weight Norm 0.6 Rescaled Decoder Weight Norm 14 12 0.5 10 8 0.3 0.1 10 20 3C 40 50 10 20 30 40 50 Latent Dimension Latent Dimension (a) Gauss VAE (b) Stick-Breaking VAE\nFigure 5: Sparsity in the latent representation vs sparsity in the decoder network. The Gaussian VAE turns off' unused latent dimensions by setting the outgoing weights to zero (in order to dispel the sampled noise). The SB VAE, on the other hand, also has sparse representations but without decay of the associated decoder weights.\nand Nalisnick et al. (2016)'s Deep Latent Gaussian Mixture Model (DLGMM). The GMVAE ha sixteen mixture components and the DLGMM has five, and hence both have many more parameter than the SB-VAE. Despite the SB-VAE's lower capacity, we see that its performance is competitive to the mixture VAEs' (8.65 vs 8.38/8.96).\nCombating Decoder Pruning. The 'component collapsing' behavior of the variational autoencode. has been well noted (Maalge et al., 2016): the model will set to zero the outgoing weights of laten. variables that remain near the prior. Figure 5 (a) depicts this phenomenon for the Gauss VAI. by plotting the KL divergence from the prior and outgoing decoder weight norm for each laten dimension. We see the weights are only nonzero in the dimensions in which there is posterio. deviation. Ostensibly the model receives only sampling noise from the dimensions that remain at the. prior, and setting the decoder weights to zero quells this variance. While the behavior of the Gaus. VAE is not necessarily improper, all examples are restricted to pass through the same latent variables. A sparse-coded representation-one having few active components per example (like the Gauss VAE but diversity of activations across examples-would likely be better.\nWe compare the activation patterns against the sparsity of the decoder for the SB-VAE in Figure 5 (b) Since KL-divergence doesn't directly correspond to sparsity in stick-breaking latent variables like it. does for Gaussian ones, the black lines denote the average activation value per dimension. Similarly to (a), blue lines denoted the decoder weight norms, but they had to be down-scaled by a factor of 100 so they could be visualized on the same plot. The SB-VAE does not seem to have any component. collapsing, which is not too surprising since the model can set latent variables to zero to deactivate decoder weights without being in the heart of the prior. We conjecture that this increased capacity is\n6The GMVAE's evaluation is different from performing kNN. Rather, test images are assigned to clusters an whole clusters are given a label. Thus results are not strictly comparable but the ultimate goal of unsupervise MNIST classification is the same.\nThe discriminative qualities of the SB-VAE's latent space are further supported by Figures 4 (b) and. (c). t-SNE was used to embed the Gaussian (c) and stick-breaking (b) latent MNIST representations into two dimensions. 
Digit classes (denoted by color) in the stick-breaking latent space are clustered with noticeably more cohesion and separation..\nTable 2: Percent error on three semi-supervised classification tasks with 10%, 5%, and 1% of labels present for training. Our DGM with stick-breaking latent variables (SB-DGM) is compared with a. DGM with Gaussian latent variables (Gauss-DGM), and a k-Nearest Neighbors classifier (k=5)..\nthe reason stick-breaking variables demonstrate better discriminative performance in many of our experiments."}, {"section_index": "12", "section_name": "7.2 SEMI-SUPERVISED", "section_text": "Quantitative Evaluation. Table 2 shows percent error on a test set when training with the specified percentage of labeled examples. We see the the SB-DGM performs markedly better across almost all experiments. The Gauss-DGM achieves a superior error rate only on the easiest tasks: MNIST with 10% and 5% of the data labeled.\nWe have described how to employ the Kumaraswamy distribution to extend Stochastic Gradient. Variational Bayes to the weights of stick-breaking Bayesian nonparametric priors. Using this. development we then defined deep generative models with infinite dimensional latent variables and. showed that their latent representations are more discriminative than those of the popular Gaussian. variant. Moreover, the only extra computational cost is in assembling the stick segments, a linear. operation on the order of the truncation size. Not only are the ideas herein immediately useful as. presented, they are an important first-step to integrating black box variational inference and Bayesian. nonparametrics, resulting in scalable models that have differentiable control of their capacity. In. particular, applying SGVB to full Dirichlet processes with non-trivial base measures is an interesting. next step. Furthermore, differentiable stick-breaking has the potential to increase the dynamism and adaptivity of neural networks, a subject of recent interest (Graves, 2016), in a probabilistically. principled way."}, {"section_index": "13", "section_name": "ACKNOWLEDGEMENTS", "section_text": "Many thanks to Marc-Alexandre Cote and Hugo Larochelle for helpful discussions. This work was supported in part by NSF award number IIS-1320527."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "David M Blei and Michael I Jordan. Variational inference for Dirichlet process mixtures. Bayesiar Analysis, pp. 121-143. 2006.\nWe also performed semi-supervised classification, replicating and extending the experiments in the original semi-supervised DGMs paper (Kingma et al., 2014). We used the MNIST, MNIST+rot, and SVHN datasets and reduced the number of labeled training examples to 10%, 5%, and 1% of the total training set size. Labels were removed completely at random and as a result, class imbalance was all but certainly introduced. Similarly to the unsupervised setting, we compared DGMs with stick-breaking (SB-DGM) and Gaussian (Gauss-DGM) latent variables against one another and a baseline k-Nearest Neighbors classifier (k=5). We used 50 for the latent variable dimensionality / truncation level. The MNIST networks use one hidden layer of 500 hidden units. The MNIST+rot and SVHN networks use four hidden layers of 500 units in each. The last three hidden layers have identity function skip-connections. 
Cross-validation chose ao = 5 for MNISTs and ao = 8 for SVHN.\nSamuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio Generating sentences from a continuous space. CoNLL, 2016.\nMarc-Alexandre Cote and Hugo Larochelle. An infinite restricted Boltzmann machine. Neura Computation, 2016.\nAlex Graves. Adaptive computation time for recurrent neural networks. ArXiv e-prints, 2016\nMichael I Jordan. Zoubin Ghahramani. Tommi S Jaakkola, and Lawrence K Saul. An introduction t variational methods for graphical models. Machine learning, 1999.\nDiederik Kingma and Max Welling. Auto-encoding variational bayes. International Conference or Learning Representations (ICLR), 2014a\nDiederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervise learning with deep generative models. Neural Information Processing Systems (NIPs), 2014\nDavid A Knowles. Stochastic gradient variational bayes for gamma approximating distributions ArXiv e-prints, 2015.\nYingzhen Li and Richard E Turner. Variational inference with renyi divergence. Neural Information Processing Systems (NIPS), 2016.\nLars Maalge, Casper Kaae Sgnderby, Sgren Kaae Sonderby, and Ole Winther. Auxiliary deey generative models. International Conference on Machine Learning (ICML), 2016\nDavid JC MacKay and Mark N Gibbs. Density networks. Statistics and neural networks: advances at the interface, 1999\nYuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Interna tional Conference on Learning Representations (ICLR), 2016..\nim Pitman. Combinatorial stochastic processes. UC Berkeley Technical Report (621), 2002\nAbel Rodriguez and David B Dunson. Nonparametric bayesian models through probit stick-breaking processes. Bayesian analysis, 2011\nTim Salimans, Diederik Kingma, and Max Welling. Markov chain monte carlo and variationa. inference: Bridging the gap. International Conference on Machine Learning (CML). 2015\nJayaram Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 1994\nDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. International Conference on Machine Learning (ICML), 2014.\nFrancisco Ruiz, Michalis Titsias, and David Blei. The generalized reparameterization gradien Neural Information Processing Systems (NIPS), 2016.\nTim Salimans and David A Knowles. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 2013.\nustin Tran, Rajesh Ranganath, and David M Blei. Variational Gaussian process. Internationo Conference on Learning Representations (ICLR), 2016."}, {"section_index": "15", "section_name": "APPENDIX", "section_text": "The Kullback-Leibler divergence between the Kumaraswamy and Beta distributions is\na log ab + log B(a, )-- 1 6 m + a6 m=1"}, {"section_index": "16", "section_name": "EXPERIMENTS AND OPTIMIZATION", "section_text": "Below we give more details about our experiments and model optimization.. Alsowe have released Theano implementations available at https://github.com/enalisnick/ stick-breaking dgms. All experiments were run on AWS G2.2XL instances..\nIn regards to datasets, we used the following train-valid-test splits. Frey Faces was divided into. {1500. 100. 300}. MNIST int0 {45000. 5000. 10000}. MNIST+r0t int0 {70000. 10000. 20000}. and SVHN into {65000, 8257, 26032}. 
SVHN was the only dataset that underwent preprocessing; following (Kingma et al., 2014), we reduced the dimensionality via PCA to 500 dimensions that. capture 99.9% of the data's variance. No effort was made to preserve class balance across train-valid. splits nor during label removal for the semi-supervised tasks..\nRegarding optimization, all models were trained with minibatches of size 100 and using AdaM. (Kingma & Ba, 2014) to set the gradient descent step size. For AdaM, we used a = 0.0003 b1 = 0.95, and b2 = 0.999 in all experiments. Early stopping was used during semi-supervisec. training with a look-ahead threshold of 30 epochs. For the semi-supervised deep generative models. classification loss needs up-weighted in some way. In (Kingma et al., 2014), an extra weight wa. placed on the label log likelihood term. We attempted this strategy but attained better performanc. (for all models) by re-weighting the contribution of the supervised data within each mini-batch, i.e. XVJ(0, ; x,, yi) + (1 A)J(0, ; x;). We calibrated X by comparing the log likelihood of the. supervised and unsupervised data in preliminary training runs and setting the parameter such that th supervised data had a slightly higher likelihood. For the MNIST datasets, X = .375, and for SVHN X = .45.\nwhere q is Kumaraswamy(a,b), p is Beta(a,). Above y is Euler's constant, () is the Digamma function, and B() is the Beta function. The infinite sum is present in the KLD because a Taylor expansion is needed to represent Eg[log(1 - vk)]; it should be well approximated by the first few. terms.\nAs for the model architectures, all experiments used ReLUs exclusively for hidden unit activations The dimensionality / truncation-level of the latent variables was set at 50 for every experiment except Frey Faces. All weights were initialized by drawing from N(0, 0.001 . 1), and biases were set to zero to start. No regularization (dropout, weight decay, etc) was used, and only one sample was used for each calculation of the Monte Carlo expectations. We used the leading ten terms to compute the infinite sum in the KL divergence between the Beta and Kumaraswamy."}]
S1LVSrcge
[{"section_index": "0", "section_name": "VARIABLE COMPUTATION IN RECURRENT NEURAL NETWORKS", "section_text": "Yacine Jernite\nDepartment of Computer Science New York University. New York. NY 10012. USA\negrave,ajoulin, tmikolov}@fb.com\nRecurrent neural networks (RNNs) have been used extensively and with increasing success to model various types of sequential data. Much of this. progress has been achieved through devising recurrent units and architectures. with the flexibility to capture complex statistics in the data, such as long range. dependency or localized attention phenomena. However, while many sequential data (such as video, speech or language) can have highly variable information. flow, most recurrent models still consume input features at a constant rate and perform a constant number of computations per time step, which can be detrimental to both speed and model capacity. In this paper, we explore a. modification to existing recurrent units which allows them to learn to vary the. amount of computation they perform at each step, without prior knowledge of the sequence's time structure. We show experimentally that not only do our models require fewer operations, they also lead to better performance overall on. evaluation tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The class of Recurrent Neural Network models (RNNs) is particularly well suited to dealing with sequential data, and has been successfully applied to a diverse array of tasks, such as language modeling and speech recognition (Mikolov2012), machine translation (Mikolov]2012) Cho et al. 2014a), or acoustic modeling (Robinson et al.f1993f Graves & Jaitly2014) among others.Two factors have been instrumental in allowing this paradigm to be so widely adopted and give rise to the aforementioned successes. On the one hand, recent advances in both hardware and software have had a significant role in bringing the training of recurrent models to tractable time periods. On the other hand, novel units and architectures have allowed recurrent networks to model certain features of sequential data better than Elman's simple RNN architecture (Elman!|1990). These include such developments as the LSTM (Hochreiter & SchmidhuberJ 1997) and GRU (Cho et al.|2014a) units, which can more easily learn to model long range interactions (Chung et al.]2014), or attention mechanisms that allow the model to focus on a specific part of its history when making a prediction (Bahdanau et al.]2014). In this work, we focus on another feature of recurrent networks: the ability to efficiently model processes happening at different and possibly varying time scales.\nMost existing recurrent models take one of two approaches regarding the amount of computation. they require. Either the computational load is constant over time, or it follows a fixed (or deter-. ministic) schedule (Koutnik et al.[2014), (Mikolov et al.2014). The latter approach has proven especially useful when dealing with sequences which reflect processes taking place at different lev-.\nWork done at Facebook AI Research"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this work, we show how to modify two commonly used recurrent unit architectures, namely the. Elman and Gated Recurrent Unit, to obtain their variable computation counterparts. This gives rise. to two new architecture, the Variable Computation RNN and Variable Computation GRU (VCRNN. 
and VCGRU), which take advantage of these phenomena by deciding at each time step how much. computation is required based on the current hidden state and input. We show that the models learn time patterns of interest, can perform fewer operations, and may even take advantage of these time structures to produce better predictions than the constant computation versions..\nWe start by giving an overview of related work in Section 2] provide background on the class of Recurrent Neural Networks in Section 3] describe our model and learning procedure in Section 4 and present experimental results on music as well as bit and character level language modeling ir section 5] Finally, Section 6 concludes and lays out possible directions for future work."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "How to properly handle sequences which reflect processes happening at different time scales has. been a widely explored question. Among the proposed approaches, a variety of notable systems. based on Hidden Markov Models (HMMs) have been put forward in the last two decades. The Fac torial HMM model of (Ghahramani & Jordan][1997) (and its infinite extension in (Gael et al.|[2008) use parallel interacting hidden states to model concurrent processes. While there is no explicit han-. dling of different time scales, the model achieves good held-out likelihood on Bach chorales, which. exhibit multi-scale behaviors. The hierarchical HMM model of (Fine et al.1998) and (Murphy & Paskin2001) takes a more direct approach to representing multiple scales of processes. In these. works, the higher level HMM can recursively call sub-HMMs to generate short sequences with-. out changing its state, and the authors show a successful application to modeling cursive writing. Finally, the Switching State-Space Model of (Ghahramani & Hinton20oo) combines HMMs and. Linear Dynamical Systems: in this model, the HMM is used to switch between LDS parameters,. and the experiments show that the HMM learns higher-level, slower dynamics than the LDS..\nOn the side of Recurrent Neural Networks, the idea that the models should have mechanisms tha allow them to handle processes happening at different time scales is not a new one either. On the one. hand, such early works as (Schmidhuber1991) and (Schmidhuber1992) already presented a two level architecture, with an \"automatizer\"' acting on every time step and a \"chunker\"' which should. only be called when the automatizer fails to predict the next item, and which the author hypothesizes. learns to model slower scale processes. On the other hand, the model proposed in (Mozer1993) has. slow-moving units as well as regular ones, where the slowness is defined by a parameter t E [0, 1. deciding how fast the representation changes by taking a convex combination of the previous and. predicted hidden state.\nBoth these notions, along with different approaches to multi-scale sequence modeling, have been developed in more recent work. (Mikolov et al.2014) expand upon the idea of having slow moving. units in an RNN by proposing an extension of the Elman unit which forces parts of the transition. matrix to be close to the identity. The idea of having recurrent layers called at different time steps has also recently regained popularity. The Clockwork RNN of (Koutnik et al.]|2014), for example,. has RNN layers called every 1, 2, 4, 8, etc... time steps. The conditional RNN of (Bojanowski et al.. 
2015) takes another approach by using known temporal structure in the data: in the character level.\nConsider sequential data such as video feeds, audio signal, or language. In video data, there are time. eriods where the frames differ very slightly, and where the underlying model should probably dc. nuch less computation than when the scene completely changes. When modeling speech from ar. udio signal, it is also reasonable to expect that the model should be able do little to no computatior during silences. Finally, in the case of character level language modeling, having more computa. ional power at word boundaries can certainly help: after reading the left context The prime..., the. nodel should be able to put a higher likelihood on the sequence of characters that make up the worc. ninister. However, we can take this idea one step further: after reading The prime min..., the nex. ew characters are almost deterministic, and the model should require little computation to predic. he sequence i-s-t-e-r.\nlevel language modeling application, the first layer is called for every character, while the second is only called once per word. It should also be noted that state-of-the art results for language models have been obtained using multi-layer RNNs (Jozefowicz et al.]2016), where the higher layers can in theory model slower processes. However, introspection in these models is more challenging, and it is difficult to determine whether they are actually exhibiting significant temporal behaviors.\nFinally, even more recent efforts have considered using dynamic time schedules. (Chung et al.l2016 presents a multi-layer LSTM, where each layer decides whether or not to activate the next one at every time step. They show that the model is able to learn sensible time behaviors and achieve good perplexity on their chosen tasks. Another implementation of the general concept of adaptive time- dependent computation is presented in (Graves!2016). In that work, the amount of computation performed at each time step is varied not by calling units in several layers, but rather by having a unique RNN perform more than one update of the hidden state on a single time step. There too, the model can be shown to learn an intuitive time schedule.\nLet us start by formally defining the class of Recurrent Neural Networks (RNNs). For tasks such as language modeling, we are interested in defining a probability distribution over sequences w =- W1 . wt). Using the chain rule. the negative log likelihood of a sequence can be written:\nT L(w) =-)`log(p(wt|F(w1,...,Wt-1) t=1\nwhere F is a filtration, a function which summarizes all the relevant information from the past. RNNs are a class of models that can read sequences of arbitrary length to provide such a summary. in the form of a hidden state ht ~ F(w1, ... , wt-1), by applying the same operation (recurrent unit). at each time step. More specifically, the recurrent unit is defined by a recurrence function g which takes as input the previous hidden state ht-1 at each time step t, as well as a representation of the input xt (where ht-1 and xt are D-dimensional vectors), and (with the convention ho = 0,) outputs the new hidden state:\nElman Unit. The unit described in (Elman![1990) is often considered to be the standard unit. It is parametrized by U and V, which are square, D-dimensional transition matrices, and uses a sigmoid. 
non-linearity to obtain the new hidden state:.\nIn the Elman unit, the bulk of the computation comes from the matrix multiplications, and the cost per time step is O(D2). In the following section, we show a simple modification of the unit which. allows it to reduce this cost significantly.\nGated Recurrent Unit. The Gated Recurrent Unit (GRU) was introduced in (Cho et al.]2014b) The main difference between the GRU and Elman unit consists in the model's ability to interpolate between a proposed new hidden state and the current one, which makes it easier to model longer range dependencies. More specifically, at each time step t, the model computes a reset gate rt, an update gate zt, a proposed new hidden state ht and a final new hidden state ht as follows:\nht = tanh(U(rt O ht-1) + Vxt)\nht =ZtO ht+(1-Zt) O ht-1\nIn this paper, we present an alternative view of adaptive computation, where a single Variable Com putation Unit (VCU) decides dynamically how much of its hidden state needs to change, leading to both savings in the number of operations per time step and the possibility for the higher dimensions of the hidden state to keep longer term memory.\nht=g(ht-1,Xt\nht = tanh(Uht-1+ Vxt)\nrt = o(Urht-1+ Vrxt) Zt = o(Uzht-1+ Vzxt\nht-1 ht+1 Scheduler Scheduler Xt Xt+1\nFigure 1: Two time steps of a VCU. At each step t, the scheduler takes in the current hidden vector ht-1 and input vector xt and decides on a number of dimensions to use d. The unit then uses the first d dimensions of ht-1 and xt to compute the first d elements of the new hidden state ht, and carries the remaining D - d dimensions over from ht-1."}, {"section_index": "4", "section_name": "4 VARIABLE COMPUTATION RNN", "section_text": "As noted in the previous section, the bulk of the computation in the aforementioned settings comes. from the linear layers; a natural option to reduce the number of operations would then be to only. apply the linear transformations to a sub-set of the hidden dimensions. These could in theory cor. respond to any sub-set indices in {1, ..., D}; however, we want a setting where the computationa. cost of the choice is much less than the cost of computing the new hidden state. Thus, we onl consider the sets of first d dimensions of RD, so that there is a single parameter d to compute..\nOur Variable Computation Units (VCUs) implement this idea using two modules: a scheduler de-. cides how many dimensions need to be updated at the current time step, and the VCU performs a partial update of its hidden state accordingly, as illustrated in Figure[1 Section4.1|formally de- scribes the scheduler and partial update operations, and Section|4.2|outlines the procedure to jointly. learn both modules."}, {"section_index": "5", "section_name": "4.1 MODEL DESCRIPTION", "section_text": "Scheduler. The model first needs to decide how much computation is required at the current tim. step. To make that decision, the recurrent unit has access to the current hidden state and input this way, the model can learn to ignore an uninformative input, or to decide on more computatior when an it is unexpected given the current hidden state. The scheduler is then defined as a functior m : R2D -> [0, 1] which decides what portion of the hidden state to change based on the curren hidden and input vectors. In this work, we decide to implement it as a simple log-linear functior with parameter vectors u and v, and bias b, and at each time step t, we have:.\nPartial update. 
Once the scheduler has decided on a computation budget mt, the VCU needs t perform a partial update of the first [mD] dimensions of its hidden state. Recall the hidden stat ht-1 is a D-dimensional vector. Given a smaller dimension d E {1, ..., D}, a partial update of th hidden state would take the following form. Let ga be the d-dimensional version of the model' recurrence function g as defined in Equation|2] which uses the upper left d by d square sub-matrice of the linear transformations (Ua, Vd, ...), and hg-1 and x denote the first d elements of ht-1 anc xt. We apply ga to ht-1 and x, and carry dimensions d + 1 to D from the previous hidden state so the new hidden state ht is defined by:\nand Vi>d, hti=ht-1,i\nmt=(uht-1+vxt+b)\nVi E 1,..., D, (et) = Threse (o((mtD-i)))"}, {"section_index": "6", "section_name": "4.2 LEARNING", "section_text": "Since the soft mask et is a continuous function of the model parameters, the scheduler can be learne through back-propagation. However, we have found that the naive approach of using a fixed sharp ness parameter and simply minimizing the negative log-likelihood defined in Equation|1 led to th model being stuck in a local optimum which updates all dimensions at every step. We found that th following two modifications allowed the model to learn better parametrizations.\nO(w,U,V,O,u,v,b) = L(w,U,V,O,u,v,b) +Q(m,m)\nSecondly, for the model to be able to explore the effect of using fewer or more dimensions, we neec to start training with a smooth mask (small parameter), since for small values of A, the mode actually uses the whole hidden state. We can then gradually increase the sharpness parameter unti the model truly does a partial update."}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "We ran experiments with the Variable Computation variants of the Elman and Gated Recurrent Units (VCRNN and VCGRU respectively) on several sequence modeling tasks. All experiments were run using a symmetrical l1 penalty on the scheduler m, that is, penalizing mt when it is greater or smaller than target m, with m taking various values in the range [0.2, 0.5]. In all experiments, we start with a sharpness parameter = 0.1, and increase it by 0.1 per epoch to a maximum value of 1.\nIn each of our experiments, we are interested in investigating two specific aspects of our model On the one hand, do the time patterns that emerge agree with our intuition of the time dynamic. expressed in the data? On the other hand, does the Variable Computation Unit (VCU) yield a goo predictive model? More specifically, does it lead to lower perplexity than a constant computatior counterpart which performs as many or more operations? In order to be able to properly asses. the efficiency of the model, and since we do not know a priori how much computation the VCt uses, we always report the \"equivalent RNN\" dimension (noted as RNN-d in Table[3) along with the performance on test data, i.e. the dimension of an Elman RNN that would have performed the same amount of computation. Note that the computational complexity gains we refer to are exclusively ir terms of lowering the number of operations, which does not necessarily correlate with a speed up o training when using general purpose GPU kernels; it is however a prerequisite to achieving such a speed up with the proper implementation, motivating our effort.\nWe answer both of these questions on the tasks of music modeling, bit and character level language. 
"}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "We ran experiments with the Variable Computation variants of the Elman and Gated Recurrent Units (VCRNN and VCGRU respectively) on several sequence modeling tasks. All experiments were run using a symmetrical l1 penalty on the scheduler m, that is, penalizing m_t when it is greater or smaller than the target m̄, with m̄ taking various values in the range [0.2, 0.5]. In all experiments, we start with a sharpness parameter λ = 0.1, and increase it by 0.1 per epoch to a maximum value of 1.
In each of our experiments, we are interested in investigating two specific aspects of our model. On the one hand, do the time patterns that emerge agree with our intuition of the time dynamics expressed in the data? On the other hand, does the Variable Computation Unit (VCU) yield a good predictive model? More specifically, does it lead to lower perplexity than a constant computation counterpart which performs as many or more operations? In order to be able to properly assess the efficiency of the model, and since we do not know a priori how much computation the VCU uses, we always report the "equivalent RNN" dimension (noted as RNN-d in Table 3) along with the performance on test data, i.e. the dimension of an Elman RNN that would have performed the same amount of computation. Note that the computational complexity gains we refer to are exclusively in terms of lowering the number of operations, which does not necessarily correlate with a speed-up of training when using general purpose GPU kernels; it is however a prerequisite to achieving such a speed-up with the proper implementation, motivating our effort.
We answer both of these questions on the tasks of music modeling, bit and character level language modeling on the Penn Treebank text, and character level language modeling on the Text8 data set, as well as on two languages from the Europarl corpus."}, {"section_index": "8", "section_name": "5.1 MUSIC MODELING", "section_text": "We downloaded a corpus of Irish traditional tunes from https://thesession.org and split them into a training, validation and test set of 16,000 (2.4M tokens), 1,511 (227,000 tokens) and 2,000 (288,000 tokens) melodies respectively. Each subset includes variations of melodies, but no melody has variations across subsets. We consider each (pitch, length) pair to be a different symbol; with rests and bar symbols, this comes to a total vocabulary of 730 symbols.
Table 1 compares the perplexity on the test set to Elman RNNs with equivalent computational costs: a VCRNN with hidden dimension 500 achieves better perplexity with fewer operations than an RNN with dimension 250.

unit type     equivalent RNN   perplexity
RNN-200       200              9.13
RNN-250       250              8.70
VCRNN-500     233              8.51

Table 1: Music modeling, test set perplexity on a corpus of traditional Irish tunes. Our model manages to achieve better perplexity with less computation than the Elman RNN.
Looking at the output of the scheduler on the validation set also reveals some interesting patterns. First, bar symbols are mostly ignored: the average value of m_t on bar symbols is 0.14, as opposed to 0.46 on all others. This is not surprising: our pre-processing does not handle polyphony or time signatures, so bars end up having different lengths. The best thing for the model to do is then just to ignore them and focus on the melody. Similarly, the model spends less computation on rests (0.34 average m_t), and pays less attention to repeated notes (0.51 average m_t on the first note of a repetition, 0.45 on the second).
We also notice that the model needs to do more computation on fast passages, which often have richer ornamentation, as illustrated in Table 2. While it is difficult to think a priori of all the sorts of behaviors that could be of interest, these initial results certainly show a sensible behavior of the scheduler on the music modeling task.

note length   0.25   1/3    0.5    0.75   1      1.5    2
average m_t   0.61   0.77   0.39   0.59   0.44   0.46   0.57

Table 2: Average amount of computation (m_t) for various note lengths. More effort is required for the faster passages with 16th notes and triplets.
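The per-token statistics in Table 2 can be reproduced with a simple aggregation of recorded scheduler outputs; the sketch below assumes one recorded m_t per input symbol and is our own illustration rather than part of the model.

from collections import defaultdict
import numpy as np

def average_m_by_token(tokens, ms):
    # tokens: sequence of input symbols (e.g. (pitch, length) pairs);
    # ms: the scheduler output m_t recorded at each position.
    buckets = defaultdict(list)
    for tok, m in zip(tokens, ms):
        buckets[tok].append(m)
    return {tok: float(np.mean(vals)) for tok, vals in buckets.items()}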
We also chose to apply our model to the tasks of bit level and character level language modeling. Those appeared as good applications since we know a priori what kind of temporal structure to look for: ASCII encoding means that we expect a significant change (change of character) every 8 bits in bit level modeling, and we believe the structure of word units to be useful when modeling text at the character level."}, {"section_index": "9", "section_name": "5.2.1 PENN TREEBANK AND TEXT8", "section_text": "We first ran experiments on two English language modeling tasks, using the Penn TreeBank and Text8 data sets. We chose the former as it is a well studied corpus, and one of the few corpora for which people have reported bit-level language modeling results. It is however quite small for our purposes, with under 6M characters, which motivated us to apply our models to the larger Text8 data set (100M characters). Table 3 shows bit per bit and bit per character results for bit and character level language modeling. We compare our results with those obtained with standard Elman RNN, GRU, and LSTM networks, as well as with the Conditional RNN of Bojanowski et al. (2015).
Quantitative Results. We first compare the VCRNN to the regular Elman RNN, as well as to the Conditional RNN of Bojanowski et al. (2015), which combines two layers running at bit and character level for bit level modeling, or character and word level for character level modeling. For bit level language modeling, the VCRNN not only performs fewer operations than the standard unit, it also achieves better performance. For character level modeling, the Elman model using a hidden dimension of 1024 achieved 1.47 bits per character, while our best performing VCRNN does slightly better while only requiring as much computation as a dimension 760 Elman unit. While we do slightly more computation than the Conditional RNN, it should be noted that our model is not explicitly given word-level information: it learns how to summarize it from character-level input.
The comparison between the constant computation and Variable Computation GRU (VCGRU) follows the same pattern, both on the PTB and Text8 corpora. On PTB, the VCGRU with the best validation perplexity performs as well as a GRU (and LSTM) of the same dimension with less than half the number of operations. On Text8, the VCGRU models with various values of the target m̄ always achieve better perplexity than other models performing similar or greater numbers of operations. It should be noted that none of the models we ran on Text8 overfits significantly (the training and validation perplexities are the same), which would indicate that the gain is not solely a matter of regularization.
Figure 2: Top: Per-bit computation by VCRNN, higher dimensions (950 to 1000). Middle: adding 8 bits of buffer between every character. Bottom: adding 24 bits of buffer between each character.
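The buffered variants of the bit-level corpus shown in Figure 2 can be built as below; the eight-bits-per-character encoding and the zero-bit padding widths follow the text, while the helper name and details are our own.

def to_bits(text, buffer_bits=0):
    # Encode each character as 8 bits, optionally followed by a run of
    # zero "buffer" bits (8 or 24 in the experiments of Figure 2).
    bits = []
    for ch in text:
        bits.extend(int(b) for b in format(ord(ch) % 256, '08b'))
        bits.extend([0] * buffer_bits)
    return bits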
Character level PTB
unit type      RNN-d   bpc
GRU-1024       1450    1.42
LSTM-1024      2048    1.42
RNN-1024       1024    1.47
CRNN-500       700     1.46
VCRNN-1024     760     1.46
RNN-760        760     1.47
LSTM-380       760     1.44
GRU-538        760     1.43
VCGRU-1024     648     1.42
LSTM-324       648     1.46
GRU-458        648     1.47

Bit level PTB
unit type      RNN-d   bpb
RNN-100        100     0.287
RNN-500        500     0.227
RNN-1000       1000    0.223
CRNN-100       140     0.222
VCRNN-1000     340     0.231
VCRNN-1000     460     0.215

Character level Text8
unit type      m̄      RNN-d   bpc
RNN-512*       -       512     1.80
RNN-1024*      -       1024    1.69
LSTM-512*      -       1024    1.65
LSTM-1024*     -       2048    1.52
RNN-512        -       512     1.80
GRU-512        -       725     1.69
GRU-1024       -       1450    1.58
VCGRU-1024     0.3     464     1.69
VCGRU-1024     0.4     648     1.64
VCGRU-1024     0.5     820     1.63

Table 3: Left: Bits per character for character level language modeling on Penn TreeBank. CRNN refers to the Conditional RNN from Bojanowski et al. (2015). Middle: Bits per bit for bit level language modeling on Penn TreeBank. Right: Bits per character for character level language modeling on Text8. *From Zhang et al. (2016).
Figure 3: Per-character computation by VCRNN. Top: English. Middle: Czech. Bottom: German. All languages learn to make use of word units.
Bit Level Scheduler. The scheduler in the bit level language model manages to learn the structure of ASCII encoding: Figure 2 shows that the higher dimensions are modified roughly every 8 bits. We also created some artificial data by taking the PTB text and adding 8 or 24 0 bits between each character. Figure 2 shows that the model learns to mostly ignore these "buffers", doing most of its computation on actual characters.
Character Level Scheduler. On character level language modeling, the scheduler learns to make use of word boundaries and some language structures. Figure 3 shows that the higher dimensions are used about once per word, and in some cases we even observe a spike at the end of each morpheme (long-stand-ing, as shown in Figure 5). While we provide results for the VCRNN specifically in this section, the VCGRU scheduler follows the same patterns."}, {"section_index": "10", "section_name": "5.2.2 EUROPARL CZECH AND GERMAN", "section_text": "We also ran our model on two languages from the Europarl corpus. We chose Czech, which has a larger alphabet than other languages in the corpus, and German, which is a language that features long composite words without white spaces to indicate a new unit. Both are made up of about 20M characters. We tried two settings. In the "guide" setting, we use the penalty on m_t to encourage the model to use more dimensions on white spaces. The "learn" setting is fully unsupervised, and encourages lower values of m_t across the board.
Figure 4: Bits per character for different computational loads on the Europarl Czech (left) and German (right) datasets. The VCRNN, whether guided to use boundaries or fully unsupervised, achieves better held-out log-likelihood more efficiently than the standard RNN.
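One way to realize the "guide" setting is to make the penalty target position-dependent, pushing m_t up on white spaces; the following is a sketch of that idea under our own parametrization, not the exact form used in the experiments.

import numpy as np

def guided_penalty(ms, chars, m_low=0.2, m_high=0.8, weight=1.0):
    # Encourage small m_t on ordinary characters and large m_t on the
    # white spaces that mark word boundaries.
    ms = np.asarray(ms)
    targets = np.array([m_high if c == ' ' else m_low for c in chars])
    return weight * np.abs(ms - targets).sum()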
Figure 5: Per-character computation by VCRNN. The model appears to make use of morphology, separating sub-word units.
Figure 4 shows that both settings perform similarly on the Czech dataset, achieving better performance more efficiently than the standard RNN. On German, the guided setting remains slightly more efficient than the fully learned one, but both are more efficient than the RNN and achieve the same performance when using more dimensions. Both learn to use more dimensions at word boundaries, as shown in Figure 3. The German model also appears to be learning interesting morphology (Luft-ver-kehrs, eben-falls in Figure 3; An-satz, Um-welt-freund-lich in Figure 5), and grammar (focusing on case markers at the end of articles, Figure 5).
In this work, we have presented two kinds of Variable Computation recurrent units: the VCRNN and VCGRU, which modify the Elman and Gated Recurrent Unit respectively to allow the model to achieve better performance with fewer operations, and which can be shown to find time patterns of interest in sequential data. We hope that these encouraging results will open up paths for further exploration of adaptive computation paradigms in neural networks in general, which could lead to more computation-efficient models, better able to deal with varying information flow or multi-scale processes. We also see a few immediate possibilities for extensions of this specific model. For example, the same idea of adaptive computation can similarly be applied to other commonly used recurrent units, such as LSTMs, or to work within the different layers of a stacked architecture, and we are working on adapting our implementation to those settings. We also hope to investigate the benefits of using stronger supervision signals to train the scheduler, such as the entropy of the prediction, to hopefully push our current results even further."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
Piotr Bojanowski, Armand Joulin, and Tomas Mikolov. Alternative structures for character-level RNNs. CoRR, abs/1511.06303, 2015.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016.
Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
Shai Fine, Yoram Singer, and Naftali Tishby. The hierarchical hidden Markov model: Analysis and applications. Machine Learning, 32(1):41-62, 1998.
Jurgen Van Gael, Yee Whye Teh, and Zoubin Ghahramani. The infinite factorial hidden Markov model. In Advances in Neural Information Processing Systems 21, Vancouver, British Columbia, Canada, December 8-11, 2008, pp. 1697-1704, 2008.
Zoubin Ghahramani and Geoffrey E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):831-864, 2000.
Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 1764-1772, 2014.
Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling.
CoRR, abs/1602.02410, 2016.
Jan Koutnik, Klaus Greff, Faustino J. Gomez, and Jurgen Schmidhuber. A Clockwork RNN. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 1863-1871, 2014.
Tomas Mikolov. Statistical language models based on neural networks. Ph.D. thesis, Brno University of Technology, 2012.
Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato. Learning longer memory in recurrent neural networks. CoRR, abs/1412.7753, 2014.
Tony Robinson, Luis B. Almeida, Jean-Marc Boite, Herve Bourlard, Frank Fallside, Mike Hochberg, Dan J. Kershaw, Phil Kohn, Yochai Konig, Nelson Morgan, Joao Paulo Neto, Steve Renals, Marc Saerens, and Chuck Wooters. A neural network based, speaker independent, large vocabulary continuous speech recognition system: the WERNICKE project. In Third European Conference on Speech Communication and Technology, EUROSPEECH 1993, Berlin, Germany, September 22-25, 1993, 1993.
Jurgen Schmidhuber. Neural sequence chunkers. Technical report, 1991.
Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov, and Yoshua Bengio. Architectural complexity measures of recurrent neural networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems, 2016."}, {"section_index": "12", "section_name": "A APPENDIX", "section_text": "We apply the method outlined in the previous paragraph to two commonly used architectures. Recall that, given a proportion of dimensions to use m_t ∈ [0, 1] and a sharpness parameter λ, the gating vector e_t ∈ R^D is defined as:

∀i ∈ {1, ..., D},  (e_t)_i = Thres_ε(σ(λ(m_t D - i)))

First, we derive a variable computation version of the Elman RNN to get the Variable Computation Recurrent Neural Network (VCRNN) by transforming Equation 3 as follows:

h̄_{t-1} = e_t ⊙ h_{t-1}   and   x̄_t = e_t ⊙ x_t

h_t = e_t ⊙ g(h̄_{t-1}, x̄_t) + (1 - e_t) ⊙ h_{t-1}

Secondly, we obtain the Variable Computation Gated Recurrent Unit (VCGRU) by deriving the variable computation version of the GRU architecture. This is achieved by modifying Equations 4 to 6 as follows:

r_t = σ(U_r h̄_{t-1} + V_r x̄_t),   z_t = e_t ⊙ σ(U_z h̄_{t-1} + V_z x̄_t)

h̃_t = tanh(U(r_t ⊙ h̄_{t-1}) + V x̄_t)

h_t = z_t ⊙ h̃_t + (1 - z_t) ⊙ h_{t-1}
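The appendix equations translate directly into code. As before, this is a NumPy sketch with full matrix products standing in for the sliced, cheaper computation, and with the input assumed to be at most D-dimensional.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def vcgru_step(h_prev, x, Ur, Vr, Uz, Vz, U, V, u, v, b, lam, eps=0.02):
    D = h_prev.shape[0]
    m = sigmoid(np.dot(u, h_prev) + np.dot(v, x) + b)    # scheduler
    i = np.arange(1, D + 1)
    e = sigmoid(lam * (m * D - i))
    e = np.where(e > 1.0 - eps, 1.0, np.where(e < eps, 0.0, e))
    h_bar, x_bar = e * h_prev, e[: x.shape[0]] * x       # masked state/input
    r = sigmoid(Ur.dot(h_bar) + Vr.dot(x_bar))           # reset gate
    z = e * sigmoid(Uz.dot(h_bar) + Vz.dot(x_bar))       # masked update gate
    h_tilde = np.tanh(U.dot(r * h_bar) + V.dot(x_bar))   # candidate state
    return z * h_tilde + (1.0 - z) * h_prev              # final state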
r1aGWUqgg
[{"section_index": "0", "section_name": "UNSUPERVISED LEARNING OF STATE REPRESENTATIONS FOR MULTIPLE TASKS", "section_text": "Antonin Raffin\nEcole Nationale Superieure de Techniques Avancees (ENSTA-ParisTech), Paris, France antonin.raffin@ensta- -paristech.fr\nSebastian Hoferl. Rico Jonschkowski & Oliver Brock\nRobotics and Biology Laboratory, Technische Universitat Berlin, Germany. {sebastian.hoefer,rico.ionschkowski,oliver.brock}@tu-be\nRobotics and Biology Laboratory, Technische Universitat Berlin, Germany\nRobotics and Mechatronics Center, German Aerospace Center (DLR), Wessling, Germany freek.stulp@dlr.de"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "n many reinforcement learning problems, the agent has. o solve a variety of different tasks to fulfill its overall. goal. A common approach to this problem is to learn a. ingle policy for the whole problem, and leave the de-. composition of the problem into subtasks to the learner. In many cases, this approach is successful (Mnih et al.. 2015, Zahavy et al.2016), but it comes at the expense. Figl f requiring large amounts of training data. Alternatively,. has nultiple policies dedicated to different subtasks can be. as fa earned. This, however, requires prior knowledge about. obse now the overal problem decomposes into subtasks. More-. over, it can run into the same issue of requiring large amounts overlap and thus afford shared computation to solve them..\nA common approach to address overlapping problems is multi-task learning (Caruana 1997): b learning a single policy with different subgoals, knowledge between the different tasks can be trans ferred. This not only allows to learn a compact representation more efficiently, but also improve the agent's performance on all the individual subtasks (Rusu et al.]2016)..\nMulti-task learning, however, faces two problems: it requires the decomposition of the overall prob- lem into subtasks to be given. Moreover, it is not applicable if the subtasks are unrelated, and are better solved without sharing computation. In this case, the single-policy approach results in an agent that does not perform well on any of the individual tasks (Stulp et al.][2014) or that unlearns\n1 The first two authors contributed equally to this work\nWe present an approach for learning state representations in multi-task reinforce- ment learning. Our method learns multiple low-dimensional state representations from raw observations in an unsupervised fashion, without any knowledge of which task is executed, nor of the number of tasks involved. The method is based on a gated neural network architecture, trained with an extension of the learning with robotic priors objective. In simulated experiments, we show that our method is able to learn better state representations for reinforcement learning, and we an- alyze why and when it manages to do so.\nFigure 1: Slot car racing - the agent nas learn how to drive any of the cars s far as possible (left), based on its raw observations (right).\nIn this work, we address the problem of identifying and isolating individual unrelated subtasks. and learning multiple separate policies in an unsupervised way. To that end, we present MT-LRP an algorithm for learning state representations for multiple tasks by learning with robotic priors MT-LRP is able to acquire different low-dimensional state representations for multiple tasks ir. an unsupervised fashion. 
Importantly, MT-LRP does not require knowledge about which task is executed at a given time or about the number of tasks involved. The representations learned with MT-LRP enable the use of standard reinforcement learning methods to compute effective policies from few data.
As explained before, our approach is orthogonal to the classical multi-task learning approach, and constitutes a problem in its own right due to the issues of underperformance and catastrophic forgetting. Therefore, we disregard the shared-knowledge problem in this paper. However, any complete reinforcement learning system will need to combine both flavors of multi-task learning, for related and unrelated tasks, and future work will have to address the two problems together.
MT-LRP is implemented as two neural networks, coupled by a gating mechanism (Sigaud et al., 2015; Droniou et al., 2015) as illustrated in Figure 2. The first network, χ, detects which task is being executed and selects the corresponding state representation. The second network, φ, learns task-specific state representations. The networks are trained simultaneously using the robotic priors learning objective (Jonschkowski & Brock, 2015), exploiting physics-based prior knowledge about how states, actions, and rewards relate to each other. Both networks learn from raw sensor data, without supervision and solely based on the robot's experiences.
Figure 2: Overview of the gated network for state representation learning for multiple tasks.
In a simulated experimental scenario, we show that MT-LRP is able to learn multiple state representations and task detectors from raw observations, and that these representations allow to learn better policies from fewer data when compared with other methods. Moreover, we analyze the contribution of each of the method's individual components to this result.
MT-LRP combines three ideas into a novel approach for task discovery and state representation learning: 1) extracting state representations for each task with robotic priors (Jonschkowski & Brock, 2015); 2) discovering discrete tasks and corresponding actions/policies in an RL context (Stulp et al., 2014; Hofer & Brock, 2016); 3) using gated networks to implement a "mixture of experts" (Jacobs et al., 1991; Droniou et al., 2015).
State Representation Learning: Learning from raw observations is considered a holy grail in reinforcement learning (RL). Deep RL has had major success in this, using model-free methods (Mnih et al., 2015), but also by combining model-free and model-based RL (Levine et al., 2015). These approaches apply end-to-end learning to get from raw input to value functions and policies. A different approach is to explicitly learn state representations using unsupervised learning, e.g. using auto-encoders (Lange et al., 2012). Recently, Watter et al. (2015) extended this idea to learn state representations jointly with dynamic models and apply optimal control to compute a policy. We use learning with robotic priors (Jonschkowski & Brock, 2015), a state representation learning method that exploits information about temporal structure, actions, and rewards. We go beyond previous work by not only learning single state representations, but learning multiple state representations given raw data from multiple tasks.
Options and Parameterized Skills: A common approach to factorizing an RL problem into subtasks
are macro-actions, often called options (Sutton et al., 1999; Hengst, 2002). The main difference with our approach is that options are used to hierarchically decompose one high-level task into subtasks (and learn sub-policies for these subtasks), whereas we learn task-specific state representations for different high-level tasks. However, options bear resemblance on a technical level, since they are often implemented by a high-level "selection" policy that parametrizes low-level policies (Daniel et al., 2012; Kupcsik et al., 2013; Stulp et al., 2014). Continuous versions of options, referred to as parametrized skills, have been proposed, too (Da Silva et al., 2012; Deisenroth et al., 2014; Doshi-Velez & Konidaris, 2016). However, in all the work above, the state representation is given. To the best of our knowledge, state representation learning has not yet been considered in the context of RL with options or parameterized skills.
Gated Networks for Mixtures of Experts and Submanifold Learning: Gated networks are networks that contain gating connections, in which the outputs of at least two neurons are multiplied (Sigaud et al., 2015). This allows a gating neuron g to prohibit (or limit) the flow of information from one neuron x to another neuron y, similar to how transistors function. An early example of gated networks is the mixture of experts approach (Jacobs et al., 1991; Jacobs & Jordan, 1993; Haruno et al., 2001), where separate networks in a modular neural network specialize in predicting subsets of training examples from a database. Our contribution is to extend mixtures of experts by state representation learning (e.g. from raw images) and to the more difficult RL (rather than supervised learning) context. Our gated network architecture is similar to the one proposed by Droniou et al. (2015). Their network simultaneously learns discrete classes jointly with continuous class variations (called submanifolds) in an unsupervised way, e.g., discrete digit classes and shape variations within each class. We use a similar architecture, but in a different way: rather than learning discrete classes, we learn discrete tasks; class-specific submanifolds correspond to task-specific state representations; and finally, we consider an RL rather than an unsupervised learning context.
As mentioned in the introduction, our work is orthogonal to multi-task learning (Caruana, 1997), which has been extensively studied in recent reinforcement learning literature, too (Parisotto et al., 2016). Our approach can be trivially combined with multi-task learning by prepending the gate and state extraction modules with a subnetwork that shares knowledge across tasks. Another interesting multi-task approach is policy distillation (Rusu et al., 2016). This method combines different policies for multiple tasks into a single network, which enables to share information between tasks and to learn a compact network that can even outperform the individual policies.
We formulate MT-LRP in a reinforcement learning (RL) setting using a Markov decision process (MDP) (S, A, T, R, γ): based on the current state s ∈ S, the agent chooses and executes an action a ∈ A, obtains a new state s' ∈ S (according to the transition function T) and collects a reward r ∈ R. The agent's goal is to learn a policy π : S -> A that maximizes the expected return E(Σ_{t=0} γ^t r_t), with r_t being the reward collected at time t and 0 < γ ≤ 1 the discount factor.
We consider an episodic setting with episodes of finite length, a continuous state space S and a discrete action space A.
In this work, we assume that the agent cannot directly observe the state s but only has access to observations o ∈ O, which are usually high-dimensional and contain task-irrelevant distractors. This requires us to extract the state from the observations by learning an observation-state-mapping φ : O -> S, and use the resulting state representation S to solve the RL problem (assuming that a Markov state can be extracted from a single observation). To learn the state representation, we apply learning with robotic priors (Jonschkowski & Brock (2015), from now on referred to as LRP). This method learns φ from a set of temporally ordered experiences D = {(o_t, a_t, r_t)}_{t=1}^{d} by optimizing the following loss:

L_RP(D, φ) = ω_t L_temp.(D, φ) + ω_p L_prop.(D, φ) + ω_c L_caus.(D, φ) + ω_r L_rep.(D, φ)    (1)

This loss consists of four terms, each expressing a different prior about suitable state representations for robot RL. We optimize it using gradient descent, assuming φ to be differentiable. We now explain the four robotic prior loss terms in Eq. (1).
Temporal Coherence enforces states to change gradually over time (Wiskott & Sejnowski, 2002):

L_temp.(D, φ) = E[ ||Δs_t||² ]

where Δs_t = s_{t+1} - s_t denotes the state change. (To increase readability we replace φ(o) by s.) Proportionality expresses the prior that the same action should change the state by the same magnitude, irrespective of time and the location in the state space:

L_prop.(D, φ) = E[ (||Δs_{t2}|| - ||Δs_{t1}||)² | a_{t1} = a_{t2} ]

Causality enforces two states s_{t1}, s_{t2} to be dissimilar if executing the same action in s_{t1} generates a different reward than in s_{t2}:

L_caus.(D, φ) = E[ e^{-||s_{t2} - s_{t1}||²} | a_{t1} = a_{t2}, r_{t1+1} ≠ r_{t2+1} ]

Repeatability enforces that the same action, executed in similar states, changes the states in a similar way:

L_rep.(D, φ) = E[ e^{-||s_{t2} - s_{t1}||²} ||Δs_{t2} - Δs_{t1}||² | a_{t1} = a_{t2} ]

Additionally, the method enforces simplicity by requiring s to be low-dimensional.
Note that learning with robotic priors only makes use of the actions a, rewards r, and temporal information t during optimization, but not at test time for computing φ(o) = s. Using a, r and t in this way is an instance of the learning with side information paradigm (Jonschkowski et al., 2015).
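The four loss terms transcribe directly into NumPy, estimating the expectations over a sampled set of index pairs; the batching scheme and function names below are our own choices, and the default weights are the values used in the experiments of Section 5.

import numpy as np

def robotic_priors_loss(s, a, r, pairs, w_t=1.0, w_p=5.0, w_c=1.0, w_r=5.0):
    # s: (T, M) states phi(o_t); a: (T,) actions; r: (T,) rewards;
    # pairs: sampled (t1, t2) index pairs with t1 + 1 and t2 + 1 valid.
    ds = s[1:] - s[:-1]
    L_temp = np.mean(np.sum(ds ** 2, axis=1))
    L_prop, L_caus, L_rep = [], [], []
    for t1, t2 in pairs:
        if a[t1] != a[t2]:
            continue
        sim = np.exp(-np.sum((s[t2] - s[t1]) ** 2))
        L_prop.append((np.linalg.norm(ds[t2]) - np.linalg.norm(ds[t1])) ** 2)
        L_rep.append(sim * np.sum((ds[t2] - ds[t1]) ** 2))
        if r[t1 + 1] != r[t2 + 1]:
            L_caus.append(sim)
    mean = lambda xs: float(np.mean(xs)) if xs else 0.0
    return (w_t * L_temp + w_p * mean(L_prop)
            + w_c * mean(L_caus) + w_r * mean(L_rep))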
"}, {"section_index": "2", "section_name": "4 MULTI-TASK STATE REPRESENTATIONS: MT-LRP", "section_text": "Now consider a scenario in which an agent is learning multiple distinct tasks. For each task τ ∈ {1, ..., T}, the agent now requires a task-specific policy π_τ : S -> A. We approach the problem by learning a task-specific state representation φ_τ : O -> S for each policy, and a task detector χ which determines the task, given the current observation. We will consider a probabilistic task detector χ : O -> [0, 1]^T that assigns a probability to each task being active.
In order to solve the full multi-task RL problem, we must learn χ, {φ_τ}_{τ∈{1,...,T}} and {π_τ}_{τ∈{1,...,T}}. We propose to address this problem by MT-LRP, a method that jointly learns χ and {φ_τ} from raw observations, actions, and rewards. MT-LRP then uses the state representations {φ_τ} to learn task-specific policies {π_τ} (using standard RL methods), and switches between them using the task detector χ. To solve the joint learning problem, MT-LRP generalizes LRP (Jonschkowski & Brock, 2015) in the following regards: (i) we replace the linear observation-state-mapping from the original method with a gated neural network, where the gates act as task detectors that switch between different task-specific observation-state-mappings; (ii) we extend the list of robotic priors by the prior of task coherence, which allows us to train multiple task-specific state representations without any specification (or labels) of tasks and states."}, {"section_index": "3", "section_name": "4.1 GATED NEURAL NETWORK ARCHITECTURE", "section_text": "We use a gated neural network architecture as shown schematically in Fig. 2. The key idea is that both the task detector χ as well as the state representation φ are computed from raw inputs. However, the output of the task detector gates the output of the state representation. Effectively, this means the output of χ(o) decides which task-specific state representation φ_τ is passed further to the policy, which is also gated by the output of χ(o).
Formally, χ(o) = σ(χ_pre(o)) is composed of a function χ_pre with T-dimensional output and a softmax σ(z)_j = e^{z_j} / Σ_k e^{z_k}. The softmax ensures that χ computes a proper probability distribution over tasks. The probabilities are then used to gate φ. To do this, we decompose φ into a pre-gating function φ_pre that extracts features shared across all tasks (i.e. "multi-task" in the sense of Caruana (1997), unless set to the identity), and a T × M × N gating tensor G that encodes the T (linear) observation-state mappings (M = dim(s) and N is the output dimension of φ_pre). The value of the state's i-th dimension s_i computes as the expectation of the dot product of gating tensor and φ_pre(o) over the task probabilities χ(o):

s_i = φ_i(o) = Σ_{k=1}^{T} χ_k(o) ⟨G_{k,i,:}, φ_pre(o)⟩

We train the gated network with the robotic priors loss from Eq. (1), extended by a task-coherence term:

L(D, φ, χ) = L_RP(D, φ) + ω_τ L_τ(D, χ)

where ω_τ is a scalar weight balancing the influence of the additional loss term. Task coherence is the assumption that a task only changes between training episodes, not within the same episode. It does not presuppose any knowledge about the number of tasks or the task presented in an episode, but it exploits the fact that task switching weakly correlates with training episodes. Moreover, this assumption only needs to hold during training: since χ operates directly on the observation o, it can in principle switch the task at every point in time during execution. Task-coherence applies directly to the output of the task detector, χ(o), and consists of two terms:

L_τ = L_con + L_sep

The first term expresses task consistency:

L_con = E[ H(χ(o_{t1}), χ(o_{t2})) | episode_{t1} = episode_{t2} ]

where H denotes the cross-entropy H(p, q) = -Σ_x p(x) log q(x). It can be viewed as a measure of dissimilarity between probability distributions p and q. We use it to penalize χ if it assigns different task distributions to inputs o_{t1}, o_{t2} that belong to the same episode. Note that task-consistency can be viewed as a temporal coherence prior on the task level (Wiskott & Sejnowski, 2002).
The second term expresses task separation and encourages χ to assign tasks to different episodes:

L_sep = E[ -H(χ(o_{t1}), χ(o_{t2})) | episode_{t1} ≠ episode_{t2} ]

This loss is complementary to task consistency, as it penalizes χ if it assigns similar task distributions to o_{t1}, o_{t2} from different episodes. Note that L_sep will in general not become zero. The reason is that the number of episodes usually exceeds the number of tasks, and therefore two observations from different episodes sometimes do belong to the same task. We will evaluate the contribution of each of the two terms to learning success in Section 5.2.
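The gated mapping and the two task-coherence terms can be sketched in a few lines of NumPy; χ_pre is taken to be linear here purely for brevity, and all names are illustrative rather than part of the method's specification.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def gated_state(o, W_chi, G, phi_pre=lambda o: o):
    # chi(o) = softmax(chi_pre(o));  s_i = sum_k chi_k(o) <G[k, i, :], phi_pre(o)>
    feats = phi_pre(o)                  # (N,) shared features
    chi = softmax(W_chi.dot(feats))     # (T,) task probabilities
    per_task = G.dot(feats)             # (T, M) task-specific states
    return chi.dot(per_task), chi       # (M,) expected state

def cross_entropy(p, q, eps=1e-8):
    return -np.sum(p * np.log(q + eps))

def task_coherence(chi_pairs_same, chi_pairs_diff):
    # L_con penalizes disagreement within an episode; L_sep rewards
    # disagreement across episodes (hence the minus sign).
    L_con = np.mean([cross_entropy(p, q) for p, q in chi_pairs_same])
    L_sep = np.mean([-cross_entropy(p, q) for p, q in chi_pairs_diff])
    return L_con + L_sep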
"}, {"section_index": "4", "section_name": "5 EXPERIMENTS", "section_text": "We evaluate MT-LRP in two scenarios. In the multi-task slot-car racing scenario (inspired by Lange et al. (2012)), we apply MT-LRP to a linearly solvable problem, allowing us to easily inspect what and how MT-LRP learns. In slot-car racing, the agent controls one of multiple cars (Figure 1) with the goal of traversing the circuit as fast as possible without leaving the track due to speeding in curves. However, the agent does not know a priori which car it controls, and only receives the raw visual signal as input. Additionally, uncontrolled cars, driving at random velocity, act as visual distractors. We turn this scenario into a multi-task problem in which the agent must learn to control each car, where controlling the different cars corresponds to separate tasks. We will now provide the technical details of our experimental set-up."}, {"section_index": "5", "section_name": "5.1 EXPERIMENTAL SET-UP: SLOT-CAR RACING", "section_text": "The agent controls the velocity of one car (see Fig. 1), receives a reward proportional to the car's velocity, chosen from [0.01, 0.02, ..., 0.1], and a negative reward of -10 if the car goes too fast in curves. The velocity is subject to Gaussian noise (zero mean, standard deviation 10% of the commanded velocity). All cars move on independent lanes and do not influence each other. The agent observes the scenario by getting a downscaled 16x16 RGB top-down view (dimension N = 16 · 16 · 3 = 768) of the car circuit (Fig. 1(b)).
In our experiments, there are two or three cars on the track, and the agent controls a different one in every episode. To recognize the task, the agent must be able to extract a visual cue from the observation which correlates with the task. We study two types of visual cues. Static Visual Cue: the arrangement of cars stays the same in all episodes and a static visual cue (a picture of the controlled car) in the top-left image corner indicates which car is currently controlled. Dynamic Visual Cue: the agent always controls the same car (with a certain color), but in each task the car is located on a different lane (as in Fig. 1(b)).
Figure 3: Reinforcement learning curves (mean and standard error) for different state representations for the two-slot-car scenarios. Left: static visual cue. Right: dynamic visual cue.
Data Collection and Learning Procedure: The agent collects 40 episodes per task, each episode consisting of 100 steps. To select an action in each step, the agent performs ε-greedy exploration by picking a random action with probability ε = 0.3 and the best action according to its current policy otherwise. The agent computes a policy after every t episodes, by first learning the observation-state mapping (state representation) and then computing policies π_1, ..., π_T (based on the outcomes of the learned χ and φ). To monitor the agent's learning progress, we measure the average reward the agent attains on T test episodes, i.e. one test episode of length 100 per task (using the greedy policy), amounting to 8000 experiences in total. To collect sufficient statistics, the whole experiment is repeated 10 times.
Policy Learning: We consider the model-free setting with continuous states S and discrete actions A, and solve it using nearest-neighbor Q-learning kNN-TD-RL (Martin H. et al., 2009) with k = 10. More recent approaches to model-free RL would be equally applicable (Mnih et al., 2015).
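For reference, here is a generic sketch of the k-nearest-neighbour flavour of Q-function approximation used above; it is our own minimal version for illustration, not the exact kNN-TD-RL algorithm of Martin H. et al. (2009).

import numpy as np

def knn_q_values(query, states, q_table, k=10):
    # states: (n, M) previously visited states; q_table: (n, |A|) stored
    # Q-values; returns distance-weighted Q-values for all actions.
    d = np.linalg.norm(states - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (1.0 + d[idx])      # closer neighbours weigh more
    w = w / w.sum()
    return w.dot(q_table[idx])    # (|A|,) action values at `query`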
Learning Strategies and Baselines: We compare five strategies. We run a) MT-LRP with the task-coherence prior, and several state representation baselines, for each of which we evaluate different M and report only the best performing M: b) robotic priors without the gated network, LRP (M = 4); c) principal component analysis (PCA) on the observations (M = 20); and d) raw observations (M = 768). Additionally, we evaluate a lower baseline in the form of a randomly moving agent and e) an upper baseline by applying RL on the known 2D position of the slot car under control (M = 2). We use the same RL algorithm for all methods. To learn the state representations with robotic priors, we base our implementation on Theano and lasagne, using the Adam optimizer with learning rate 0.005, batch size 100, Glorot's weight initialization and ω_t = 1, ω_p = 5, ω_c = 1, ω_r = 5, ω_τ = 10. Moreover, we apply an L1 regularization of 0.001 on φ. Additionally, we analyze the contribution of the task-coherence priors by applying MT-LRP with the full term L_con + L_sep, with each of the two terms in isolation, and without any task-coherence term."}, {"section_index": "6", "section_name": "5.2 RESULTS", "section_text": "We will now present the three main results of our experiments: (i) we show that MT-LRP enables the agent to extract better representations for RL; (ii) we provide insight in how the learner detects the task and encodes the state representations; and finally, (iii) we show the contribution of each of the task-coherence loss terms.
MT-LRP Extracts Better State Representations for RL: Figure 3 shows the learning curves for RL based on state representations learned by the different methods in the two-slot-car scenario (static visual cue on the left, dynamic on the right). No method reaches the performance of the upper baseline, mainly due to aliasing errors resulting from the low image resolution. The random baseline ranges around an average reward of -84.9 with standard error 0.72 and was omitted from the figure. The state representation learning baselines without robotic priors perform poorly because they are unable to identify the task-irrelevant distractions. MT-LRP gets very close to the performance of the upper baseline, especially for very low amounts of training data (d < 2500), whereas LRP does not even attain this level of performance for the full training set d = 8000 in the static task. The gap between MT-LRP and LRP increases even more if we add another car (Figure 5), because LRP can only learn one state representation for all three tasks. Including the three slot cars in this representation results in distractions for the RL method. However, in the dynamic-visual-cue scenario LRP-4 performs on par with MT-LRP. Surprisingly, running LRP with only two dimensions suffices to achieve the performance of MT-LRP. We will explain this phenomenon below. To conclude, MT-LRP allows to learn better policies than the baselines in all slot-car scenarios.
MT-LRP Detects All Tasks and Learns Good State Representations: To gain more insight into what is learned, we analyze the state representations extracted by MT-LRP and LRP. Figure 4 shows the state representation learned by MT-LRP for the static-visual-cue scenario.
Each point in the figure corresponds to one observation, markers indicate the task and colors the most active gate unit. We see that the first gate unit (blue) is always active for task 1 (circle), and the second gate unit for task 2. This shows that the task is detected with high accuracy. The task detector χ is also highly certain, which is reflected in the fact that its entropy evaluates to nearly zero. The states reflect the circular structure of the slot-car racetrack. We conclude that the network has learned to identify the tasks and to represent the position of the controlled car.
Figure 4: State representation learned per task (different markers) and per gate unit (different colors).
Figure 5: Reinforcement learning performance in the three-slot-car scenario with static visual cue (3 cars, 12000 training steps).
The RL experiments raised the question why LRP manages to solve the dynamic, but not the static-visual-cue scenario as well as MT-LRP. We hypothesize that, for the dynamic cue, LRP is able to extract the position of the car regardless of which lane it is in using a single linear mapping. Figure 6 confirms this hypothesis: LRP filters for the car's color (blue) along the track and assigns increasing weights to these pixels, which results in the extraction of its position. It also assigns constant weights along the track in the red channel, using the lane change of the two cars as an offset. This results in a mapping to two circles similar to Fig. 4, where the state encodes both the position and the task. Such a mapping can be expressed by a linear function precisely because the features that are relevant for one task do not reappear in another task (e.g. a blue slot car in track 1 does not appear in the task where the blue car is in track 2). However, there exists no equivalent linear mapping for the static-visual-cue variant of the slot-car task.
Figure 6: φ learned by LRP (M = 2) for the two-car dynamic-visual-cue tasks. Rows correspond to state dimensions, columns to RGB color channels.
We can generalize from this insight as follows. A single linear observation-state-mapping is sufficient for multiple tasks if the state representation for every task can be extracted by a linear function using only features that stay constant for all other tasks. If this is the case, then there is no need for decoupling the extraction of task and state.
Task-Coherence Performance: To understand the influence of the different task-coherence prior variants, we compared their performance in Figure 7. We see that relying solely on the robotic priors gives poor results, mainly because the gate units are not used properly: more than one gate unit is activated per task (χ has high entropy). Adding the task separation prior forces the network to use as many gates as possible (5 in our case), leading to bad state representations. Interestingly, using task consistency only gives roughly the same result as using task consistency and task separation.
Figure 7: Task coherence: average reward per episode (8000 samples).
Discussion: The experiments showed that MT-LRP is able to solve the representation and reinforcement learning tasks better than the baselines. Important questions for future work concern: the necessity and influence of the task-separation loss, in particular for short episode lengths and if the number of expected tasks exceeds the number of actual tasks; and transferring knowledge by
adding shared neural network layers before gating."}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "We have presented MT-LRP, a method for multi-task state representation learning with robotic priors. The method learns in an unsupervised fashion, solely based on the robot's own observations, actions, and rewards. Our experiments confirmed that MT-LRP is effective in simultaneously identifying tasks and learning task-specific state representations. This capability is beneficial for scaling reinforcement learning to realistic scenarios that require dedicated skills for different tasks.
We gratefully acknowledge the funding provided by the German Research Foundation (DFG, Exploration Challenge, BR 2248/3-1) and by the Alexander von Humboldt foundation through an Alexander von Humboldt professorship (funded by the German Federal Ministry of Education and Research). Additionally, Antonin Raffin was supported by an Erasmus+ grant."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
Alain Droniou, Serena Ivaldi, and Olivier Sigaud. Deep unsupervised network for multimodal perception, representation and classification. Robotics and Autonomous Systems, 71:83-98, September 2015.
Rico Jonschkowski, Sebastian Hofer, and Oliver Brock. Patterns for learning with side information. arXiv:1511.06429 [cs, stat], November 2015.
Andras Gabor Kupcsik, Marc Peter Deisenroth, Jan Peters, and Gerhard Neumann. Data-efficient generalization of robot skills with contextual policy search. In AAAI, 2013.
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.
S. Lange, M. Riedmiller, and A. Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, June 2012.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv:1504.00702 [cs], April 2015.
Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. In ICLR, San Juan, Puerto Rico, 2016.
Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, August 1999.
Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. arXiv, 2015.
Tom Zahavy, Nir Ben Zrihem, and Shie Mannor. Graying the black box: Understanding DQNs. arXiv:1602.02658 [cs], February 2016."}]
SJBr9Mcxl
[{"section_index": "0", "section_name": "UNDERSTANDING TRAINED CNNS BY INDEXING NEORON SELECTIVITY", "section_text": "Ivet Rafegas & Maria Vanrell\nComputer Vision Center. Universitat Autonoma de Barcelona Bellaterra, Barcelona (Spain).\nivet.rafegas, maria.vanrell}@uab.cat\nThe impressive performance and plasticity of convolutional neural networks to solve different vision problems are shadowed by their black-box nature and its consequent lack of full understanding. To reduce this gap we propose to describe the activity of individual neurons by quantifying their inherent selectivity to spe- cific properties. Our approach is based on the definition of feature selectivity in- dexes that allow the ranking of neurons according to specific properties. Here we report the results of exploring selectivity indexes for: (a) an image feature (color); and (b) an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find color selective neurons, such as a red-mushroom neuron in layer conv4 or class selective neurons such as dog-face neurons in layer conv5, and establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically draw how features and classes are represented through layers at a moment when the size of trained nets is growing and automatic tools to index can be helpful."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Several works have proposed different methodologies to address the understanding problem. Re- cently, inLi et al.(2016) two main groups of works are mentioned. On one side those works that deal with the problem from a theoretical point of view. These are works such as Montavon et al.[(2011) where kernel sequences are used to conclude that deep networks create increasingly better representations as the number of layer increases,Paul & Venkatasubramanian (2014) which explains why a deep learning network learns simple features first and that the representation com- plexity increases as the layers get deeper, Goodfellow et al.(2014) where an explanation for why an adversarial example created for one network is still valid in many others and they usually assign it the same (wrong) class, or[Arora et al.(2014) that presents algorithms for training certain deep generative models with provable polynomial running time. On the other side, an empirical point of view, which comprises approaches that pursuit methodologies to visualize intermediate features in the image space, or approaches that analyze the effect of modifying a given feature map in a neuron activation. Our work is framed in the first subset of empirical approaches.\nVisualizing intermediate features seeks to describe the activity of individual neurons. This descrip tion is the basis of this work hypothesis that is based on the idea that a proper understanding of the activity of the individual neurons allow us to draw a map of the CNN behavior. This behavior can be understood either in terms of relevant image features or in terms of the discriminative power of the neurons across the full architecture.\nThe first and most obvious way to describe the activity of a single neuron is given by the inherent sel of weights of the learned filters. These weights can be used to compare neurons between them. eithe.\nLuis A. 
Alexandre\nDepartment of Computer Science Universidade da Beira Interior Covilha , Portugal"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In parallel with the success of CNNs to solve vision problems, there is a growing interest in de- veloping methodologies to understand and visualize the internal representations of these networks How the responses of a trained CNN encode the visual information is a fundamental question for computer and eventually for human vision.\nA second method to describe neuron activity is projecting the filter weights into the image space trying to get the inherent feature that maximally activates the filter. The projection can be computec by composing the inversion of the layer operators under a specific neuron towards the image space this was called a Decoded Filter (DF) in Rafegas & Vanrell(2016). The resulting image represents an estimation of the feature that should highly activate such neuron. The disentangling algorithn that inverts the filter would give a good estimation of the feature image if most of the layer operators were invertible. However, when the number of non-invertible operators increases, the estimatiol becomes unintelligible. The appearance of the DFs can be seen in Fig. 1 of Rafegas & Vanrel (2016). They have also been explored by[Springenberg et al.(2015) for architectures with no pooling layers since pooling is the less invertible operator. They point out the interest of obtaining such a representation, since it would allow the understanding of neuron activity independently of the inpu image. However, the majority of proficient CNNs contain pooling layers.\nFinally, other works focus on proposing approaches able to reconstruct the input image given a feature map, going further of analyzing the individual neuron activity.Mahendran & Vedaldi|(2015 make use of optimization algorithms to search for an image whose feature map best matches a given feature map by incorporating natural image priors. Contrary, in|Dosovitskiy & Brox(2015) the authors propose to reconstruct the input image from its feature maps of a given convolutional network by training a new deconvolutional network to learn filter weights that minimize the image reconstruction error when these filters are applied to the image feature maps. With this approach they are also able to get an image reconstruction with natural priors\nLikewise, in Zeiler & Fergus(2014) in this work we pursuit visualizing the intrinsic feature of a neuron by analyzing the images that maximally activates a specific neuron. However, to avoid the\nwithin the same layer or versus neurons in similar CNNs which have been trained under different initialization conditions, as it is proposed by Li et al.[(2016). A direct visualization of these weights is intuitive when they belong to neurons of a first convolutional layer. However, when layers are stacked. that intuition disappears and the capability to understand the neuron activity is lost.\nA third way to describe neuron activity is by exploring the images that maximally activate the neu ron. One of the most relevant works pursuing the visualization of intermediate features, is the one proposed byZeiler & Fergus|(2014), where they project intrinsic features of neurons from the image hat have provoked a maximum spike to a certain neuron, the network representation is projectec into the image space by isolating them in the deconvolution approach Zeiler et al. (2010). 
By ob- serving different projections that maximally activate a certain neuron they get the intuition about the main features learned on the network. Later on, in Springenberg et al.(2015) the guided back- oropagation improves the deconvolution approach by a new way of inverting rectified linear (ReLu nonlinearities, achieving better visualizations of the activations. These approaches present a main drawback, their feature visualization is image-specific, since the maximum activation of a neuron not always generalize the intrinsic feature of the neuron. To solve this problem, in some works in stead of using the image that provokes the maximum activation, they use optimization techniques to generate an image that maximizes the activation. The key point of these works is using an appropri ate regularization in the generation process, otherwise, the resulting image appearance is unrealistic and difficult to understand.Simonyan et al.[(2014) propose a method to generate an image which is representative of a certain class by maximizing the score of this image to be classified in a cer tain class (or highly activates the specified neuron) with an L2-regularization. A similar work was performed afterwards in |Yosinski et al.(2015) but taking advantage of combining three different regularizations to achieve more recognizable images. Although they have explored different reg ularizations to achieve more realistic intrinsic feature representations, their visualizations present important artifacts that complicate the understanding of the intrinsic property.\nIn the second subset of empirical approaches, Alexey Dosovitskiy(2015) train a generative decon volutional network to create images from neuron activations. With this methodology, the variation of the activations enables the visualization of the differences in the generated images. A similar analysis is done byAubry & Russell (2015), but instead of forward-propagate different activations to the image space and comparing them, they observe the changes on neuron activations when simi- lar computer-generated images with different scene factors are introduced into a CNN. These works contribute in giving a deeper understanding on the internal CNN behavior. Both works conclude that there are specific neurons which are sensitive to color changes, point of views, scale or lighting confi gurations.\nconv1 % AUC conv2 % AUC conv3 % AUC conv4 % AUC cony5 % AUC 100% 95.01% 90.04% 85.02% 85.53% 95.47% 90.05% 85.03% 80.01% -80.06% aet 85.01% aer 75.18% 75.05% 90.43% 80.09% -85.02% 80.24% -75.24% 70.01% -70.03% 80.23% 78.93% 70.04% 66.32% 65.02% 76.32% 73.32% 66.58% 61.42% 62.38% 70.21% 7 57.28% 60.02% -54.01% Image Ranking Image Ranking Image Ranking Image Ranking Image Ranking\nFigure 1: Normalized activations of a subset of neurons for the first 400 ranked images through al convolutional layers. For each layer we plot the normalized activation for the neurons with highes and smallest AUC (Area Under Curve), and some other examples in between these extremes. Fo all neurons the highest normalized activations is 1, and the percentage of AUC is computed witl respect to the neuron AUC achieving the biggest area in the entire network.\nlack of generality of this approach, we define the Neuron Feature which is not based on a single maximum activation. The Neuron Feature is a weighted average version of a set of maximum activation images that capture the essential properties shared by the most important activations and makes it not to be image-specific. 
Additionally, our Neuron Feature overcomes the problem of unrealistic representations we mentioned earlier by directly averaging in the image space. In this way we achieve two main advantages: (a) keeping the properties of the natural images, and (b) providing a very straightforward approach to compute it.

Afterwards, we introduce the concept of a neuron selectivity index, which is used in human vision research to characterize the response of specific cells to specific stimuli (Shapley & Hawken (2011)). This concept allows us to achieve a higher level of abstraction in the understanding of a single neuron. In this work we provide two selectivity indexes which are different in their essence: a color selectivity index that quantifies the degree of response of a neuron to a specific color, and a class selectivity index that quantifies the degree of response of a neuron to a specific class label. Indexes are derived from the neuron feature or directly from the set of images with maximum activations. We analyze both indexes on a VGG-M network (Chatfield et al. (2014)) trained on ImageNet (Deng et al. (2009)) and we confirm their flexibility to cluster neurons according to their index values and extract conclusions in terms of their task in the net. By selecting color selective neurons we are able to outline how color is represented by the network. Curiously, we found some parallelism between color representation in the first convolutional layer and known evidence about the representation in the human visual system. Color-selective neurons also show some preferences towards specific colors which coincide with ImageNet color biases. Indexing on class selectivity we found highly class selective neurons like digital-clock at conv2, cardoon at conv3 and ladybug at conv5, much before the fully connected layers.

As we mentioned in the previous section, we propose to visualize the image feature that activates a neuron, whenever possible, by directly computing a weighted average of the N first images that maximally activate this neuron. We will refer to it as the Neuron Feature (NF).

In order to build the NF we need to calculate the activations associated to each individual neuron. They need to be ranked accordingly against the rest of the activations of the layer. For each neuron we select the set of images that achieve a minimum normalized activation value, but constrained to a maximum number of images for practical reasons. By normalized activation we mean the value of the maximum activation of a neuron for a specific input image, which is normalized by the maximum of these values achieved by the same neuron over all the images in the dataset.

In Fig. 1 we can see the behavior of the ranked normalized responses of a subset of neurons for every convolutional layer of the VGG-M CNN trained on ImageNet by Chatfield et al. (2014). The y-axis represents the normalized activation value of a single neuron to an image of the dataset. Images are ranked on the x-axis according to their activation value, from highest to lowest activation (we just plot the first 400 images for each neuron). Therefore, the first relative activation value is always 1 for all neurons, and then the normalized activation values decrease monotonically.
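This selection procedure, fixing a minimum normalized activation and a maximum number of images, can be sketched in a few lines. The snippet below is our own minimal illustration, not the authors' code; the function name is hypothetical, and the defaults (at most 100 images with at least 70% of the maximum activation) are the values the paper reports using.

```python
import numpy as np

def select_images_for_nf(activations, n_max=100, min_rel_act=0.7):
    """Pick the images used to build one neuron's NF.

    activations: shape (num_images,), the maximum spatial activation of
    a single neuron for each image in the dataset. Returns the indices
    of the retained images (ranked high to low) and their normalized
    activations w_j = a_j / a_max.
    """
    activations = np.asarray(activations, dtype=np.float64)
    a_max = activations.max()
    if a_max <= 0:                       # a dead neuron never responds
        return np.array([], dtype=int), np.array([])
    w = activations / a_max              # normalized activations in [0, 1]
    order = np.argsort(-w)[:n_max]       # rank images, keep at most n_max
    keep = order[w[order] >= min_rel_act]
    return keep, w[keep]
```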
This normalization allows us to compare different neuron behaviors, from neurons which are activated by most of the images (flatter behavior) to neurons that highly activate only for a subset of images and have very little activation for the rest (steeper behavior). In this figure we also provide the percentage of area for each plotted curve. This percentage is computed over the area of the neuron that presents the maximum AUC in the entire architecture. We can observe different behaviors in all layers. In general, we can state that in deeper layers the behavior of the neurons is steeper (lower AUC), i.e. neurons highly spike for a small number of images. However, in shallower layers the behavior is flatter, i.e. neurons highly spike for a lot of images. This is an expected behavior, since the image features spiking neurons in first layers (e.g. oriented edges) are shared by almost all the images, while the features spiking deeper neurons are more selective features (e.g. faces) that only spike for specific images. The observation of the responses confirms the adequacy of our assumption to fix a minimum value for the activation and a maximum number of images to capture the most important activations for all the neurons.¹ Similar observations have been made for other networks like VGG-S and VGG-F (Chatfield et al. (2014)).

Figure 2: Neuron Feature (NF) visualizations (top) for 5 neurons of the different convolutional layers (conv1-conv5) of VGG-M, with their corresponding 100 cropped images (bottom). We scale all layers to the same size due to space constraints.

Figure 3: Examples of NFs for each convolutional layer (conv1-conv5) of the network VGG-M (see section 4.1): (a) 20 examples of structured NFs, (b) blurred NFs. Although the size of the NFs increases through layers, we scale them to the same size. Original sizes are: 7x7x3, 27x27x3, 75x75x3, 107x107x3 and 139x139x3 for conv1, conv2, conv3, conv4 and conv5, respectively.

Thus, the NF is computed as:

$$\mathrm{NF}(n_{L,i}) = \frac{1}{N_{max}} \sum_{j=1}^{N_{max}} w_{j,i,L}\, I_j$$

where $w_{j,i,L}$ is the relative activation of the j-th cropped image, denoted as $I_j$, of the i-th neuron $n_{L,i}$ at layer L. The relative activation is the activation $a_{j,i}$ of a neuron, given an input image, with respect to its maximum activation obtained for any image, $w_{j,i,L} = a_{j,i} / a_{max,i}$, where $a_{max,i} = \max_k a_{k,i}, \forall k$.

In Fig. 2 we can see some NFs and their corresponding set of the first 100 maximum activations, and in Fig. 3 (a) we can see a selected subset of 20 NFs per layer. In this image we can identify specific shapes that display the intrinsic property that fires a single neuron. At first glance, we can see how in this particular network the first two layers are devoted to basic properties: oriented edges of different frequencies and in different colors in the first layer; textures, blobs, bars and more specific curves in the second layer. The rest of the layers seem to be devoted to more complex objects. We can see that dog and human faces, cars and flowers are detected at different scales in different layers, since the size of the NF and their corresponding cropped images increases with depth. This visualization of the neuron activity can be seen as a way to visualize a trained vocabulary of the CNN that opens multiple ways to analyze the global behavior of the network from its single units. However, not all neurons present such a clear tuning to an identifiable shape. Some neurons present a blurred version of the NF, such as those in Fig. 3 (b). The level of blurring is directly related to a high variability between the maximally activated images for a neuron.

At this point, we want to make a short parenthesis to relate the previous representational observations with the scientific problem about neural coding that is the focus of attention in visual brain research (Kriegeskorte & Kreiman (2011)). We are referring to the hypothesis about distributed representations that encode object information in neuron population codes, which co-exist with strong evidence of neurons which are only activated by a very specific object. In line with this idea, we invite speculation that neurons presenting a highly structured NF could be closer to localist code neurons, while neurons with a blurred NF are closer to a distributed code. We return to this discussion later on in sections 4.3 and 5.

Finally, we want to add a further analysis about how the neuron feature is related to the neuron activity it is representing.
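Given the retained crops and their relative activations, the NF above is just a weighted average in image space. A minimal sketch of the computation (ours, with a hypothetical function name; it assumes the receptive-field crops I_j have already been extracted and resized to a common shape):

```python
import numpy as np

def neuron_feature(crops, rel_activations):
    """NF(n_{L,i}) = (1 / N_max) * sum_j w_{j,i,L} * I_j.

    crops: shape (N_max, H, W, 3), the maximally-activating image crops.
    rel_activations: shape (N_max,), the weights w_{j,i,L} in [0, 1].
    """
    w = np.asarray(rel_activations, dtype=np.float64)
    imgs = np.asarray(crops, dtype=np.float64)
    return (w[:, None, None, None] * imgs).sum(axis=0) / len(w)
```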
In Fig. 4 we plot the level of the neuron responses when the input image is the neuron's own NF. We can observe a high degree of activation (in green) between the NF and the response of the net to this feature. However, we have some disagreements between the NF and the neuron activations; an important example is shown in layer 2, which is curiously bigger than in layers 3 and 4. This is explained by the high number of dead neurons² and also by a higher presence of texture-selective neurons, as observed in Fig. 3. Another example, which is more understandable, is the clear increase of disagreement that happens through layers 3, 4 and 5, which seems to be explained by an increase in invariance that is obvious when the size of the image increases.

¹ The results are shown for a maximum number of images equal to $N_{max} = 100$ and a minimum activation value over 70% of the maximum activation. We plot these values in Fig. 1.
² By dead neurons we mean neurons that are hardly activated by any image of the dataset.

Figure 4: Number of neurons and degree of activation as a response to their own NF (response bins from below 0% up to [90,100]%, for conv1-conv5). Activation values are normalized to a specific range within each layer.
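The agreement check behind Figure 4 amounts to feeding a neuron its own NF and normalizing the response by that neuron's maximum activation. A sketch of that check (ours; `layer_fn` is an assumed callable that maps an image to the feature maps of the neuron's layer):

```python
import numpy as np

def nf_self_response(layer_fn, nf, neuron_index, a_max):
    """Normalized response of neuron `neuron_index` to its own NF.

    layer_fn(nf) is assumed to return an array of shape
    (num_neurons, H', W'); a_max is the neuron's maximum activation
    over the whole dataset. A value near 1 means the neuron strongly
    agrees with its NF; a low value signals a disagreement as in Fig. 4.
    """
    fmaps = np.asarray(layer_fn(nf))
    return fmaps[neuron_index].max() / a_max
```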
In this section we propose to describe neurons by their inherent response to a specific property, using an index. The index has to allow ranking them in a proportional order between their response and the existence of the property in the input image. Therefore, we translate the problem of describing neuron activity into the problem of proposing methods which are able to quantify specific image facets that correlate with the degree of activation of the neuron holding such a property. A selectivity index of a single unit is a flexible and independent method for discriminating or clustering between neurons inside the same network. Selectivity indexes can be defined either for image features or for image labels. In what follows, we propose two selectivity indexes, one of each kind."}, {"section_index": "3", "section_name": "3.1 COLOR SELECTIVITY INDEX", "section_text": "Color selectivity is a property that can be proved in specific neurons of the human brain. The level of activation of the neuron when the observer is exposed to a stimulus with a strong color bias, and its corresponding low activation when the color is not present, is the object of attention in vision research that pursues the understanding of how color is coded in the human visual system (Shapley & Hawken (2011); Conway & Tsao (2009)).

Here we propose a method to compute a color selectivity index for neurons in artificial neural networks. We propose to base it directly on the image properties of the NF we have defined above. We quantify the selectivity to a specific chromaticity directly from the color distribution of the NF. We define this index as the angle between the first principal component (v) of the color distribution of the NF and the intensity axis (b) of the Opponent Color Space (OPP). To compute v we use a weighted Principal Component Analysis (Delchambre (2014)) that allows us to strengthen the selectivity of small color areas. Weights are applied to each pixel in order to reinforce those pixels that are shared by most cropped images and that highly contribute to the NF; therefore, the weights are the inverse of the standard deviation. In this way, an NF defined by cropped images with different colors will tend to be represented by a grayish image, its principal component will be close to the intensity axis in the OPP color space, and it will receive a low selectivity index. We formulate this index, normalizing the angle (in degrees) by 90°, as follows:

$$\alpha = \frac{1}{90^{\circ}} \arccos\left( \frac{b \cdot v}{\|b\| \|v\|} \right)$$

Other selectivity indexes that can be derived from this are those related to color attributes. We can easily extract color name labels using a color naming approach such as Benavente et al. (2008) and directly define color selectivity to basic names such as red or green, among others.

Figure 5: Conv1 NFs sorted by their color selectivity index.

Class selectivity is a property of a neuron that can help to establish its discriminative power for one specific class, or can allow clustering neurons according to the ontological properties of their class labels.

We propose a method to compute a class selectivity index for individual neurons by compiling the class labels of the images that maximally activate this neuron into a single descriptor. We define class selectivity from the set of class labels of the N images used to build the NF. To quantify this index we build the class label distribution of the full set of images. As in the color selectivity index, we weight the significance of a class label by the relative activation of its image. Thus, the relative frequency of each class c for a certain neuron is defined as:

$$f_c = \frac{\sum_{j=1}^{N_c} w_{j,i}}{\sum_{j=1}^{N} w_{j,i}}$$

where $N_c$ refers to the number of images, among the N cropped images activating this neuron, that belong to class c.

Given the densities for all the classes, our class selectivity index is finally defined as follows:

$$\gamma = \frac{N - M}{N - 1}$$

where M is the minimum number of classes that covers a pre-fixed ratio, th, of the neuron activation; this can be denoted as $\sum_{c=1}^{M} f_c \geq th$. This threshold allows us to avoid considering class labels with a very small activation weight.
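Both indexes reduce to short computations over the NF and its source crops. The sketch below is our reading of the two definitions, not the authors' code: the OPP intensity axis is taken as the normalized (1, 1, 1) direction, and the per-pixel weights (the inverse standard deviations mentioned above) are assumed to be supplied by the caller.

```python
import numpy as np

OPP_INTENSITY = np.ones(3) / np.sqrt(3)   # intensity axis b of OPP

def color_selectivity(nf_pixels, weights):
    """alpha = arccos(b.v / (|b||v|)) / 90 deg, with v the first weighted
    principal component of the NF color distribution.

    nf_pixels: shape (num_pixels, 3); weights: shape (num_pixels,).
    """
    mu = np.average(nf_pixels, axis=0, weights=weights)
    centered = (nf_pixels - mu) * np.sqrt(weights)[:, None]
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v = vt[0]                              # first principal component
    cos = abs(v @ OPP_INTENSITY) / np.linalg.norm(v)
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))) / 90.0)

def class_selectivity(labels, weights, th=1.0):
    """gamma = (N - M) / (N - 1), with M the minimum number of classes
    whose weighted frequencies f_c cover the ratio `th`."""
    labels, weights = np.asarray(labels), np.asarray(weights, dtype=float)
    n = len(labels)
    totals = {}
    for c, w in zip(labels, weights):
        totals[c] = totals.get(c, 0.0) + w
    f = np.sort(np.array(list(totals.values())) / weights.sum())[::-1]
    m = min(int(np.searchsorted(np.cumsum(f), th - 1e-9) + 1), len(f))
    return (n - m) / (n - 1), m
```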
Jointly with the index value, the selectivity provides the set of M classes that describe the neuron selectivity and their corresponding relative frequency values.

Therefore, a low class selectivity index indicates a poor contribution of this neuron to a single class (minimum is 0 when M = N), while a high value (maximum is 1) indicates a strong contribution of this neuron to a single class. In between we can have different degrees of selectivity to different numbers of classes. Obviously, this index is irrelevant for the last fully connected layers in a CNN, but it allows grouping related neurons across different convolutional layers.

Here we want to point out that this index can also contribute to giving some insights into the problem of how information is coded through layers, in the debate of localist versus distributed neural codes we mentioned before (Kriegeskorte & Kreiman (2011)). Neurons with a high class selectivity index should be in line with a localist code, while neurons with a low class selectivity index should be part of a distributed code. The way the index is defined allows a large range of interpretations in between these two kinds of coding, as has been outlined in the visual coding literature."}, {"section_index": "4", "section_name": "4 RESULTS", "section_text": "In this section we report some empirical results to show how the proposed selectivity indexes perform and what representational conclusions we can extract from the subsets of neurons sharing indexed properties."}, {"section_index": "5", "section_name": "4.1 EXPERIMENTAL SETUP", "section_text": "In this paper we analyze the neurons of a CNN architecture trained on the ImageNet ILSVRC dataset (Deng et al. (2009)) (using a subset of 1.2M images classified in 1,000 categories). We report the results for the VGG-M CNN that was trained by Chatfield et al. (2014) for a generic visual task of object recognition. The details of the CNN architecture are given in Table 1. We selected this network since it has a similar structure to those which have been reported as having a representational performance that competes with human performance (as was proved in Cadieu et al. (2014)). Nevertheless, we have obtained similar results for VGG-F and VGG-S, which are provided in Chatfield et al. (2014). We used the Matconvnet library provided by Vedaldi & Lenc (2015) for all the experiments.

Figure 6: Neurons with different color selectivity indexes (α ranging from 0.03 up to 0.97, across conv2-conv5). Images in 4 rows (1st and 3rd rows are NFs, 2nd and 4th rows are sets of cropped images that maximally activate the neuron)."}, {"section_index": "6", "section_name": "4.2 COLOR SELECTIVITY", "section_text": "General purpose CNN architectures are usually trained on RGB color images. However, there is a strong belief in the computer vision community that color is a dispensable property. The results we obtain by indexing color selective neurons make us conclude that there is no basis for such a belief. Results show that color is strongly entangled at all levels of the CNN representation. In a preliminary experiment we have tested a subset of ImageNet images with VGG-M in their original color and the same subset in a gray scale representation. Classification results show a considerable decrease:
while original RGB images are classified with a 27.50% top-1 error and 10.14% top-5 error, gray-scale image versions present 51.12% and 26.37% top-1 and top-5 errors, respectively.

In a first experiment we extract how many NFs are related to color in each convolutional layer using the proposed color selectivity index. The bars in Fig. 8 plot the relative quantity of neurons that are color selective compared to those that are not. Grey represents the ratio of neurons that do not spike in the presence of a color, and reddish represents neurons that are highly activated by the presence of a color. In the graphic we can observe that shallow layers are the main ones responsible for the color representation of the images: 50% and 40% of neurons are color selective in layers conv1 and conv2, respectively. Nevertheless, we still found around 25% of color selective neurons in deeper layers. Therefore, although neurons in deeper layers tend to be color invariant, an important part of the representation is devoted to color, which reinforces the discriminative power of color in object recognition. In Fig. 6 we show some examples of NFs with different degrees of color selectivity at different layers of the network, together with the corresponding cropped images.

Regarding color representation in layer 1, we want to point out two more observations derived from the NFs (see Fig. 5): (a) selectivity to different spatial frequencies is only tackled by gray-level neurons; and (b) four main color axes emerge (black-white, blue-yellow, orange-cyan and cyan-magenta). Curiously, these two observations correlate with evidence in the human visual system (Shapley & Hawken (2011)).

Table 1: VGG-M architecture designed by Chatfield et al. (2014), where M x N x P corresponds to the number of filters and the number of rows and columns of the filters, respectively. St. and pad. refer to stride and padding, respectively; each convolution is followed by a ReLU, with LRN (local response normalization) and the corresponding pooling (pool) indicated if applied.

Figure 8: Number of neurons and degree of color selectivity through layers (index bins from [0,0.1) up to ≥0.8, for conv1-conv5). Grayish bars are for low index values and reddish for high index values.

In a second experiment, we analyze how color selective neurons from all layers cover the color space. Figure 7 displays the distribution of color selective neurons with α ≥ 0.40. Each NF is plotted at the hue angle that represents the projection of its first principal component on the OPP chromaticity plane (red-green and blue-yellow components). Dashed rings identify different convolutional layers, from conv1 (inner ring) to conv5 (outer ring), linking the NFs that belong to the same layer. We can appreciate the emergence of an axis (from orange to cyan) that connects a crowded area of color selective neurons. We can add a low population of NFs in the magenta area, which becomes more crowded on the opposite side, where green and yellow selectivity has several neurons. The interest of this explanation relies on the fact that a similar distribution appears in the ImageNet chromaticity distribution, which is plotted at the bottom of the same image, where a similar interpretation in terms of emergent axes can be done. A more in-depth study is required to prove this correlation, but it illustrates how neuron selectivity helps in the understanding of how a specific property is represented by the CNN.
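For the hue wheel of Figure 7, each color-selective neuron is placed at the angle of its first principal component on the OPP chromaticity plane. A small sketch of that projection (ours; the two chromatic axes below are one common choice of red-green and blue-yellow opponent directions, which the text does not spell out):

```python
import numpy as np

# Assumed red-green and blue-yellow chromatic axes of the OPP space.
OPP_CHROMA = np.array([[1.0, -1.0, 0.0],
                       [1.0, 1.0, -2.0]], dtype=float)
OPP_CHROMA /= np.linalg.norm(OPP_CHROMA, axis=1, keepdims=True)

def hue_angle(principal_component):
    """Angle (degrees, in [0, 360)) of a neuron's first principal
    component projected onto the OPP chromaticity plane."""
    rg, by = OPP_CHROMA @ np.asarray(principal_component, dtype=float)
    return float(np.degrees(np.arctan2(by, rg)) % 360.0)
```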
Figure 7: Distribution of color selective neurons on a hue color space through layers. Maximum activation images for the 4 top color selective neurons of each layer. Dashed rings connect NFs of color selective neurons through layers, from the inner ring (conv1) to the outer ring (conv5).

Figure 9: Number of neurons and degree of class selectivity through layers (index bins from (0,0.1) up to [0.9,1], for conv1-conv5). Grayish bars are for low index values and bluish for high index values.

Following on with the analysis of ranking neurons by their response to a certain property, here we focus on the proposed selectivity index that relates to image labels instead of to an image property: the class selectivity index, which only applies to classification networks. We report the results of different experiments where we have fixed th = 1, which means we consider all the class labels for the N = 100 images that maximally activate the neuron. As we mentioned before, this index can enlighten how classes are encoded through the net layers, which again can be related to the scientific problem of how general object recognition is encoded in the human brain. Here we hypothesize that the difference between localist and distributed codes could correlate with the idea of neurons highly selective to a single class versus neurons highly selective to several classes; we resume this later in section 5.

In a first experiment we analyze how many neurons present different degrees of class selectivity through layers. The bars in Fig. 9 plot the relative quantity of neurons that are class selective compared to those that are not. Grey represents the ratio of neurons that are not activated by a single class, and bluish represents neurons that are highly activated by a single class. Opposite to what we showed about color selectivity, we found most of the class selective neurons in deeper layers, and no class selectivity in shallow layers, as expected. We have moved from a very basic image property, color, to a very high level property, the class label. This fact corroborates the idea that CNNs start by defining basic feature detectors that are shared by most of the classes, and the neurons become more specialized when they belong to deeper layers, representing larger areas in the image space and therefore more complex shapes. We start to have neurons with relevant class selectivity in layer conv3, where 5% of neurons are quite class selective and we found some neurons with a degree of selectivity close to 1. These ratios progressively increase up to layer conv5, where we have more than 50% of neurons with a class selectivity index greater than 0.6, which means that we have less than 40 different classes activating the neuron, a very selective ratio considering the number of classes of the ImageNet dataset. In the same layer, 20% of neurons present a high class selectivity index, which means less than 20 different classes. Further experiments should explore how this graphic evolves by moving from current class labels, which are on the leaves of the ImageNet ontology, towards higher nodes with more generic classes.

Secondly, we have visualized the properties of a set of images presenting different degrees of class selectivity in Fig. 10 for different levels of depth. We visualize each neuron with its NF visualization and the corresponding cropped images. We also show two tag clouds for each neuron; they visualize the importance of each class label. With an orange frame we plot the leaf classes of the ImageNet ontology, while in the green frame we plot generic classes. This second analysis could help finding neurons that are specialized to a general semantic concept that different final classes share. Note that neurons with a high class selectivity index have a set of cropped images that we can identify as belonging to the same class.

Finally, we stress the utility of ranking images by selectivity indexes in Fig. 11, where we show interesting neurons in different convolutional layers that present high values for both selectivity indexes, i.e. neurons which are both color and class selective.

Figure 10: Neurons with different class selectivity indexes (γ ranging from 0.10 up to 0.99, across conv2-conv5). For each neuron, two images (top: NF, bottom: cropped images) and two tag clouds (top: leaf classes, bottom: all classes in the ontology).

Figure 11: Examples of neurons with high color and class selectivity indexes (conv2-conv5; γ between 0.40 and 0.93, α between 0.50 and 0.79)."}, {"section_index": "7", "section_name": "5 CONCLUSIONS", "section_text": "In this paper we propose a framework to analyze a trained CNN by dissecting individual neurons using their indexes of selectivity to specific properties. We have proposed two properties of a different nature: (a) color, a low-level image property that we have shown to be entangled at all the representation levels of the net; and (b) class label, a high-level image property that can be analyzed at different levels of abstraction. We have shown that while the number of color selective neurons decreases with depth, the number of class selective neurons increases.
In this line of describing the activity of individual neurons, we have also proposed to visualize this activity with what we have called the neuron feature (NF), which allows interesting structures that are shared by the images that highly activate a neuron to emerge.

The proposed work has made us speculate about two different ways to address the coding properties of individual neurons (localist versus distributed). Firstly, we have mentioned the possibility that a blurred NF, i.e. one without a clear structure, belongs to a neuron that can be part of a distributed code,
where the neuron does not represent selectivity to a single shape, but maybe to diverse shapes that can be part of a code in deeper neurons. Secondly, we speculate about the possibility that neurons with a high class selectivity index can represent a localist code, and part of a distributed one when it is low. In parallel, the analysis of the color selective neurons has made some parallelism arise between color representation in the 1st convolutional layer and known evidence about the representation in the human visual system.

As further work we need to fully exploit the potential of the indexes in different CNN architectures and to define new selectivity indexes, like shape or texture, that could be a perfect complement to the current ones."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.

Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In ICML, pp. 584-592, 2014.

Mathieu Aubry and Bryan C. Russell. Understanding deep features with computer-generated imagery. In ICCV, 2015.

Robert Benavente, Maria Vanrell, and Ramon Baldrich. Parametric fuzzy sets for automatic color naming. JOSA, 25(10):2582-2593, Oct 2008.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014. URL http://arxiv.org/abs/1412.6572

Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. CVPR, 2015.

Charles F. Cadieu, Ha Hong, Daniel L. K. Yamins, Nicolas Pinto, Diego Ardila, Ethan A. Solomon, Najib J. Majaj, and James J. DiCarlo. Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS Computational Biology, 10(12), Dec 2014.

K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
Bevil R. Conway and Doris Y. Tsao. Color-tuned neurons are spatially clustered according to color preference within alert macaque posterior inferior temporal cortex. Proc Natl Acad Sci U S A, 106(42):18034-18039, 2009.

L. Delchambre. Weighted principal component analysis: a weighted covariance eigendecomposition approach. MNRAS, 446:3545-3555, 2014.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.

Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. CoRR, abs/1506.02753, 2015. URL http://arxiv.org/abs/1506.02753

Nikolaus Kriegeskorte and Gabriel Kreiman. Visual Population Codes: Toward a Common Multivariate Framework for Cell Recording and Functional Imaging. MIT Press, 2011.

Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John E. Hopcroft. Convergent learning: Do different neural networks learn the same representations? In ICLR, 2016.

Ivet Rafegas and Maria Vanrell. Color spaces emerging from deep convolutional networks. In CIC, 2016.

Robert Shapley and Michael J. Hawken. Color in the cortex: Single- and double-opponent cells. VR, 51(7):701-717, 4 2011.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for simplicity: The all convolutional net. ICLR, 2015.

A. Vedaldi and K. Lenc. Matconvnet - convolutional neural networks for matlab. 2015.

Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. In Deep Learning Workshop, (ICML), 2015.

Matthew D. Zeiler, Dilip Krishnan, Graham W. Taylor, and Rob Fergus. Deconvolutional networks. In CVPR, 2010."}]
ByG4hz5le
[{"section_index": "0", "section_name": "ADAPTIVE FEATURE ABSTRACTION FOR TRANSLATING VIDEO TO LANGUAGI", "section_text": "Yunchen Pu\nDepartment of Electrical and Computer Engineering Duke University\nDepartment of Electrical and Computer Engineering Duke University\nDepartment of Electrical and Computer Engineering Duke University"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Accurately understanding the fast-growing number of videos poses a significant challenge for com. puter vision and machine learning. An important component of video analyasis involves generating. natural-language video descriptions, i.e., video captioning. Inspired by the successful deploymen of the encoder-decoder framework used in machine translation (Cho et al.[2014) and image captior generation (Vinyals et al.]2015] Pu et al.]2016] Gan et al.]2017), most recent work on video cap tioning (Venugopalan et al.[2015}[Yu et al.[2016) employs a 2-dimentional (2D) or 3-dimentiona (3D) Convolutional Neural Network (CNN) as an encoder, mapping an input video to a compact fea. ture vector representation; a Recurrent Neural Network (RNN) is typically employed as a decoder. unrolling this feature vector to generate a sequence of words of arbitrary length.\n*Most of this work was done when the author was an intern at NEC Labs America\nMartin Renqiang Min Machine Learning Group NEC Laboratories America anoln\nMartin Renqiang Min"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "A new model for video captioning is developed, using a deep three-dimensional Convolutional Neural Network (C3D) as an encoder for videos and a Recurrent Neural Network (RNN) as a decoder for captions. A novel attention mechanism with spatiotemporal alignment is employed to adaptively and sequentially focus on different layers of CNN features (levels of feature \"abstraction'), as well as local spatiotemporal regions of the feature maps at each layer. The proposed approach is evaluated on the YouTube2Text benchmark. Experimental results demonstrate quantitatively the effectiveness of our proposed adaptive spatiotem- ooral feature abstraction for translating videos to sentences with rich semantic structures.\nDespite achieving encouraging successes in video captioning, previous models suffer from impor-. tant limitations. First, the rich contents in an input video is often compressed to a single compact feature vector for caption generation; this approach is prone to miss detailed spatiotemporal infor-. mation. Secondly, the video feature representations are typically extracted from the output of a. CNN at a manually-selected fixed layer, which is incapable of modeling rich context-aware seman-. tics that requires focusing on different abstraction levels of features. As investigated in Zeiler &. Fergus(2014); Simonyan et al.(2014), the features from layers at or near the top of a CNN tends. to focus on global semantic discriminative visual percepts, while low-layer feature provides more. local, fine-grained information. It is desirable to select/weight features from different CNN layers.\nadaptively when decoding a caption, selecting different levels of feature abstraction by sequentiall emphasizing features from different CNN layers. In addition to focusing on features from differ. ent CNN layers, it is also desirable to emphasize local spatiotemporal regions in feature maps at. 
To realize these desiderata, our proposed decoding process for generating a sequence of words dynamically emphasizes different levels (CNN layers) of 3D convolutional features, to model important coarse or fine-grained spatiotemporal structure. Additionally, the model employs different contexts and adaptively attends to different spatiotemporal locations of an input video. While some previous models use 2D CNN features to generate video representations, our model adopts the features from a pre-trained deep 3D convolutional neural network (C3D); such features have been shown to be natural and effective for video representation, action recognition and scene understanding (Tran et al., 2015), by learning spatiotemporal features that can provide better appearance and motion information. In addition, the proposed model is inspired by the recent success of attention-based models that mimic human perception (Mnih et al., 2014; Xu et al., 2015).

The principal contributions of this paper are as follows: (i) a new video-caption-generation model is developed by dynamically modeling context-dependent feature abstractions; (ii) new attention mechanisms are employed to adaptively and sequentially emphasize different levels of feature abstraction (CNN layers), while also imposing attention within local spatiotemporal regions of the feature maps at each layer; (iii) 3D convolutional transformations are introduced to achieve spatiotemporal and semantic feature consistency across different layers; (iv) the proposed model achieves state-of-the-art performance on the Youtube2Text benchmark. We call the proposed algorithm Adaptive SpatioTemporal representation with dynAmic abstRaction (ASTAR)."}, {"section_index": "3", "section_name": "2 METHOD", "section_text": "Consider N training videos, the nth of which is denoted X(n), with associated caption Y(n). The t-th word of caption Y(n), denoted y_t^(n), is a 1-of-V ("one-hot") encoding vector, with V the size of the vocabulary."}, {"section_index": "4", "section_name": "2.1 CAPTION MODEL", "section_text": "For notational simplicity, henceforth we omit superscript n. The t-th word in a caption, y_t, is mapped to an M-dimensional vector w_t = W_e y_t, where W_e ∈ R^{M×V} is a learned word-embedding matrix, i.e., w_t is a column of W_e chosen by the one-hot y_t. The probability of caption Y = {y_t}_{t=1,...,T} is defined as

$$p(Y|A) = p(y_1|A) \prod_{t=2}^{T} p(y_t|y_{<t}, A) \quad (1)$$

Specifically, the first word y_1 is drawn from p(y_1|A) = softmax(V h_1), where h_1 = tanh(C a_{L+1}). Bias terms are omitted for simplicity throughout the paper. All the other words in the caption are then sequentially generated using an RNN, until the end-sentence symbol is generated. The conditional distribution p(y_t|y_{<t}, A) is specified as softmax(V h_t), where h_t is recursively updated as h_t = H(w_{t-1}, h_{t-1}, z_t). V is a matrix connecting the RNN hidden state to a softmax, for computing a distribution over words. z_t = φ(h_{t-1}, a_1, ..., a_L) is the context vector used in the attention mechanism, capturing the relevant visual features associated with the spatiotemporal attention (also weighting the level of feature abstraction), as detailed in Sec. 2.2. The transition function H(·) is implemented with a Long Short-Term Memory (LSTM) network (Hochreiter & Schmidhuber, 1997).
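A minimal sketch of this decoding step, in PyTorch code of our own (class and argument names are hypothetical; the context vector z_t is assumed to come from the attention module of Sec. 2.2):

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """One-step LSTM decoder for p(y_t | y_<t, A), as described above."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, ctx_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # plays W_e
        self.cell = nn.LSTMCell(embed_dim + ctx_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)       # the matrix V

    def step(self, y_prev, state, z_t):
        """h_t = H(w_{t-1}, h_{t-1}, z_t); returns word logits."""
        w = self.embed(y_prev)                 # w_{t-1} = W_e y_{t-1}
        h, c = self.cell(torch.cat([w, z_t], dim=-1), state)
        return self.out(h), (h, c)             # softmax(V h_t) applied outside
```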
Given the video X (with features A) and associated caption Y, the objective function is the sum of the log-likelihood of the caption conditioned on the video representation:

$$\log p(Y|A) = \log p(y_1|A) + \sum_{t=2}^{T} \log p(y_t|y_{<t}, A), \quad (2)$$

Equation (2) is a function of all model parameters to be learned; they are not explicitly depicted in (2) for notational simplicity. Further, (2) corresponds to a single video-caption pair, and when training we sum over all such training pairs."}, {"section_index": "5", "section_name": "2.2 ATTENTION MECHANISM", "section_text": "We introduce two attention mechanisms when predicting word y_t: (i) spatiotemporal-localization attention, and (ii) abstraction-level attention; these, respectively, measure the relative importance of a particular spatiotemporal location and a particular CNN layer (feature abstraction) for producing y_t, based on the word-history information y_{<t}.

To achieve this, we seek to map a_l → â_l, where the 4D tensors â_l all have the same dimensions, are embedded into the same semantic space, and are aligned spatiotemporally. Specifically, â_l, l = 1, ..., L−1, are aligned in the above ways with a_L. To achieve this, we filter each a_l, l = 1, ..., L−1, and then apply max-pooling; the filters seek semantic alignment of the features (including the feature dimension), and the pooling is used to spatiotemporally align the features with a_L. Specifically, consider

$$\hat{a}_l = f\Big( \sum_{k=1}^{n_F^l} a_l(k) * U_{k,l} \Big), \quad (3)$$

for l = 1, ..., L−1, and with â_L = a_L. Here a_l(k) is the 3D feature map (tensor) for dictionary element k ∈ {1, ..., n_F^l} at layer l, and U_{k,l} is a 4D tensor. The convolution * in (3) operates in the three shift dimensions, and a_l(k) * U_{k,l} manifests a 4D tensor. Function f(·) is an element-wise nonlinear activation function, followed by max pooling, with the pooling dimensions meant to realize final dimensions consistent with a_L. Consequently, â_{l,i} ∈ R^{n_F} is a feature vector.

With {â_l}_{l=1,...,L} semantically and spatiotemporally aligned, we now seek to jointly quantify the value of a particular spatiotemporal region and a particular feature layer ("abstraction") for the prediction of the next word. For each â_l, the attention mechanism generates two positive weights, α_{ti} and β_{tl}, which measure the relative importance of location i and layer l for producing y_t based on y_{<t}. Attention weights α_{ti} and β_{tl} and context vector z_t are computed as

$$e_{ti} = w^{\top} \tanh(W_a a_i + W_{ha} h_{t-1}), \qquad \alpha_{ti} = \mathrm{softmax}(\{e_{ti}\}), \qquad s_{tl} = \sum_{i} \alpha_{ti}\, \hat{a}_{l,i}, \quad (4)$$

$$b_{tl} = w_{\beta}^{\top} \tanh(W_s s_{tl} + W_{hb} h_{t-1}), \qquad \beta_{tl} = \mathrm{softmax}(\{b_{tl}\}), \qquad z_t = \sum_{l=1}^{L} \beta_{tl}\, s_{tl}, \quad (5)$$

where a_i is a vector composed by stacking {â_{l,i}}_{l=1,...,L} (all features at position i). e_{ti} and b_{tl} are scalars reflecting the importance of spatiotemporal region i and layer l to predicting y_t, while α_{ti} and β_{tl} are the relative weights of this importance, reflected by the softmax output.
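Equations (4) and (5) amount to two chained soft-attention steps. The following is our own sketch of that computation (the parameter tensors are assumed to be given with compatible shapes; this is an illustration, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def two_level_attention(a, h_prev, Wa, Wha, w, Ws, Whb, v):
    """Sketch of Eqs. (4)-(5): spatiotemporal then layer attention.

    a: aligned features of shape (num_locations, L, n_F);
    h_prev: decoder state of shape (hidden_dim,).
    """
    n_loc, L, n_f = a.shape
    a_i = a.reshape(n_loc, L * n_f)                    # stack over layers
    e = torch.tanh(a_i @ Wa.T + h_prev @ Wha.T) @ w    # (num_locations,)
    alpha = F.softmax(e, dim=0)                        # Eq. (4)
    s = (alpha[:, None, None] * a).sum(dim=0)          # s_tl, shape (L, n_F)
    b = torch.tanh(s @ Ws.T + h_prev @ Whb.T) @ v      # (L,)
    beta = F.softmax(b, dim=0)                         # Eq. (5)
    z = (beta[:, None] * s).sum(dim=0)                 # context vector z_t
    return z, alpha, beta
```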
In (4) we provide attention in the spatiotemporal dimensions, with that spatiotemporal attention shared across all L (now aligned) CNN layers. In (5) the attention is further refined, focusing attention in the layer dimension.

We present results on the Microsoft Research Video Description Corpus (YouTube2Text) (Chen & Dolan, 2011). YouTube2Text contains 1970 Youtube clips, and each video is annotated with around 40 sentences. For fair comparison, we used the same splits as provided in Yu et al. (2016), with 1200 videos for training, 100 videos for validation, and 670 videos for testing. We convert all captions to lower case and remove the punctuation, yielding a vocabulary size of V = 12594. We consider the RGB frames of videos as input.

Table 1: Results on BLEU-4, METEOR and CIDEr metrics compared to state-of-the-art results (Yu et al., 2016) on Youtube2Text.

Methods | BLEU-4 | METEOR | CIDEr
h-RNN (Yu et al., 2016) | 49.9 | 32.6 | 65.8
ASTAR | 51.74 | 36.39 | 72.18

Results are summarized in Table 1, and we outperform the previous state-of-the-art result on Youtube2Text. This demonstrates the importance of leveraging intermediate convolutional layer features. In addition, we achieve these results using a single model, without averaging over an ensemble of such models."}, {"section_index": "6", "section_name": "4 CONCLUSION AND FUTURE WORK", "section_text": "We have proposed a novel video captioning model that adaptively selects/weights the feature abstraction (CNN layer), as well as the location within a layer-dependent feature map. Our model achieves state-of-the-art video caption generation performance on the Youtube2Text benchmark."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "D. Chen and W. B. Dolan. Collecting highly parallel data for paraphrase evaluation. In ACL, 2011.

Z. Gan, C. Gan, X. He, Y. Pu, K. Tran, J. Gao, L. Carin, and L. Deng. Semantic compositional networks for visual captioning. In CVPR, 2017.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.

V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. Recurrent models of visual attention. In NIPS, 2014.

K. Papineni, S. Roukos, T. Ward, and W. Zhu. Bleu: a method for automatic evaluation of machine translation. Transactions of the Association for Computational Linguistics, 2002.

Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.

K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.

D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015.

R. Vedantam, Z. C. Lawrence, and D. Parikh. Cider: Consensus-based image description evaluation. In CVPR, 2015.

S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko. Sequence to sequence - video to text. In ICCV, 2015.

O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015.

K. Xu, J. L. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.

H. Yu, J. Wang, Z. Huang, Y. Yang, and W. Xu. Video paragraph captioning using hierarchical recurrent neural networks. In CVPR, 2016.

M. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014."}]
BJjn-Yixl
[{"section_index": "0", "section_name": "ATTENTIVE RECURRENT COMPARATORS", "section_text": "Pranav Shyam* & Ambedkar Dukkipati\nDepartment of Computer Science and Automation Indian Institute of Science. nd1\nAttentive Recurrent Comparators (ARCs) are a novel class of neural networks built with attention and recurrence that learn to estimate the similarity of a set of objects by cycling through them and making observations. The observations made in one object are conditioned on the observations made in all the other ob- jects. This allows ARCs to learn to focus on the salient aspects needed to ascertain similarity. Our simplistic model that does not use any convolutions performs com- parably to Deep Convolutional Siamese Networks on various visual tasks. How- ever using ARCs and convolutional feature extractors in conjunction produces a model that is significantly better than any other method and has superior general ization capabilities. On the Omniglot dataset, ARC based models achieve an error rate of 1.5% in the One-Shot classification task - a 2-3x reduction compared to the previous best models. This is also the first Deep Learning model to outper form humans (4.5%) and surpass the state of the art accuracy set by the highly specialized Hierarchical Bayesian Program Learning (HBPL) system (3.3%)."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Advancing Deep Learning systems to solve Artificial Intelligence tasks requires that models be ca pable of performing continual meta-learning[ (Lake et al.]2016), (Schaul & Schmidhuber]2010)]. But top-down hierarchical designs of models (Santoro et al.[2016) to perform such tasks are not very successful on real world data and there are many reasons for this. First, most datasets are generally not designed with such higher order tasks in mind, thus researchers either work with syn- thetic data or fabricate higher level tasks based on traditional datasets - both of which constrain their utility. Second, hierarchical or meta models suffer from reduced supervision during training due to their inherent design. Third, with our experiments we found that the foundational architectures like Memory Augmented Neural Networks are still in their infancy and not ripe enough to be utilized in complex hierarchical systems. Therefore, in this paper, we present an alternative way of bridging this gap by building models in a bottom-up fashion. Comparing two or more inputs and estimating their similarity is a primal task using which more sophisticated models can be designed - an idea that has been well exploited in traditional Machine Learning for long (Bellet et al.]2013). Using the modern developments of attention mechanisms and by combining it with recurrent neural networks. we first built better comparators called Attentive Recurrent Comparators (ARCs)[' Using ARCs as a foundational element, we were then able to build more complex models and achieve qualitatively better results on tasks like one-shot learning. Thus, this work is proof of concept for the bottom-up design approach that can be applied to almost any dataset.\nWhen a person is asked to compare two objects and estimate their similarity, the person does so. by repeatedly looking back and forth between the two objects. With each glimpse of an object, a. specific observation is made. These observations made in both objects are then cumulatively used. to come to a conclusion about their similarity. A crucial characteristic of this process is that new. 
observations are made conditioned on the previous context that has been investigated so far by the observer. The observation and its contextual location are based on intermediate deductions. These intermediate deductions are themselves based on the observations made so far in the two objects. A series of such guided observations and the entailing inferences are accumulated, and finally the judgement on similarity is made.

*Other Affiliation: Student at R V College of Engineering, Bengaluru
1 Code available at https://github.com/pranv/ARC"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: The abstract computational graph of a Binary ARC comparing two images. The controller, which is an RNN, primes the whole process. The two images are alternatively and repeatedly attended to, depicted by the carousel below. At each time-step the glimpse taken from the image is based on the attention parameters Ω_t, which are calculated using the previous state of the RNN, h_{t-1}, by projecting it with W_g. The glimpse obtained, G_t, and the previous state h_{t-1} are together used to update the state of the controller to h_t. The vertical dotted lines demarcate the time-steps.

In stark contrast to this, current similarity estimating systems in Deep Learning are analogues of the Siamese similarity learning system (Bromley et al., 1993). In this system, a fixed set of features is detected in both the objects. Detection of features is independent of the features present in the other object. The two objects are compared based on the mutual agreement in the detected features. More concretely, comparison between two objects in this system consists of measuring the distance between their vector embeddings. A neural network defines the mapping from the object to the corresponding embedding vector in target space. This neural network is trained to extract the most salient features from the object for the specific task in hand.

There is a major underlying difference between the human approach discussed above and the siamese approach to the problem. In the human way, the information from the two objects is fused from the very beginning and this combined information primes the subsequent steps in comparison. There are multiple lookups on each of the objects, and each of these lookups is conditioned on the observations of both the objects so far. In the siamese way, when the embeddings in the target space are compared, the information fuses mostly at an abstract level and only in the last stage.

We were interested to see the utility of the human way of comparing objects. For this, we used the modern tools of attention and recurrence to build an end-to-end differentiable model that can learn to compare objects, called Attentive Recurrent Comparators (ARCs).
ARCs judge the similarity of objects in a way similar to how people do, as discussed above.

We tested ARCs across many visual tasks and compared them against strong baselines of prevalent methods. ARCs which did not use any convolutions showed superior performance compared to Deep Convolutional Siamese Neural Networks on challenging tasks. Though Dense ARCs are as capable as ConvNets, a combination of both ARCs and convolutions produces superior models (hereafter referred to as ConvARCs), capable of better generalization and performance. In the task of estimating the similarity of two characters from the Omniglot dataset, for example, ARCs and Deep Convnets both achieve about 93.4% accuracy, whereas ConvARCs achieve 96.10% accuracy.

Further, as discussed above, similarity estimation is a generic and primal task in many other higher-level cognitive tasks. Evaluating our model on these higher-level tasks also lets us explore the generalisation capacity of ARCs. In this work, we study the performance of models designed to perform One Shot Learning with ARCs as building blocks. On the Omniglot one-shot classification task, our model achieved 98.5% accuracy, significantly surpassing the current state of the art set by Deep Learning methods or other systems.

Fundamentally, the performance of ARCs shows the value of early fusion of information across the entire context of the task. Further, it also strengthens the view that attention and recurrence together can be as good as convolutions in some cases.

The ARC model can be directly derived by distilling the vital aspects from the human way discussed in Section 1. In the following paragraphs we describe the ARC model for the binary image case, where there are two images whose similarity has to be judged. It is trivial to generalise it to more objects or other modalities. See Figure 1 for a visual depiction of the model.

The model operates on the given two images over the span of an episode. The images are given at the beginning of the episode and the ARC is expected to emit a token of similarity at the end of this episode. Given two images {x_a, x_b}, the model repeatedly cycles through both, attending to only one image at a time step. Thus the sequence of presentations is x_a, x_b, x_a, x_b, ... and so on, for a finite number of presentations of each image. An episode is nothing more than a collection of time-steps, with an action being taken in each time-step.

For time step t the input image presented is given by

$$I_t \leftarrow x_a \ \text{if} \ t \bmod 2 = 0 \ \text{else} \ x_b$$

The model functionally consists of a recurrent core and an attention mechanism. During the span of the episode, the model iteratively focuses its attention on the current input. At each time step of the episode, the model attends to only one input, but over the course of many time steps it would have observed many aspects of all the inputs. The observations are made by the model at each time step by directing its attention to a region of interest in each input. Since the core of the model is a recurrent neural network, this round-robin-like cyclic presentation of inputs allows for early fusion of information from all the inputs. This makes the model aware of the context in which it is operating. Consequently, this provides feedback to the attention mechanism to attend on the relevant and crucial parts of each sample, considering the context of all the inputs and observations made so far.

If there are n inputs and we allow for g glimpses of each input, then the episode length L is ng. The hidden state of the RNN controller at the final time step, h_L, can then be used for subsequent processing.

The attention mechanism focuses on a specific region of the image I_t to get the glimpse G_t:

$$G_t \leftarrow \mathrm{attend}(I_t, \Omega_t), \qquad \Omega_t = W_g h_{t-1}$$

attend(·) is the attention mechanism described in the subsection below, which acts on image I_t. Ω_t are the attention glimpse parameters, which specify the location and size of the attention window. At each step, we use the previous hidden state of the RNN core, h_{t-1}, to compute Ω_t. W_g is the projection matrix that maps the hidden state to the required number of attention parameters.

Next, both the glimpse and the previous hidden state are combined to form the next hidden state:

$$h_t \leftarrow \mathrm{RNN}(G_t, h_{t-1})$$

The above 4 equations describe the Binary ARC. We arrived at the iterative cycling-of-input paradigm after trying out many approaches to attend to multiple images at once. Iterative cycling turned out to be more computationally efficient, scalable and statistically more consistent than the other approaches we tested. Note that I_t alternates between x_a and x_b with t, while the rest of the equations are exactly the same for all time steps.
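The four equations above translate into a short loop. The sketch below is our PyTorch-style illustration of a Binary ARC (class and argument names are ours; `attend` is the attention function of Sec. 2.1, passed in as a callable):

```python
import torch
import torch.nn as nn

class BinaryARC(nn.Module):
    """Minimal sketch of the Binary ARC episode (names are ours)."""
    def __init__(self, glimpse_size=4, hidden_dim=512, glimpses=8):
        super().__init__()
        self.glimpses = glimpses
        self.rnn = nn.LSTMCell(glimpse_size * glimpse_size, hidden_dim)
        self.Wg = nn.Linear(hidden_dim, 3)   # -> (x_hat, y_hat, delta_hat)

    def forward(self, xa, xb, attend):
        h = xa.new_zeros(xa.size(0), self.rnn.hidden_size)
        c = torch.zeros_like(h)
        for t in range(2 * self.glimpses):   # episode length L = n * g
            I_t = xa if t % 2 == 0 else xb   # alternate between the images
            omega = self.Wg(h)               # attention parameters
            G_t = attend(I_t, omega).flatten(1)
            h, c = self.rnn(G_t, (h, c))     # early fusion in the controller
        return h                             # final state, used downstream
```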
"}, {"section_index": "3", "section_name": "2.1 ATTENTION MECHANISM", "section_text": "The attention mechanism is based on DRAW (Gregor et al., 2015), and is zoomable and differentiable. The attention window is defined by an N x N 2D grid of Cauchy kernels. We found that the heavy tail of the Cauchy curve aids in alleviating some of the vanishing gradient issues, and it sped up training.

The grid's location and size is defined based on the glimpse parameters. The N x N grid of kernels is placed on the S x S image with the central Cauchy kernel located at (x, y). The distance between two Cauchy kernels, either in the vertical or horizontal direction, is δ. In other words, the elemental square of the 2D grid is δ x δ in size. The glimpse parameter set Ω_t is unpacked to get x̂, ŷ and δ̂; x, y and δ are computed from them and S using the following transforms:

$$x = \frac{(\hat{x}+1)\,S}{2}, \qquad y = \frac{(\hat{y}+1)\,S}{2}, \qquad \delta = \frac{S-1}{N-1}\, e^{1-2|\hat{\delta}|}$$

The location of the i-th row, j-th column's Cauchy kernel, in terms of the pixel coordinates of the image, is given by:

$$\mu_X^i = x + \Big(i - \frac{N+1}{2}\Big)\delta \qquad \text{and} \qquad \mu_Y^j = y + \Big(j - \frac{N+1}{2}\Big)\delta$$

The horizontal and vertical filterbank matrices are then calculated as:

$$F_X[i,a] = \frac{1}{Z_X}\,\frac{1}{1 + \big(\frac{a-\mu_X^i}{\gamma}\big)^2} \qquad \text{and} \qquad F_Y[j,b] = \frac{1}{Z_Y}\,\frac{1}{1 + \big(\frac{b-\mu_Y^j}{\gamma}\big)^2}$$

where Z_X and Z_Y are normalization constants and γ is the scale of the Cauchy kernels. The glimpse is obtained as

$$\mathrm{attend}(I_t, \Omega_t) = F_Y I_t F_X^{\top}$$

attend thus gets an N x N patch of the image, which is flattened and used in the model.

As seen in the experimental section below, while simple attention over raw images performs as well as Deep ResNets, we found large improvements by using convolutional feature extractors. Applying several layers of convolution produces a 3D solid of activations (or a stack of 2D feature maps). Attention over this corresponds to applying the same 2D attention over the entire depth of the 3D feature map and outputting the flattened glimpse.
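Putting the grid, kernel and filterbank equations together, attend(·) can be sketched as below (our illustration; the Cauchy scale `gamma` is treated as an assumed free parameter, since the text does not fix it):

```python
import torch

def attend(image, omega, N=4, gamma=1.0):
    """Cauchy-kernel attention: returns F_Y I F_X^T glimpses.

    image: (B, S, S); omega: (B, 3) holding (x_hat, y_hat, delta_hat).
    Output shape is (B, N, N).
    """
    B, S, _ = image.shape
    x = (omega[:, 0:1] + 1) * S / 2
    y = (omega[:, 1:2] + 1) * S / 2
    delta = (S - 1) / (N - 1) * torch.exp(1 - 2 * omega[:, 2:3].abs())
    idx = torch.arange(N, dtype=image.dtype, device=image.device)
    mu_x = x + (idx - (N + 1) / 2) * delta          # (B, N) kernel centers
    mu_y = y + (idx - (N + 1) / 2) * delta
    pos = torch.arange(S, dtype=image.dtype, device=image.device)
    Fx = 1 / (1 + ((pos[None, None, :] - mu_x[:, :, None]) / gamma) ** 2)
    Fy = 1 / (1 + ((pos[None, None, :] - mu_y[:, :, None]) / gamma) ** 2)
    Fx = Fx / Fx.sum(-1, keepdim=True)              # normalize each kernel row
    Fy = Fy / Fy.sum(-1, keepdim=True)
    return torch.einsum('bns,bst,bmt->bnm', Fy, image, Fx)
```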
Understanding the empirical functioning of an ARC and identifying the factors affecting its performance requires both qualitative and quantitative studies. Qualitative analysis tells us what the model is doing when it is comparing two images and how this relates to human ways of comparison. Quantitative analysis shows the variation in performance when certain aspects of the model are changed, and thus provides an estimate of their importance. For the analysis presented below, we use the simple ARC model (without convolutions) described in Section 2 above, trained for the verification task on the Omniglot dataset. Data samples in the Omniglot dataset have an understandable structure, with characters composed of simple strokes drawn on a clean canvas. The dataset is also very diverse, which allows us to study various characteristics of our model under a wide range of conditions. Since our main result in the paper is also on the Omniglot dataset (Sections 4 and 5), we train the model used for this analysis on Omniglot as well.
(a) It can be seen that the two characters look very similar in their stroke pattern and differ only in their looping structure. ARC has learnt to focus on these crucial aspects.
(b) ARC parses over the characters in a left-to-right, top-to-bottom fashion. Finally, it ends up focussing on the region where the first character has a prolonged downward stroke, whereas the second one does not.
Figure 2: Attention windows over time when comparing two Omniglot characters. The top row has the first image and the bottom row the second. Each column represents a glimpse step. (a) Comparing two dissimilar characters and (b) comparing two similar characters.
The verification task is a binary classification problem wherein the model is trained to predict whether the two drawings of characters provided belong to the same character or not (see Section 4 for more details). The final hidden state of the RNN controller, h_L, is given to a single logistic neuron to estimate the probability of similarity. The whole setup is trained end to end with back-propagation and SGD. The particular model under consideration had an LSTM controller (Hochreiter & Schmidhuber, 1997) with forget gates (Gers et al., 2000). The number of glimpses per image was fixed to 8, so the total number of recurrent steps is 16. 32 × 32 greyscale images of characters were used, and the attention glimpse resolution is 4 × 4.
The following inferences were made after studying several cases of ARC's operation (see Figure 2 for an example):
1. The observations in one image are definitely being conditioned on the observations in the other image. This can be seen in Figures 2a and 2b.
2. The ARC seems to have learnt a fairly regular left-to-right parsing strategy, during which the attention window gradually reduces in size. This is quite similar to strategies found in other sequential attentive models like DRAW (Gregor et al., 2015).
3. Deviation from such regular ordered parsing occurs if the model finds some interesting feature in either character. This results in attention being fixated on that particular region of the character for a few subsequent glimpses.
4. There is no strict chronological coordination or correspondence between the attended regions of the two images. While instances of ARC focussing on the same aspect/stroke of the two characters were common, there were plenty more instances wherein the ARC attended to different aspects/strokes in each image during an interval. We hypothesise that the RNN controller could be utilizing its turns of glimpsing at an image to observe other aspects which are not of immediate consequence.
5. We also frequently encountered cases wherein the attention window, after parsing as described in point 2, would end up focusing on some blank, stroke-less region, as if it had stopped looking at the sample. We hypothesize that the model is preferring to utilize its recurrent transitions and not to be disturbed by any input stimuli.
"}, {"section_index": "4", "section_name": "3.2 QUANTITATIVE ANALYSIS", "section_text": "
We performed a simple yet very insightful ablation study to understand ARC's dynamics. ARC accumulates information about both input images through a series of attentive observations. We trained 8 separate binary classifiers to classify image pairs as similar or not, based on the hidden states of the LSTM controller at the corresponding even time steps. The performance of these binary classifiers correlates with the information contained in the hidden states; it is reported in Table 1. Since the ARC has an attention window of only 4 × 4 pixels, it can barely see anything in the first time step, where its attention is spread throughout the whole image. With more glimpses, finer observations bring more precise information into the ARC, and the recurrent transitions make use of this knowledge, leading to higher accuracies. We also used the 8 binary classifiers to study how the model's confidence grows with more glimpses; one good example is provided in Figure 3.
Table 1: Glimpses per image vs classification accuracy of ARC.
  Glimpses   Accuracy
  1          58.2%
  2          65.0%
  4          80.8%
  6          89.25%
  8          92.08%
(a) ARC is very unsure of similarity at the beginning. But at the 5th glimpse (4th column), the attention goes over the region where there are strokes in the first image and no strokes in the second one, resulting in a drop of the score.
(b) Initially ARC is unsure, or thinks that the characters are similar. But towards the end, at the 6th glimpse (5th column), the model focusses on the region where the connecting strokes are different. The similarity score drops and, with more "ponder", it falls significantly.
Figure 3: Attention windows over time and instantaneous predictions from independent binary classifiers. The first glimpse is omitted as it covers the whole image. In the graph: x-axis, glimpse number; y-axis, similarity score. The red line is the decision threshold, above which the images are considered to be similar. Both of the cases above are examples of a dissimilar pair.
"}, {"section_index": "5", "section_name": "4 SIMILARITY LEARNING", "section_text": "
Verification is a generic and common task in Machine Learning. The verification task essentially requires models that can predict whether two inputs are the same or different, for some notion of sameness (such as unique facial identity, objects of the same class, etc.). Specifically, here we restrict ourselves to the task of estimating the similarity of a given pair of images. When given two images, the models are required to output a single logistic value, which is expected to be 1 for very similar inputs and 0 for very dissimilar inputs. We compare our ARC model with several baselines and report performance on two challenging datasets.
"}, {"section_index": "6", "section_name": "4.1.1 OMNIGLOT", "section_text": "
The dataset is described in detail in the next section, which is on one shot classification; the verification task acts as a precursor to that more sophisticated task. We use 32 × 32 images, and similar/dissimilar pairs of character drawings are randomly chosen only within an alphabet to make the task more challenging. Of the 50 alphabets provided in the dataset, 30 were used for training, 10 for validation and the last 10 for testing.
We consider strong convolutional baselines, which have been shown time and again to excel at such visual tasks. In particular we use Wide ResNets (WRNs) (Zagoruyko & Komodakis, 2016), which are the current state-of-the-art models in image classification. Independent nets were tuned for each dataset. Hyper-parameters were set to reasonable values for all our ARC models, and no hyper-parameter tuning of any kind was employed. For the Omniglot dataset, we also include the result from Koch et al. We used moderate data augmentation consisting of translation, flipping, rotation and shearing, which we found to be critical for training ARC models.
"}, {"section_index": "7", "section_name": "4.3 RESULTS", "section_text": "
The verification results on Omniglot are given in Table 2. Our simple ARC model, without any convolutional layers, obtains performance matching an AlexNet-style 6-layer deep ConvNet with millions of parameters. Using convolutional feature extractors, ARCs outperform the Wide-ResNet-based Siamese ConvNet baselines, even the ones containing an order of magnitude more parameters.
Table 2: Performance of ARC vs conventional methods on the verification task. All values are accuracies on the test set. For Wide ResNets, suffixes specify the depth and width; for example, (d=60, w=4) means a ResNet that is 60 layers deep with a width multiplier of 4 in each residual block.
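As a companion sketch, the verification setup described above (a single logistic neuron on the final controller state, trained with binary cross-entropy) can be written as follows. This assumes the arc_episode routine from the earlier sketch; the hidden size and the optimiser settings are placeholders, not the paper's exact values.

```python
# Verification head on top of the ARC episode (hedged sketch; assumes
# `arc_episode` and `controller`/`W_g` from the earlier snippet).
import torch
import torch.nn as nn
import torch.nn.functional as F

similarity = nn.Linear(512, 1)          # single logistic neuron on h_L

def verification_loss(x_a, x_b, same):
    # same: (B,) float tensor, 1.0 for same-character pairs, else 0.0
    h_L = arc_episode(x_a, x_b, glimpses=8)     # 16 recurrent steps
    logit = similarity(h_L).squeeze(-1)
    return F.binary_cross_entropy_with_logits(logit, same)

# In practice the optimiser would also cover the controller and W_g.
opt = torch.optim.SGD(similarity.parameters(), lr=0.1)  # placeholder lr
```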
One shot learning requires Machine Learning models to be at the apotheosis of data efficiency. In the case of classification, only a single example of each class is given and the model is expected to generalise to new samples. A classic example is of a human kid learning about the animal giraffe (Vinyals et al., 2016). The kid does not need to see thousands of images of a giraffe to learn to detect it. Rather, from just a single example, the kid can not only recognize it at a future point but, going further, can also speculate on its other characteristics. While humans excel at this task, current Deep Learning systems are at the opposite end of the spectrum, where they are trained on millions of samples to achieve the kind of results that they are well known for. With ARCs we have developed a generic method for comparing objects, and we have shown that our model generalizes extremely well. So we decided to test ARC on the challenging Omniglot dataset.
Omniglot is a dataset by Lake et al. (2015) that was specially designed to compare and contrast the learning abilities of humans and machines. The dataset contains handwritten characters of 50 of the world's languages/alphabets. Though there are 1623 characters, there are only 20 samples for each, drawn by 20 individuals. This places it in a diagonally opposite position to MNIST or ImageNet. One Shot Classification on this dataset is a very challenging task, as most Deep Learning systems do not work well in such extreme conditions. Lake et al. (2015) developed a dedicated system for such rapid knowledge acquisition called Hierarchical Bayesian Program Learning, which surpasses human performance and is the current state of the art among all methods.
The dataset is divided into a background set and an evaluation set. The background set contains 30 alphabets (964 characters), and only this set should be used to perform all learning (e.g. hyper-parameter inference or feature learning). The remaining 20 alphabets are for pure evaluation purposes only. Each character is a 105 × 105 image.
A one shot classification task episode is as follows: from a randomly chosen alphabet, 20 characters are chosen, which become the support set. One character among these 20 becomes the test character. Two drawers are chosen, one for the support set and one for the test character. The task is to match the test drawing to the correct character's drawing in the support set. Assigning the image to one of the 20 given characters results in a 20-way, 1-shot classification task.
"}, {"section_index": "8", "section_name": "5.2.1 NAIVE ARC MODEL", "section_text": "
This is a trivial extension of the ARC used for verification to this task. The test image is compared against all 20 images from the support set and is matched to the character with maximum similarity. This is done 20 times, once for each character in the first set.
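The Naive ARC decision rule just described is an argmax over 20 independent pairwise similarity scores. A minimal sketch, with similarity_score standing in for the trained verification model (ARC episode plus logistic neuron), is:

```python
# Naive ARC one-shot matching (illustrative sketch).
import numpy as np

def naive_arc_one_shot(test_img, support_imgs, similarity_score):
    # support_imgs: list of 20 images, one per candidate character
    scores = [similarity_score(test_img, s) for s in support_imgs]
    return int(np.argmax(scores))   # index of the predicted character
```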
"}, {"section_index": "9", "section_name": "5.2.2 FULL CONTEXT ARC", "section_text": "
Our whole hypothesis in this work has been about the value of providing the full context to the model, and we have shown that models which are aware of the context of operation are better than those that are not. While the Naive ARC model is simple and efficient, it does not incorporate the whole context in which our model is expected to make the decision of similarity. When the test character is compared to the 20 characters of the support set, the comparisons are all done independently. That is, the model is not aware of the available options for matching, so it assigns a similarity score to each pair independently.
It is highly desirable to have a 20-way ARC, where each observation is conditioned on all the images. Unfortunately, such a model is not practical: the recurrent controller has memory limitations in its state, and scaling up the memory incurs a huge parameter burden. So instead, we use a hierarchical setup that decomposes the comparisons into two levels: first a local pairwise comparison, and second a global comparison. We found that this reduces the information that has to be crammed into the controller state, while still providing sufficient context.
As with the Naive method, we compare one image from set A with one from set B in pairs. But instead of emitting a similarity score immediately, we collect the comparison embedding of each comparison, i.e., the final hidden state of the controller for that pair. The set of comparison embeddings for the test image is then processed by a Bi-Directional LSTM layer. This merges the information from all comparisons, thus providing the necessary context before score emission. This is also the method used in Matching Networks (Vinyals et al., 2016):
c_j = [LSTM_fwd(e_j); LSTM_bwd(e_j)]   ∀ j ∈ [1, 20]
Each embedding is mapped to a single score s_j = f(c_j), where f(·) is an affine transform followed by a non-linearity. The final output is the similarity normalized with respect to all similarity scores:
p_j = softmax(s)_j   ∀ j ∈ [1, 20]
This whole process makes sure that we adhere to the fundamental principle of deep learning, which is to optimise objectives that directly reflect the task. The normalisation allows for the expression of relative similarity rather than absolute similarity.
We compare the two models discussed above with other methods in the literature, from the simplest baseline of k-Nearest Neighbours to the latest meta-learning methods.
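Below is a hedged PyTorch sketch of the second, global stage: a Bi-Directional LSTM over the 20 comparison embeddings, followed by the per-candidate score s_j = f(c_j) and the softmax normalisation. The embedding width, the hidden size and the exact non-linearity inside f(·) are our assumptions.

```python
# Full Context ARC scoring stage (sketch under stated assumptions).
import torch
import torch.nn as nn

E, H = 512, 256                                      # assumed widths
bilstm = nn.LSTM(E, H, bidirectional=True, batch_first=True)
f = nn.Sequential(nn.Linear(2 * H, 1), nn.Tanh())    # s_j = f(c_j)

def full_context_scores(embeddings):
    # embeddings: (B, 20, E) comparison embeddings e_1..e_20
    c, _ = bilstm(embeddings)                        # c_j: (B, 20, 2H)
    s = f(c).squeeze(-1)                             # scores: (B, 20)
    return torch.softmax(s, dim=-1)                  # p_j, sums to 1
```

The softmax over all 20 scores is what lets the model express a relative, context-aware similarity rather than 20 independent absolute ones.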
The training and evaluation practices in the literature are not consistent.
Many recent papers, like Matching Networks (Vinyals et al., 2016) and MANNs (Santoro et al., 2016), have used 1200 characters for the background set (instead of the 964 specified by Lake et al. (2015)). The remaining 423 characters are used for testing. Most importantly, the characters sampled for both training and evaluation are drawn across all the alphabets in the training set.
"}, {"section_index": "10", "section_name": "5.3.2 WITHIN ALPHABETS", "section_text": "
This corresponds to the standard Omniglot setting, where characters are sampled within an alphabet and only the 30 background alphabets are used for training and validation.
The across-alphabet task is much simpler, as it is easy to distinguish characters belonging to different alphabets, compared to distinguishing characters belonging to the same alphabet. Further, across-alphabet methods use a lot more data, which is particularly advantageous for Deep Learning methods.
There are large variations in the resolution of the images used as well. The Deep Siamese Network of Koch et al. uses 105×105 images and is thus not directly comparable to our model, but we include it as it is the current best result using deep neural nets. The performance of MANNs in this standard setup is interpreted from a graph in the paper, as the authors did not report it. It should also be noted that HBPL incorporates human stroke data into the model. Lake et al. estimate human performance to be at 95.5%.
Table 3: One Shot Classification accuracies of various methods and our ARC models.
"}, {"section_index": "11", "section_name": "5.4 RESULTS", "section_text": "
Results are presented in Table 3. Our ARC models outperform all previous methods under both testing protocols and establish the corresponding state-of-the-art results.
Deep Neural Networks (Schmidhuber, 2015; LeCun et al., 2015) are very complex parametrised functions which can be adapted to the required behaviour by specifying a suitable objective function. Our overall model is a simple combination of an attention mechanism and recurrent neural networks (RNNs). We test our model by analysing its performance on similarity learning, and we test its generalisation ability by using it in a model built for the challenging task of one shot classification on hand-written character symbols.
It is known that attention brings selectivity to the processing of information while reducing the processing load (Desimone & Duncan, 1995). Attention and (recurrent) neural networks were combined in Schmidhuber & Huber (1991) to learn fovea trajectories. Later, attention was used in conjunction with RBMs to learn what and where to attend, in Larochelle & Hinton (2010) and Denil et al. (2012). A hard attention mechanism based on reinforcement learning was used in Mnih et al. (2014) and extended to multiple objects in Ba et al. (2014); both of these models showed that the computation required at inference is significantly less than for highly parallel convolutional networks, while still achieving good performance. Soft, differentiable attention mechanisms were used in Graves (2013). A specialised form of location-based soft attention, well suited for 2D images, was developed for the DRAW architecture (Gregor et al., 2015), and this forms the basis of our attention mechanism in ARC.
A survey of the methods for, and importance of, measuring the similarity of samples in Machine Learning is presented in Bellet et al. (2013). Among deep learning methods, the most popular architecture family is that of Siamese Networks (Bromley et al., 1993). The energy-based derivation is presented in Chopra et al. (2005), and since then they have been used across a wide range of modalities: in vision (Zagoruyko & Komodakis, 2015; Bertinetto et al., 2016), for face recognition and verification (Taigman et al., 2014), and in Natural Language Processing (Lu & Li, 2013; Hu et al., 2014). Recently, Triplet Losses (Hoffer & Ailon, 2015) have been used to achieve higher performance; they are similar to our Ternary ARC model at an abstract level.
A Bayesian framework for one shot visual recognition was presented in Fe-Fei et al. (2003). Lake et al. (2015) extensively study One Shot Learning and present a novel probabilistic framework called Hierarchical Bayesian Program Learning (HBPL) for rapid learning.
They have also released the Omniglot dataset, which has become a testing ground for One Shot Learning techniques. Recently, many Deep Learning methods have been developed for one shot learning: Koch et al. use Deep Convolutional Siamese Networks for one shot classification. Matching Networks (Vinyals et al., 2016) and Memory Augmented Neural Networks (Santoro et al., 2016) are other approaches that perform continual or meta learning in the low-data regime. All of these models except HBPL have inferior one shot classification performance compared to humans on the Omniglot dataset.
"}, {"section_index": "12", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "
We presented a model that uses attention and recurrence to cycle through a set of images repeatedly and estimate their similarity. We showed that this model is not only viable but also much better, in terms of performance and generalization, than the siamese neural networks in wide use today. Our main result is in the task of One Shot classification on the Omniglot dataset, where we achieved state-of-the-art performance, surpassing HBPL and human performance.
One potential downside of this model is that, due to the sequential execution of the recurrent core and by the very design of the model, it might be more computationally expensive than a distance-metric method. But we believe that, with advancing hardware speeds, such costs will be outweighed by the benefits of ARCs.
Though presented in the context of images, ARCs can be used in any modality. There are innumerable ways to extend ARCs: better attention mechanisms, higher resolution images, different datasets, hyper-parameter tuning and more complicated controllers are simple changes that could be employed to achieve better performance. More interesting extensions would involve developing more complex architectures using this bottom-up approach to solve even more challenging AI tasks.
"}, {"section_index": "13", "section_name": "ACKNOWLEDGEMENTS", "section_text": "
We would like to thank all the members of the Statistics and Machine Learning Lab at the Indian Institute of Science for their support and feedback. We would like to specifically thank Akshay Mehrotra for his extensive help with everything from the implementation to discussing results. We would also like to thank Siddharth Agrawal and Gaurav Pandey for their helpful feedback throughout the process. We would like to thank Soumith Chintala for his feedback on this manuscript and the idea.
"}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "
Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.
Aurelien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
Luca Bertinetto, Jack Valmadre, Joao F. Henriques, Andrea Vedaldi, and Philip H. S. Torr. Fully-convolutional siamese networks for object tracking. arXiv preprint arXiv:1606.09549, 2016.
Jane Bromley, James W. Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(4):669-688, 1993.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.
Misha Denil, Loris Bazzani, Hugo Larochelle, and Nando de Freitas. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151-2184, 2012.
Robert Desimone and John Duncan. Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1):193-222, 1995.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pp. 84-92. Springer, 2015.
Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In Advances in Neural Information Processing Systems, pp. 1243-1251, 2010.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
Zhengdong Lu and Hang Li. A deep architecture for matching short texts. In Advances in Neural Information Processing Systems, pp. 1367-1375, 2013.
Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
Tom Schaul and Jurgen Schmidhuber. Metalearning. Scholarpedia, 5(6):4650, 2010.
Juergen Schmidhuber and Rudolf Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(1-2):125-134, 1991."}]
HyQWFOVge
[{"section_index": "0", "section_name": "SIGNIFICANCE OF SOFTMAX-BASED FEATURES OVER METRIC LEARNING-BASED FEATURES", "section_text": "Shota Horiguchi, Daiki Ikami & Kiyoharu Aizawa\nDepartment of Information and Communication Engineerin The University of Tokyo. 7-3-1Ho Tokvo.JP\nThe extraction of useful deep features is important for many computer vision tasks. Deep features extracted from classification networks have proved to per- form well in those tasks. To obtain features of greater usefulness, end-to-end distance metric learning (DML) has been applied to train the feature extractor di- rectly. End-to-end DML approaches such as Magnet Loss and lifted structured feature embedding show state-of-the-art performance in several image recogni- tion tasks. However, in these DML studies, there were no equitable comparisons between features extracted from a DML-based network and those from a softmax- based network. In this paper, by presenting objective comparisons between these two approaches under the same network architecture, we show that the softmax- based features are markedly better than the state-of-the-art DML features for tasks such as fine-grained recognition, attribute estimation, clustering, and retrieval."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recent developments in deep convolutional neural networks have made it possible to classify many classes of images with high accuracy. It has also been shown that such classification network work well as feature extractors. Features extracted from classification networks show excellent performance in image classification (Donahue et al.||2014), detection, and retrieval (Razavian et al. 2014} Liu et al.] 2015), even when they have been trained to classify 1000 classes of the ImageNe dataset (Russakovsky et al.[|2015). It has also been shown that fine-tuning for target domains further improves the features' performance (Wan et al.2014fBabenko et al.2014).\nOn the other hand, distance metric learning (DML) approaches have recently attracted considerable. attention. These obtain a feature space in which distance corresponds to class similarity; it is not a byproduct of the classification network. End-to-end distance metric learning is a typical approach. to constructing a feature extractor using convolutional neural networks and has been the focus ol numerous studies (Bell & Bala]2015)|Schroff et al.[|2015). Some DML methods have been reportec to show state-of-the-art performance in fine-grained classification (Rippel et al.2016) and clustering. and retrieval (Song et al.2016) contexts.\nHowever, there have been few experiments comparing softmax-based feature extraction with DML- based feature extraction under the same network architecture or with adequate fine-tuning. An analysis providing a true comparison of DML features and softmax-based features is long overdue As we explain more fully in the following section, we contend that there is no reason that DML, which learns feature embedding explicitly, should outperform a softmax-based feature extractor.\nFig.1|depicts the feature vectors extracted from a softmax-based classification network and a metric. learning-based network. We used LeNet architecture for both networks, and trained on the MNIST dataset (LeCun et al.f 1998). For DML, we used the contrastive loss function (Hadsell et al.||2006 to map images in two-dimensional space. 
For softmax-based classification, we added a two- or three-dimensional fully connected layer before the output layer for visualization. DML succeeds in learning feature embedding (Fig. 1a). Softmax-based classification networks can also achieve a result very similar to that obtained by DML: images are located near one another if they belong to the same class and far apart otherwise (Fig. 1b, Fig. 1c).
Figure 1: Depiction of the MNIST dataset. (a) Two-dimensional features obtained by a siamese network. (b) Two-dimensional features extracted from a softmax-based classifier; these features are well separated by angle but not by Euclidean norm. (c) Three-dimensional features extracted from a softmax-based classifier; we normalized these to have unit L2 norm and depict them in an azimuth-elevation coordinate system. The three-dimensional features are well separated by their classes. (Panels: (a) Siamese (dim = 2); (b) Softmax (dim = 2); (c) Softmax (dim = 3) + L2 normalization.)
The contributions of this paper are as follows:
- We show methods to exploit the ability of deep features extracted from softmax-based networks, such as normalization and proper dimensionality reduction. This is not technically novel, but it is useful for fair comparison between image representations.
- We demonstrate that deep features extracted from softmax-based classification networks show markedly better results on fine-grained classification, attribute estimation, clustering, and retrieval tasks than those from DML-based networks on almost all datasets.
- We show that DML-based methods offer performance competitive with softmax-based methods only when the training dataset consists of a very small number of samples per class.
"}, {"section_index": "3", "section_name": "2.1 PREVIOUS WORK", "section_text": "
Convolutional neural networks have demonstrated great potential for highly accurate image recognition (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016). It has been shown that features extracted from classification networks can be repurposed as a good feature representation for novel tasks (Donahue et al., 2014; Razavian et al., 2014; Qian et al., 2015), even if the network was trained on ImageNet (Russakovsky et al., 2015). For obtaining better feature representations, fine-tuning is also effective (Babenko et al., 2014).
"}, {"section_index": "4", "section_name": "2.1.2 DEEP DISTANCE METRIC LEARNING", "section_text": "
Distance metric learning (DML), which learns a distance metric, has been widely studied (Bromley et al., 1994; Chopra et al., 2005; Chechik et al., 2010; Qian et al., 2015). Recent studies have focused on end-to-end deep distance metric learning (Bell & Bala, 2015; Schroff et al., 2015; Li et al., 2015; Rippel et al., 2016; Song et al., 2016). However, in most studies, comparisons of end-to-end DML with features extracted from classification networks have not been performed using architectures and conditions suited to a true comparison of performance. Bell & Bala (2015) compared classification networks and siamese networks, but they used coarse class labels for the classification networks and fine labels for the siamese networks; thus, it was left unclear whether siamese networks are better for feature-embedding learning than classification networks. Schroff et al. (2015) used triplet loss for deep metric learning in their FaceNet, which showed performance that was state of the art at the time, but their network was deeper than that of the previous method (Taigman et al., 2014); thus, triplet loss might not have been the only reason for the performance improvement, and the contribution from adopting triplet loss remains uncertain. Rippel et al. (2016) used the Magnet Loss function for their DML. They tried softmax-based features as a comparison, but their reported results are unfairly low compared with ours, as shown in Sections 4.2 and 4.3. Song et al. (2016) used lifted structured feature embedding, another state-of-the-art DML method; however, they only compared their method with a softmax-based classification network pretrained on ImageNet (Russakovsky et al., 2015) and did not compare it with a fine-tuned network.
2.2 DIFFERENCES BETWEEN SOFTMAX-BASED CLASSIFICATION AND METRIC LEARNING
For classification, the softmax function (Eq. 1) is typically used:
p_c = exp(u_c) / Σ_{j=1}^{C} exp(u_j)    (1)
where p_c denotes the probability that the vector u belongs to class c. The loss of the softmax function is defined by the cross-entropy:
E = - Σ_{c=1}^{C} q_c log p_c    (2)
where q is a one-hot encoding of the correct class of u. To minimize the cross-entropy loss, networks are trained to make the output vector u close to its corresponding one-hot vector. It is important to note that the target vectors (the correct outputs of the network) are fixed during the entire training (Fig. 2).
On the other hand, DML methods use distances between samples. They do not use the values of the labels; rather, they ascertain whether the labels are the same between target samples. For example, contrastive loss (Hadsell et al., 2006) considers the distance d between a pair of samples:
E = (1/2) [ q d^2 + (1 - q) max(α - d, 0)^2 ]    (3)
where α represents the margin and q ∈ {0, 1} indicates whether the images in a pair are in the same class (1) or not (0). Recent studies (Schroff et al., 2015; Rippel et al., 2016; Song et al., 2016) use pairwise distances between three or more images at the same time for fast convergence and efficient calculation. However, these methods have some drawbacks. DML methods sometimes require complicated operations such as hard negative sampling (Schroff et al., 2015; Rippel et al., 2016) and k-means clustering at every epoch (Rippel et al., 2016). For DML, in contrast to optimization of the softmax cross-entropy loss, the optimization targets are not always consistent during training, even if all possible distances within the mini-batch are considered. Thus, DML optimization converges very slowly and is not stable. An additional problem is that methods for sampling positive and negative pairs have not been established.
Figure 2: Illustration of the learning processes for a softmax-based classification network and a siamese-based DML network. For softmax, the gradient is defined by the distance between a sample and a fixed one-hot vector; for siamese, by the distance between samples.
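The contrast between the two objectives in Eqs. (1)-(3) can be made concrete with a short NumPy sketch; the margin value and the example vectors below are arbitrary illustrations, not values from the paper.

```python
# Cross-entropy against a fixed one-hot target (Eqs. 1-2) versus
# contrastive loss between a pair of samples (Eq. 3).
import numpy as np

def softmax_cross_entropy(u, q):
    p = np.exp(u - u.max()); p /= p.sum()        # Eq. (1), stabilised
    return -np.sum(q * np.log(p + 1e-12))        # Eq. (2)

def contrastive_loss(f1, f2, same, alpha=1.0):
    d = np.linalg.norm(f1 - f2)                  # distance between samples
    return 0.5 * (same * d**2 + (1 - same) * max(alpha - d, 0.0)**2)  # Eq. (3)

u = np.array([2.0, -1.0, 0.5]); q = np.array([1.0, 0.0, 0.0])
print(softmax_cross_entropy(u, q))               # target is fixed one-hot
print(contrastive_loss(np.zeros(8), np.ones(8), same=0))  # target moves with pair
```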
Some existing methods (Babenko et al.[2014) use PCA or discriminative di . mensionality reduction to reduce the number of feature dimensions. In our experiment, we evaluate. three methods for changing the feature dimensionality. Following conventional PCA approaches, w. extracted features from a 1024-dimensional pool5 layer of GoogLeNet (Szegedy et al.]2015) Ioffe 8 Szegedy|2015) (Fig.3a) and applied PCA to reduce the dimensionality. In a contrasting approach we made use of a fully connected layer: We added a fully connected layer having the required num. ber of neurons just before the output layer (FCR 1, Fig.3b). We also investigated a third approacl. in which a fully connected layer is added followed by a dropout layer (FCR 2, Fig.3c). We intenc. to show that the features extracted from the pool5 layer of FCR 2 provide better performance thai. those from FCR 1 even though they differ only in the positions of their dropout layers.."}, {"section_index": "6", "section_name": "3.2 NORMALIZATION", "section_text": "In this study, all the features extracted from the classification networks were from the last laye. before the last output layer. The outputs were normalized by the softmax function and then. evaluated by the cross-entropy loss function in the networks. Assume that the output vector is. p = {pi|, Pi = 1}. For arbitrary positive constant a, y = {log ap} returns the same vector. p after the softmax function is applied. The features x we extract from the networks are given as. x = W-'y, where W denotes the linear projection matrix from the layer before the output layei. to the output layer. The vector y has an ambiguity in its scale, thus vector x, a linear transform of. y, also has an ambiguity in the scale; therefore x should be normalized. As Fig.[1b|clearly indi. cates, the distance between features extracted from a softmax-based classifier should be evaluatec. by cosine similarity, not by the Euclidean distance..\nSome studies used L2 normalization for deep features extracted from softmax-based classificatior networks (Taigman et al.[2014), whereas many recent studies have used the features without an normalization (Krizhevsky et al.]2012] Rippel et al.]2016 Song et al.]2016] Wei et al.]2016). Ir this study. we also planned to validate the efficiency of normalizing deep features."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we compare the deep features extracted from classification networks to those reported from state-of-the-art deep metric learning methods (Rippel et al.I 2016f Song et al.2016) in their performance on several tasks."}, {"section_index": "8", "section_name": "4.1 PROCEDURE", "section_text": "All our networks were fine-tuned from the weights that were pretrained on ImageNet (Russakovsky et al.|2015). To evaluate fine-grained classification and attribute estimation performances, we used GoogLeNet with batch normalization (Ioffe & Szegedy2015) and did not use any dimentionality reduction layers described in Section[3.1] To evaluate clustering and retrieval performances we used GoogLeNet without batch normalization (Szegedy et al.[[2015) and dimentionality reduction layers. We used the Caffe (Jia et al.2014) framework for our experiments.\nFor the evaluation of deep features in fine-grained classification tasks, we used three image datasets:. Stanford Dogs(Khosla et al 2011), Oxford 102 Flowers (Nilsback & Zisserman 2008, and Oxford-IIIT Pet (Parkhi et al 2012). For the softmax-base method we fine-tuned the classifier. 
"}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "
In this section, we compare the deep features extracted from classification networks with those reported for state-of-the-art deep metric learning methods (Rippel et al., 2016; Song et al., 2016) on several tasks.
"}, {"section_index": "8", "section_name": "4.1 PROCEDURE", "section_text": "
All our networks were fine-tuned from weights pretrained on ImageNet (Russakovsky et al., 2015). To evaluate fine-grained classification and attribute estimation performance, we used GoogLeNet with batch normalization (Ioffe & Szegedy, 2015) and did not use any of the dimensionality reduction layers described in Section 3.1. To evaluate clustering and retrieval performance, we used GoogLeNet without batch normalization (Szegedy et al., 2015) together with the dimensionality reduction layers. We used the Caffe (Jia et al., 2014) framework for our experiments.
For the evaluation of deep features on fine-grained classification tasks, we used three image datasets: Stanford Dogs (Khosla et al., 2011), Oxford 102 Flowers (Nilsback & Zisserman, 2008), and Oxford-IIIT Pet (Parkhi et al., 2012). For the softmax-based method we fine-tuned the classifier from weights that were pretrained on ImageNet. We chose the learning rate using validation data, setting it to 0.0001 for the Stanford Dogs and Oxford-IIIT Pet datasets and to 0.001 for the Oxford 102 Flowers dataset. Learning rates were not changed during training.
(a) GoogLeNet (dimensionality reduction by PCA). (b) GoogLeNet with dimensionality reduction by a fully connected layer just before the output layer (FCR 1). (c) GoogLeNet with dimensionality reduction by a fully connected layer followed by a dropout layer (FCR 2).
Figure 3: The GoogLeNet (Szegedy et al., 2015) architectures we use in this paper. We extracted the features from the red-colored layers. For (a), we applied PCA to reduce the number of feature dimensions. For (b) and (c), the dimensionality is already reduced to the required number by the fc_reduction layer.
We rescaled all the inputs up by 30% and randomly cropped 224 × 224 patches. These strategies are exactly the same as those of the previous method (Rippel et al., 2016).
We show the mean error rates for the three datasets in Table 1. All our results were evaluated using a 1-nearest-neighbor search over the 1024-dimensional vectors extracted from the pool5 layer of GoogLeNet with batch normalization (Ioffe & Szegedy, 2015). In all the experiments, the features extracted from the fine-tuned classification network show the best fine-grained classification performance. Our results with softmax-based classification are better than those reported in Rippel et al. (2016); the softmax experiments in that work were evidently not the strongest attainable.
Rippel et al. (2016) evaluated feature expressiveness using mean attribute precision and showed that the features generated by their proposed method retain intra-class diversity. In this section we investigate the intra-class diversity of softmax features. We use the ImageNet Attribute dataset (Rippel et al., 2016), which consists of the overlap between the ImageNet training set (Russakovsky et al., 2015) and the Object Attribute dataset (Russakovsky & Fei-Fei, 2010). We used only the images and their class labels during training of the softmax classifier and did not use attributes.
Table 2 shows the error rates of 90-way classification under different training methods. Our fine-tuned softmax classifier outperformed those of Rippel et al. (2016) by a considerable margin. Fig. 4 shows the mean attribute precision for the ImageNet Attribute dataset. Our fine-tuned softmax features markedly outperformed those from Rippel et al. (2016). These results implicitly indicate that the features extracted from the pool5 layer contain intra-class diversity better than those from DML networks explicitly designed to keep intra-class diversity.
Table 1: Error rates for various fine-grained image datasets.
Table 2: Classification error rates for the ImageNet Attribute dataset.
Here, we give our evaluation of clustering and retrieval scores for the state-of-the-art DML method (Song et al., 2016) and for the softmax classification networks.
We used the Caltech UCSD Birds 200-2011 (CUB) dataset (Wah et al., 2011), the Stanford Cars 196 (CAR) dataset (Krause et al., 2013), and the Stanford Online Products (OP) dataset (Song et al., 2016). For CUB and CAR, we used the first half of the dataset classes for training and the rest for testing. For OP, we used the training-testing class split provided. The dataset properties are shown in Table 3. We emphasize that the class sets used for training and testing are completely disjoint. We multiplied the learning rates of the changed layers (the output layers for all models, and the fully connected layer added for FCR 1 and FCR 2) by 10. The batch size was set to 128, and the maximum number of training iterations was set to 20,000. These training strategies are exactly the same as those used in the earlier study (Song et al., 2016).
Table 3: Properties of the datasets used in Section 4.4. Each cell shows the number of images and the number of classes (images / classes).
  Dataset   Train             Test              Total
  CUB       5,864 / 100       5,924 / 100       11,788 / 200
  CAR       8,054 / 98        8,131 / 98        16,185 / 196
  OP        59,551 / 11,318   60,502 / 11,316   120,053 / 22,634
For clustering evaluation, we applied k-means clustering 100 times and calculated the average standard F1 and NMI (Manning et al., 2008); the value of k was set to the number of classes in the test set. For retrieval evaluation, we used the Recall@K metric (Jegou et al., 2011).
Figure 4: Mean attribute precision for the ImageNet Attribute dataset.
We show the results for the CUB dataset in Fig. 5 and for the CAR dataset in Fig. 6. We note that we were able to reproduce nearly exactly the scores of lifted structured feature embedding (Song et al., 2016). However, the deep features extracted from the softmax-based classification networks outperformed lifted structured feature embedding on all the evaluation metrics.
Figure 5: F1, NMI, and Recall@K scores for the test set of the Caltech UCSD Birds 200-2011 dataset.
Figure 6: F1, NMI, and Recall@K scores for the test set of the Stanford Cars 196 dataset.
For F1 and NMI, all of the softmax models, including PCA, FCR 1, and FCR 2, show markedly better scores than does lifted structured feature embedding. It is clear that L2 normalization improves the scores of all the softmax-based models. The scores of PCA and FCR 1 drop slightly as the feature dimensionality decreases from 1024, for both the CUB dataset and the CAR dataset. On the other hand, FCR 2, which has a fully connected layer followed by a dropout layer, improves the scores in spite of the reduction in dimensionality, as shown in Fig. 6. It may be that 1024 dimensions is too large to describe these image classes. This result may imply that to obtain the best features we need to first determine the optimum dimensionality of the feature space for the dataset and then apply PCA.
For the Recall@K metric, we used 1024-dimensional features for the CUB dataset and 256-dimensional features for the CAR dataset. The softmax-based features outperformed the DML-based features. The differences between PCA, FCR 1, and FCR 2 are very minor. Regarding feature normalization, features without normalization show worse scores than do L2-normalized features.
Fig. 7 shows the standard F1, NMI, and Recall@K for the Online Products dataset. We used 1024-dimensional features for the Recall@K metric. As shown in Table 3, the OP dataset is very different from the CUB and CAR datasets in terms of the number of classes and the number of samples per class; the number of samples per class in OP is limited to 5.3 on average. In contrast to CUB and CAR, on the OP dataset the scores for softmax and for lifted structured feature embedding are nearly the same.
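For reference, the clustering and retrieval metrics reported above can be computed as in the following sketch: NMI via scikit-learn and a straightforward Recall@K on L2-normalized embeddings. The F1 clustering score is omitted for brevity, and the random embeddings are placeholders for the features under test.

```python
# Evaluation sketch for embedding quality: k-means + NMI, and Recall@K.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def recall_at_k(X, labels, k):
    # X: (n, d) L2-normalised embeddings; labels: (n,) class ids
    sim = X @ X.T
    np.fill_diagonal(sim, -np.inf)            # exclude the query itself
    knn = np.argsort(-sim, axis=1)[:, :k]     # top-k nearest neighbours
    hits = (labels[knn] == labels[:, None]).any(axis=1)
    return hits.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.integers(0, 20, size=200)             # 20 test classes -> k = 20
pred = KMeans(n_clusters=20, n_init=10).fit_predict(X)
print(normalized_mutual_info_score(y, pred), recall_at_k(X, y, k=1))
```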
From the results for these three datasets, we conjecture that the number of images contained in the dataset has a considerable effect on softmax-based classification. In other words, it is difficult for DML to make use of the rich information from a large number of samples because of the randomness described in the previous section. Hence, we changed the size of the datasets by subsampling the images of the CUB and CAR datasets for each class and ran the experiments again. We constructed seven datasets of different sizes, containing 5, 10, 20, 40, 60, 80, and 100% of the whole dataset, respectively. As shown in Fig. 8 and Fig. 9, the difference between the scores for softmax and DML is small or close to zero when the training dataset is small. The gap between softmax and DML becomes larger as the dataset size increases. It is surprising that the scores of lifted structured feature embedding on the CUB dataset did not increase even though we used more images for training (Fig. 8). It can be said that DML cannot exploit large training datasets, whereas the softmax-based classifier can obtain features of high expressiveness.
Figure 7: F1, NMI, and Recall@K scores for the test set of the Online Products dataset.
Figure 8: F1, NMI, and Recall@K scores for the test set of the Caltech UCSD Birds 200-2011 dataset under different dataset sizes. The feature dimensionality is fixed at 1024.
Figure 9: F1, NMI, and Recall@K scores for the test set of the Stanford Cars 196 dataset under different dataset sizes. The feature dimensionality is fixed at 256.
Limitations. When the number of classes is huge, it is hard to train classification networks due to GPU memory constraints. DML-based methods are suitable for such cases because they do not need an output layer whose size is proportional to the number of classes. For cross-domain tasks, such as sketches to photos (Yu et al., 2016; Sangkloy et al., 2016) or aerial views to ground views (Lin et al., 2015), DML is also effective: classification-based learning needs complicated learning strategies, as in Castrejon et al. (2016), whereas DML-based methods can learn a cross-domain representation with just a pair of networks.
Because there was no equitable comparison in previous studies, we conducted comparisons of the softmax-based classifier and DML methods using a design that lets each method objectively demonstrate its true performance. Our results show that the features extracted from softmax-based classifiers perform better than those from state-of-the-art DML methods (Rippel et al., 2016; Song et al., 2016) on fine-grained classification, clustering, and retrieval tasks, especially when the training dataset is large. The experimental results also show that softmax-based features exhibit rich intra-class diversity, even though the softmax classifier is not explicitly designed for this, unlike the previous method (Rippel et al., 2016). It is clear that softmax-based features remain strong baselines. We hope that softmax-based features will be taken into account when evaluating the performance of deep features.
Anelia Angelova and Philip M. Long. Benchmarking large-scale fine-grained categorization. In WACV, pp. 532-539, 2014.
Anelia Angelova and Shenghuo Zhu. Efficient object detection and segmentation for fine-grained recognition. In CVPR, pp. 811-818, 2013.
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Sackinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. In NIPS, pp. 737-744, 1994.
Lluis Castrejon, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Learning aligned cross-modal representations from weakly aligned data. In CVPR, pp. 2940-2949, 2016.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, pp. 539-546, 2005.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, pp. 647-655, 2014.
E. Gavves, B. Fernando, C. G. M. Snoek, A. W. M. Smeulders, and T. Tuytelaars. Fine-grained categorization by alignments. In ICCV, pp. 1713-1720, 2013.
Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, pp. 1735-1742, 2006.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015.
Herve Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. TPAMI, 33(1):117-128, 2011.
Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, volume 86, pp. 2278-2324, 1998.
Yangyan Li, Hao Su, Charles Ruizhongtai Qi, Noa Fish, Daniel Cohen-Or, and Leonidas J. Guibas. Joint embeddings of shapes and images via CNN image purification. ACM TOG, 34(6):234:1-234:12, 2015.
Tsung-Yi Lin, Yin Cui, Serge Belongie, and James Hays. Learning deep representations for ground-to-aerial geolocalization. In CVPR, pp. 5007-5015, 2015.
Naila Murray and Florent Perronnin. Generalized max pooling. In CVPR, pp. 2473-2480, 2014.
Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In CVPR Workshops, pp. 512-519, 2014.
Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, pp. 815-823, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, pp. 4004-4012, 2016.
Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. DeepFace: Closing the gap to human-level performance in face verification. In CVPR, pp. 1701-1708, 2014.
Lingyu Wei, Qixing Huang, Duygu Ceylan, Etienne Vouga, and Hao Li. Dense human body correspondences using convolutional networks. In CVPR, pp. 1544-1553, 2016.
Saining Xie, Tianbao Yang, Xiaoyu Wang, and Yuanqing Lin. Hyper-class augmented and regularized deep learning for fine-grained image classification. In CVPR, pp. 2645-2654, 2015.
Qian Yu, Feng Liu, Yi-Zhe Song, Tao Xiang, Timothy M.
Hospedales, and Chen-Change Loy. Sketch me that shoe. In CVPR, pp. 799-807, 2016.
M. E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, pp. 722-729, 2008.
O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. V. Jawahar. Cats and dogs. In CVPR, pp. 3498-3505, 2012.
Qi Qian, Rong Jin, Shenghuo Zhu, and Yuanqing Lin. Fine-grained visual categorization via multi-stage metric learning. In CVPR, pp. 3716-3724, 2015.
Oren Rippel, Manohar Paluri, Piotr Dollar, and Lubomir Bourdev. Metric learning with adaptive density discrimination. In ICLR, 2016.
Olga Russakovsky and Li Fei-Fei. Attribute learning in large-scale datasets. In ECCV, International Workshop on Parts and Attributes, 2010.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3):211-252, 2015.
Patsorn Sangkloy, Nathan Burnell, Cusuh Ham, and James Hays. The sketchy database: Learning to retrieve badly drawn bunnies. ACM TOG, 35(4):119:1-119:12, 2016."}]
SJRpRfKxx
[{"section_index": "0", "section_name": "RECURRENT MIXTURE DENSITY NETWORK FOR SPATIOTEMPORAL VISUAL ATTENTION", "section_text": "Loris Bazzani\nIn many computer vision tasks, the relevant information to solve the problem ai. hand is mixed with irrelevant, distracting information. This has motivated re. searchers to design attentional models that can dynamically focus on parts of im ages or videos that are salient, e.g., by down-weighting irrelevant pixels. In this. work, we propose a spatiotemporal attentional model that learns where to look in a. video directly from human fixation data. We model visual attention with a mixture of Gaussians at each frame. This distribution is used to express the probability of. saliency for each pixel. Time consistency in videos is modeled hierarchically by. 1) deep 3D convolutional features to represent spatial and short-term time rela-. tions at clip level and 2) a long short-term memory network on top that aggregates. the clip-level representation of sequential clips and therefore expands the tempora.. domain from few frames to seconds. The parameters of the proposed model are. optimized via maximum likelihood estimation using human fixations as training. data, without knowledge of the action in each video. Our experiments on Hol-. lywood2 show state-of-the-art performance on saliency prediction for video. We. also show that our attentional model trained on Hollywood2 generalizes well to. UCF101 and it can be leveraged to improve action classification accuracy on both. datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Attentional modeling and saliency prediction in images has been an active research topic in compute vision over the last decade. Interest in attentional models is primarily motivated by their ability t eliminate or down-weight pixels that are not important for the task at hand, as for example showr in prior work using visual attention for image recognition and caption generation (Sermanet et al. 2014] Xu et al.2015] Mnih et al.2014). Integrating visual attention in an image analysis model car potentially lead to improved overall accuracy, as the system can focus on the most salient regions ir the photo without being disturbed by irrelevant information.\nRecently, we have witnessed a shift of trend from image saliency prediction (Borji & Itti][2013) to the modeling of saliency in videos (Rudoy et al.] 2013). Since human fixation patterns are strongly. correlated over time (Coull! 2004), it appears critical to model the relations between saliency maps of consecutive frames. In this scenario, attention can be defined as a spatiotemporal volume, where each saliency map (one for each frame) depends on the frames at the previous times. The saliency map can be interpreted as a probability distribution over pixels and the actual fixation patterns can. be generated by sampling from the the map.\nHugo Larochelle\nDepartement d'informatique. Universite de Sherbrooke\nhugo.larochelle@usherbrooke.ca"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Going from images to videos is not straightforward, since videos bring up many challenges. Firs of all, videos have an additional dimension (time), compared to images. This causes a dramati growth in the number of pixels to be processed and poses a significantly higher computational cos for analysis. 
At the same time, there are strong redundancies present in such data, which implies that visual attention may be particularly beneficial for the video setting. For example, typically the objects or people in a video do not change significantly in appearance over time. Yet, for analysis tasks such as action recognition (Wang & Schmid, 2013) or video description (Yao et al., 2015), it is imperative to properly model the dynamical properties of these objects in the video. This suggests that, in order to identify spatiotemporal volumes that are salient for video analysis, an attentional model must take into account high-level image semantics as well as the history of past fixations.

In order to cope with these challenges, we propose an efficient spatiotemporal attentional model (see Fig. 1) that leverages deep 3D convolutional features (Tran et al., 2015) as a semantic, spatiotemporal representation of short clips in the video. This clip-level representation is then aggregated by a Long Short-Term Memory (LSTM) network (Hochreiter & Schmidhuber, 1997), which expands the temporal range of analysis from few frames to seconds. The LSTM model connects into a Mixture Density Network (MDN) (Bishop, 1994) that at each frame outputs the parameters of a Gaussian mixture model expressing the saliency map. We refer to this model as Recurrent Mixture Density Network (RMDN). RMDN is trained via maximum likelihood estimation using human fixations as training data, without knowledge of the actions in the videos.

The potential applications of automatic saliency map prediction from videos are many. They include attention-based video compression (Gitman et al., 2014), visual attention for robots (Yu et al., 2010), crowd analysis for video surveillance (Jiang et al., 2014), salient object detection (Li & Yu, 2015; Karthikeyan et al., 2015) and activity recognition (Vig et al., 2012; Sapienza et al., 2014). In this work we focus on a study of how visual attention may improve action recognition by leveraging the saliency map generated by RMDN for video classification. The idea is akin to soft attention and consists in re-weighting the pixel values of the input video by the estimated saliency map. Despite its simplicity, we show that the combination of features extracted from this modified version of the video and those computed from the original input leads to a significant improvement in action recognition, compared to a model that does not use attention.

The primary contribution of this work is a spatiotemporal saliency estimation network optimized to reproduce human fixations. The proposed approach offers several advantages: 1) the model can be trained without having to engineer spatiotemporal features; 2) RMDN is directly trained on examples of human fixations and thus learns to mimic human visual attention; 3) prediction of the saliency map is very fast (it takes 0.08s per 16-frame clip on a GPU); 4) the method outperforms the state of the art (Mathe & Sminchisescu, 2015) in saliency accuracy; 5) our predicted saliency maps lead to improvements in action classification accuracy."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Broadly speaking, the literature on attentional models can be split into two categories: task-agnostic approaches which model the bottom-up, free-viewing properties of attention, and task-specific methods which model its top-down, task-driven properties.
Researchers have devoted many years to create datasets, collecting human fixations and proposing solutions for biologically-plausible saliency estimators, built using low-level cues such as edge detectors and color filters (see, e.g., Borji et al. (2013); Judd et al. (2009); Harel et al. (2006) for recent examples). We refer to Borji & Itti (2013) and Bruce et al. (2016) for an interesting analysis and comparison of existing methods. Most of the techniques in the literature are focused on extracting features in a bottom-up and/or top-down manner and use them to estimate the saliency map. In this context, motion features are introduced when extending saliency methods from images to videos (Guo et al., 2008; Zhao et al., 2015; Zhai & Shah, 2006). However, there is no explicit modeling of the temporal dimension that can capture long-term relations. In fact, motion features (e.g., optical flow) describe short-term associations at the temporal scale of only a few consecutive frames.

Prior approaches can also be categorized into soft-attentional versus hard-attentional models. Soft-attentional models use the predicted saliency maps to down-weight pixels that are not relevant or salient, e.g., Song et al. (2016). Specifically, deep networks have been used in this context to assign a weight to each pixel in order to extract \"glimpses\" from images (Xu et al., 2015; Gregor et al., 2015) or videos (Yao et al., 2015) in the form of weighted pixel averages. One strength of such approaches is that they can backpropagate through the attentional component and tune it in the context of its use in a deep network. Other work has been geared towards learning hard-attentional models, which explicitly ignore and discard parts of the input (Larochelle & Hinton, 2010; Bazzani et al., 2011; Denil et al., 2012; Mnih et al., 2014; Xu et al., 2015; Sermanet et al., 2014; Ba et al., 2015; Yoo et al., 2015; Zheng et al., 2015), thus providing significant computational savings. Unfortunately, such models are often hard to train because they rely on reinforcement learning techniques to generate the image/video locations during training.

All of the aforementioned prior work attempts to learn attentional models indirectly rather than from explicit information about where humans look. Recent work (Mauthner et al., 2015; Hossein Khatoonabadi et al., 2015; Mathe & Sminchisescu, 2015; Stefan Mathe, 2013; Kümmerer et al., 2015; Rudoy et al., 2013) has shown that it may be possible to accurately reproduce gazing patterns of human subjects attending to images and videos. However, these prior approaches rely on hand-crafted features to estimate the saliency maps.
Attempts at removing hand-engineering of features are represented by Jetley et al. (2016); Huang et al. (2015); Kümmerer et al. (2015), where networks pre-trained for object recognition were subsequently finetuned using saliency-based loss functions for images. Pan & Giro i Nieto (2015) followed the same principle but without using any pre-trained network for initialization. Liu et al. (2015) proposed a multi-scale architecture for saliency prediction, and Li & Yu (2015) added a refinement step in order to enforce spatial coherence of the output. Simonyan et al. (2014) and Mahendran & Vedaldi (2016) proposed to reverse deep networks using deconvolutions for visualization and to estimate image saliency. However, these methods estimate saliency from still images and do not consider the temporal aspect of video. Chaabouni et al. (2016a;b) trained a ConvNet for saliency prediction on optical flow features and individual frames. However, the model uses only the very short-term temporal relations of two consecutive frames.

In this paper, we explore the following question: can deep networks be trained to reliably predict spatiotemporal attentional patterns, specifically in such a way that these predictions can be leveraged successfully by a recognition system? To our knowledge, our work distinguishes itself from the aforementioned literature by being the first application of deep networks to the prediction of spatiotemporal human saliency in videos."}, {"section_index": "4", "section_name": "3 PROPOSED MODEL", "section_text": "We start with a high-level description of our attentional model. We then formalize it in Sec. 3.1 and describe its training in Sec. 3.2. Sec. 3.3 reports how prediction is efficiently carried out at test time. Sec. 3.4 describes how to leverage the predicted saliency map to improve action recognition.

The proposed RMDN model for saliency estimation is depicted in Fig. 1. At time t, the input of the model is a sequence of the last K = 16 frames, i.e., from time t - K + 1 to current time t. We refer to this sequence as the input \"clip.\" The first part of the model (Fig. 1, blue layers above the input clip) consists of a 3D convolutional network that provides a feature representation of the clip. Our choice of a clip-based representation rather than a single-frame descriptor is motivated by the fact that these features allow us to explicitly capture short-term information that is then aggregated for long-term spatiotemporal visual attention by RMDN. Furthermore, there is recently growing evidence (Tran et al., 2015; Srivastava et al., 2015; Yue-Hei Ng et al., 2015) that by modeling the temporal information it is possible to obtain improved performance in high-level video analysis tasks, such as action recognition. For the computation of spatiotemporal features from the input clip, we use the \"C3D\" architecture proposed by Tran et al. (2015), which has been shown to provide competitive results on video recognition tasks across different datasets. The C3D architecture is defined as: C64-P-C128-P-C256-C256-P-C512-C512-P-C512-C512-P-FC4096-FC4096-softmax, where C is a 3D convolutional layer, P is the pooling layer, FC is a fully-connected layer, and the number specifies the number of kernels of the layer (e.g., C64 has 64 kernels). For the details about the size and stride of the convolutional and pooling kernels, we refer to Tran et al. (2015).

The convolutional network has access to a limited window of the video since it uses a fixed-size clip of 16 frames as input. In order to empower the visual attention model with the ability to take into account longer temporal extents, we need a mechanism that performs temporal aggregation of past clip-level signals. To this end, we propose to connect the internal representation of the C3D model to a recurrent neural network, as shown in Fig. 1 (green module). The aim of the temporal connections of the recurrent neural network is to propagate the clip-level features through time via memory units that can capture long-term dependencies. Our model uses LSTMs (Hochreiter & Schmidhuber, 1997) as memory blocks.

Figure 1: Proposed recurrent mixture density network for saliency prediction. The input clip of K frames is fed into a 3D convolutional network (in blue), whose output becomes the input of a long short-term memory (LSTM) network (in green). Finally, a linear layer projects the LSTM representation to the parameters of a Gaussian mixture model, which describes the saliency map.
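To make the layer specification above concrete, the following PyTorch sketch builds the convolutional trunk of C3D up to the last convolutional layer (the representation our model consumes). It is illustrative only: the 3x3x3 kernels and 2x2x2 pools (with a 1x2x2 first pool that does not downsample time) follow the common description of Tran et al. (2015), the class name C3DTrunk is ours, and the pretrained weights of the original network are assumed to be loaded separately.

    import torch
    import torch.nn as nn

    class C3DTrunk(nn.Module):
        """Convolutional part of C3D: C64-P-C128-P-C256x2-P-C512x2-P-C512x2-P."""
        def __init__(self):
            super().__init__()
            def block(c_in, c_out, n_convs, pool_kernel):
                layers = []
                for i in range(n_convs):
                    layers.append(nn.Conv3d(c_in if i == 0 else c_out, c_out,
                                            kernel_size=3, padding=1))
                    layers.append(nn.ReLU(inplace=True))
                layers.append(nn.MaxPool3d(kernel_size=pool_kernel))
                return layers
            self.features = nn.Sequential(
                *block(3, 64, 1, (1, 2, 2)),     # C64-P (no temporal pooling)
                *block(64, 128, 1, (2, 2, 2)),   # C128-P
                *block(128, 256, 2, (2, 2, 2)),  # C256-C256-P
                *block(256, 512, 2, (2, 2, 2)),  # C512-C512-P
                *block(512, 512, 2, (2, 2, 2)),  # C512-C512-P
            )

        def forward(self, clip):                 # clip: (batch, 3, K=16, H, W)
            return self.features(clip)           # conv5-level feature map x_t

    x_t = C3DTrunk()(torch.randn(1, 3, 16, 112, 112))
    print(x_t.shape)  # torch.Size([1, 512, 1, 3, 3]) for 112x112 inputs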
The saliency map at each time t is expressed in terms of a Gaussian Mixture Model (GMM) with C components. We denote its parameters with {(\mu_c, \pi_c, \sigma_c)}_{c=1}^{C}, where \mu_c, \pi_c and \sigma_c are the mean, the mixture coefficient and the covariance of the c-th Gaussian component, respectively. The LSTM directly outputs these parameters (see details below). The resulting network is known as a Mixture Density Network (MDN) (Bishop, 1994; Graves, 2013).

Since the model is recurrent, there is a direct connection between the inner representation of the LSTM at time t and the one at time t + 1. This favors temporal consistency between the saliency maps at adjacent times.

Each training video v^i is represented as a sequence consisting of T_i temporally overlapping clips c_t^i (i.e., sampled with stride 1), and a^i = (a_t^i)_{t=0}^{T_i-1} is the sequence of ground-truth fixations for the i-th video, aligned with the clips. Since we use C3D to represent each clip, c_t^i has a fixed length of K = 16 frames, and t = 0 means that the first K frames are used to build the 0-th clip. The fixations a_t^i = {a_{t,j}^i}_{j=1}^{A} are a set of (x, y) image positions that are normalized to [0, 1] in order to deal with videos of different resolutions. The number of fixations varies from frame to frame in general, but in our experiments we control it via subsampling in order to obtain for each frame a set of fixed size A.

Let x_t = C3D(c_t) be the internal representation of C3D for an input clip c_t. In our model we use the last convolutional layer, before the fully-connected layers. We choose a convolutional layer instead of a fully-connected layer because the latter discards spatial information, which is crucial to estimate a spatially-variant saliency map over the image. The LSTM is defined by the following equations:

f_t = \sigma(W_f [h_{t-1}, x_t] + b_f), \quad i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)   (1)

o_t = \sigma(W_o [h_{t-1}, x_t] + b_o), \quad \tilde{C}_t = \tanh(W_C [h_{t-1}, x_t] + b_C)   (2)

C_t = f_t * C_{t-1} + i_t * \tilde{C}_t, \quad h_t = o_t * \tanh(C_t)   (3)

where f_t, i_t, o_t, C_t and h_t are the forget gate, the input gate, the output gate, the memory cell, and the hidden representation, respectively. The learning parameters that need to be estimated during the training phase are W_z and b_z, where z \in {f, i, o, C}.

The MDN (Graves, 2013; Bishop, 1994) takes its inputs from the hidden representation of the LSTM network. Since the output space is 2D (the space of image locations), we can reparametrize the model as {(\mu_c^t, \pi_c^t, \sigma_c^t, \rho_c^t)}_{c=1}^{C}, where \mu_c^t, \pi_c^t, \sigma_c^t and \rho_c^t are the 2D mean position, the weight, the 2D variance and the correlation of the c-th Gaussian component, respectively. The MDN is therefore defined as follows:

\hat{y}_t = {(\hat{\mu}_c^t, \hat{\pi}_c^t, \hat{\sigma}_c^t, \hat{\rho}_c^t)}_{c=1}^{C} = W_y h_t + b_y   (4)

where W_y and b_y are the parameters of the linear layer and h_t is the hidden representation of the LSTM network. The parameters of the GMM in Eq. 4 are normalized as follows in order to obtain a valid probability distribution:

\pi_c^t = \frac{\exp(\hat{\pi}_c^t)}{\sum_{i=1}^{C} \exp(\hat{\pi}_i^t)}, \quad \mu_c^t = \hat{\mu}_c^t, \quad \sigma_c^t = \exp(\hat{\sigma}_c^t), \quad \rho_c^t = \tanh(\hat{\rho}_c^t)   (5)

The composition of the LSTM and the MDN results in the RMDN."}, {"section_index": "5", "section_name": "3.2 TRAINING", "section_text": "The proposed model can be trained by optimizing the log-likelihood of the training ground-truth fixations a^i under the GMM. The loss function for the i-th video v^i is defined as the negative log-likelihood of the fixations under the GMM, as follows:

\mathcal{L}(v^i, a^i) = -\sum_{t=0}^{T_i-1} \sum_{j=1}^{A} \log \sum_{c=1}^{C} \pi_c^t \, \mathcal{N}(a_{t,j}^i \mid \mu_c^t, \sigma_c^t, \rho_c^t)   (6)

where \mathcal{N} is the Gaussian distribution. Note that the parameters of the Gaussian components depend on the input video v^i, but we do not make this explicit in the equation in order to keep notation simple.

The log-likelihood of the RMDN is optimized using backpropagation through time, since it is a composition of continuous functions (e.g., linear transformations and element-wise non-linearities) and the LSTM, for which we can compute the gradients. In particular, we refer to the paper of Graves (2013) for the derivation of the gradients for the MDN using the loss function of Eq. 6.
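As a concrete illustration of Eqs. 4-6, the following NumPy sketch maps an LSTM hidden state to normalized GMM parameters and evaluates the fixation negative log-likelihood. It is a minimal sketch under our own naming conventions: mdn_params, gmm_log_density and fixation_nll are illustrative names, and the packing of the raw MDN outputs into per-component slots is one possible layout.

    import numpy as np

    def mdn_params(h, W_y, b_y, C):
        """Eq. 4-5: map hidden state h to normalized GMM parameters."""
        raw = (W_y @ h + b_y).reshape(C, 6)   # per component: mu(2), pi, sigma(2), rho
        mu    = raw[:, 0:2]                   # means are used as-is
        pi    = np.exp(raw[:, 2] - raw[:, 2].max())
        pi   /= pi.sum()                      # softmax -> mixture weights
        sigma = np.exp(raw[:, 3:5])           # positive standard deviations
        rho   = np.tanh(raw[:, 5])            # correlation in (-1, 1)
        return mu, pi, sigma, rho

    def gmm_log_density(a, mu, pi, sigma, rho):
        """log sum_c pi_c N(a | mu_c, sigma_c, rho_c) for a 2-D point a."""
        dx = (a[0] - mu[:, 0]) / sigma[:, 0]
        dy = (a[1] - mu[:, 1]) / sigma[:, 1]
        z = dx**2 + dy**2 - 2.0 * rho * dx * dy
        log_norm = -np.log(2 * np.pi * sigma[:, 0] * sigma[:, 1]
                           * np.sqrt(1 - rho**2))
        log_comp = log_norm - z / (2.0 * (1 - rho**2))
        return np.log(np.sum(pi * np.exp(log_comp)) + 1e-12)

    def fixation_nll(params_per_t, fixations_per_t):
        """Eq. 6: negative log-likelihood of all fixations in one video."""
        return -sum(gmm_log_density(a, *params)
                    for params, fixs in zip(params_per_t, fixations_per_t)
                    for a in fixs)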
In practice, due to the limited training data, we freeze the layers of the C3D network to the values pretrained by Tran et al. (2015) for action recognition. This implies that the low-level representation x_t is fixed. We jointly train the LSTM and MDN from randomly initialized parameters.

The inference stage is straightforward by following the equations of Sec. 3.1. At a given time t, the clip from time t - K + 1 to t is fed into the C3D network to produce the representation x_t. This vector is passed to the LSTM (Eqs. 1, 2 and 3), whose hidden representation is passed to the MDN, which outputs the GMM parameters (Eqs. 4 and 5). In order to generate the final saliency map, we compute the probability of each pixel position under the GMM model. We normalize the probability map to sum to 1 over the image pixels in order to produce a normalized saliency map.
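The pixel-wise evaluation and normalization just described can be sketched as follows in NumPy; the routine assumes GMM parameters in the normalized [0, 1]^2 coordinate system used for the fixations, and the function name is illustrative.

    import numpy as np

    def render_saliency(mu, pi, sigma, rho, height, width):
        """Evaluate the GMM on the pixel grid and normalize to sum to 1."""
        ys, xs = np.mgrid[0:height, 0:width]
        gx = (xs + 0.5) / width               # pixel centers in [0, 1]
        gy = (ys + 0.5) / height
        sal = np.zeros((height, width))
        for c in range(len(pi)):
            dx = (gx - mu[c, 0]) / sigma[c, 0]
            dy = (gy - mu[c, 1]) / sigma[c, 1]
            z = dx**2 + dy**2 - 2.0 * rho[c] * dx * dy
            norm = 2 * np.pi * sigma[c, 0] * sigma[c, 1] * np.sqrt(1 - rho[c]**2)
            sal += pi[c] * np.exp(-z / (2.0 * (1 - rho[c]**2))) / norm
        return sal / sal.sum()                # normalized saliency map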
"}, {"section_index": "6", "section_name": "3.4 SALIENCY FOR CLASSIFICATION", "section_text": "For the task of video classification, we generate a modified version of the video by using a soft attentional mechanism: the idea is to weight each pixel value by the estimated saliency at that position. This operation effectively down-weights regions that are deemed not salient. The intuition is that the classifier will then be able to focus on the parts of the frame which are most relevant, without being distracted by the non-salient regions (see Fig. 2 in Appendix A).

At each time t, we extract two representations: the \"context\" branch is given by the C3D representation of the original clip, while the \"soft attentional\" branch is given by the C3D representation of the input clip weighted by the saliency map. The rationale is that the context branch considers the global evolution of the activity in the video while the soft attentional branch is focused on the most-salient local evolution of the activity. The two representations are then concatenated at the clip level and max-pooled over the video to obtain the final video-level descriptor. This video-level representation is then used as input to train the video classifier, which is a linear SVM in all our experiments."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we evaluate the proposed method for both saliency prediction and action recognition on two challenging datasets: Hollywood2 (Marszalek et al., 2009) and UCF101 (Soomro et al., 2012). Section 4.1 reports a quantitative analysis for the task of saliency prediction. Section 4.2 shows the results for the action recognition task in two scenarios: 1) using the same dataset that was used to train the saliency predictor and 2) applying the pretrained attentional model to a never-seen dataset and a different set of actions. We report the implementation details in Appendix B. We invite the reader to watch the qualitative results of the proposed method in the form of a video."}, {"section_index": "8", "section_name": "4.1 SALIENCY PREDICTION", "section_text": "The proposed model is trained using human fixation data. Few datasets provide both human fixations and class labels, which we need for the action recognition experiment discussed in the next section. Therefore, we used the Hollywood2 dataset, which was augmented with eye tracking data by Mathe & Sminchisescu (2015). We follow the same evaluation protocol (i.e., same training/test splits) as Mathe & Sminchisescu (2015) and their validation procedure to compute the final results in order to compare with their work. Mathe & Sminchisescu (2015) generate the ground truth saliency from a binary fixation map where the only non-zero values are at fixation points. The final saliency map is produced by convolving the binary map with an isotropic Gaussian filter with standard deviation \sigma and then adding to it a uniform distribution with probability parameter p. As in Mathe & Sminchisescu (2015), the values of these two parameters are chosen from \sigma \in {1.5, 3.0} and p \in {0.25, 0.5} via hold-out validation. We use a validation set consisting of 20% of the training set. We use the remaining 80% of the training data to learn our models, and use the hold-out validation set to choose the hyperparameters of our model.

We evaluate all models on the test set, using popular metrics proposed in the literature of saliency map prediction for still images (Judd et al., 2012; Borji et al., 2013), such as Area Under the ROC Curve (AUC), Normalized Scanpath Saliency (NSS), linear Correlation Coefficient (CC) and the Similarity score (Sim). We refer to the papers of Judd et al. (2012) and Borji et al. (2013) for their detailed description.
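For reference, the following NumPy sketch implements two of these metrics, NSS and CC, following their standard definitions in the saliency literature; it is not the evaluation code used for the numbers below.

    import numpy as np

    def nss(saliency, fixations):
        """Normalized Scanpath Saliency: mean standardized value at fixations."""
        s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
        rows, cols = zip(*fixations)          # fixations as (row, col) pixels
        return float(np.mean(s[list(rows), list(cols)]))

    def cc(saliency, gt_map):
        """Linear Correlation Coefficient between predicted and GT maps."""
        a = saliency.ravel() - saliency.mean()
        b = gt_map.ravel() - gt_map.mean()
        return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))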
Table 1 shows results achieved with different variants of our model and a simple baseline method, which we refer to as Trained Central Bias (TCB). The TCB model is a single GMM trained using the fixations of all the videos in the training set. TCB predicts the same saliency map for each testing frame; thus it completely discards the temporal information and the image input. This experiment shows that all versions of our RMDN consistently outperform TCB under all metrics, even when using a smaller number of fixations per frame during training.

The different variants of RMDN in Table 1 explore the following design choices in our model: 1) the impact of using LSTM hidden units as opposed to regular RNN units (second and third row) and 2) the number of fixations per frame used for training (third and fourth row). These experiments show that LSTM (third row) is better than an RNN (second row) in terms of AUC and NSS, but in order to have better CC and Sim we need to use more fixations per frame (fourth row). This is intuitive: since the LSTM has many more parameters than the RNN, it needs more training data to be properly optimized.

All of the experiments reported in Table 1 were obtained using C = 20 components in the GMM. We have also studied how the accuracy varies by reducing the value of C. For example, using the RMDN variant of row 5 in Table 1 but with C = 10 components (instead of 20), the performance does not change dramatically, yielding AUC = 0.8966 and NSS = 2.4392. On the other hand, the AUC and the NSS decrease considerably, by 1.3% and 0.3 points respectively, when using only C = 2 components (AUC = 0.8836 and NSS = 2.1385). Based on this analysis, in all our subsequent experiments we used C = 20, as we noticed that our approach automatically implements a sort of Occam's razor, setting the weights \pi_c of many components close to zero when necessary.

Table 1: Accuracy of saliency prediction for the Trained Central Bias baseline and different variants of our RMDN model in terms of AUC, NSS, CC and Similarity. Training and testing are performed on disjoint splits of the Hollywood2 dataset.

Model | Net (#units) | Fix. per frame | AUC | NSS | CC | Sim
Trained Central Bias | - | 150 | 0.8725 | 1.7646 | 0.5297 | 0.4812
RMDN | RNN(128) | 80 | 0.8745 | 1.9505 | 0.5495 | 0.4962
RMDN | LSTM(128) | 80 | 0.8866 | 2.0155 | 0.4606 | 0.4219
RMDN | LSTM(256) | 150 | 0.8986 | 2.5169 | 0.6007 | 0.5278
RMDN full | LSTM(256) | 150 | 0.9037 | 2.6455 | 0.6129 | 0.5349

The last row in Table 1 shows the results obtained by retraining our model using the full training set of Mathe & Sminchisescu (2015) instead of just the 80% subset. For this case (RMDN full) we used the hyper-parameter values selected via hold-out validation for the experiment in the fourth row. This gives the best result for saliency prediction reported in our work, and it is the model we used for all the subsequent experiments described below.

We have also carried out a few side experiments and discovered that using the fully-connected features of C3D instead of the convolutional representation gives results that are at least 1.5% lower in terms of AUC. Moreover, we tried to finetune the C3D network for action categorization on Hollywood2. However, we did not obtain any significant improvement, confirming the findings of Tran et al. (2015): the C3D representation is already general enough to perform effectively on different action recognition tasks, and fine-tuning the model on smaller-scale datasets (such as Hollywood2) does not seem beneficial. We also experimented with deep LSTMs, but we obtained an insignificant improvement in performance. For this reason, and also because deep LSTMs have more parameters and are more computationally expensive to train, we chose to use a shallow one-layer LSTM. Finally, we ran an ablation study where the recurrent link between time t - 1 and t of the RMDN is removed: the results in terms of AUC are 1.2% and 2.4% lower with respect to the RMDN which uses RNN (second row of Table 1) and LSTM (third row of Table 1), respectively.

We also compared our approach to the state-of-the-art in saliency prediction from video. Table 2 includes the results of the best methods taken from the extensive analysis done in Mathe & Sminchisescu (2015). The table also reports some useful baselines, such as the central bias (CB) and the human accuracy for the task. Note that: 1) CB differs from TCB, since it does not use any training fixations; and 2) the human accuracy is computed in (Mathe & Sminchisescu, 2015) by deriving a saliency map from half of the human subjects and evaluating it with respect to fixations of the remaining ones. Furthermore, Table 2 contrasts the use of static features, motion features and their combination. The last row reports the results obtained with our RMDN model. It is interesting to see that the results obtained with a single type of features (static or motion) have an AUC lower than 0.75, which is even lower than the one obtained by the central bias (0.84). Moreover, the combination reaches the best results when the central bias is combined with engineered features (SF+MF+CB).
On the other hand, our method outperforms all the methods evaluated in Mathe & Sminchisescu (2015) by a large margin, and our results are very close to human performance (the difference is only 3.2%). In addition to being the best method in Table 2, our method has several advantages: 1) it does not require any hand-engineering of spatiotemporal features, 2) it performs joint training of the LSTM and the saliency predictor, 3) it is very efficient. Specifically, although we cannot estimate the runtime for prior approaches, we believe that our method is much faster than most of the methods reported in Table 2, as these depend on features that are computationally expensive to extract. Our proposed method takes only 0.08s per clip for inference on GPU: 0.07s to compute C3D features and 0.01s to evaluate the RMDN.

Table 2: Saliency prediction comparison against the state-of-the-art on the Hollywood2 dataset. The top-3 best results for each set are taken from (Mathe & Sminchisescu, 2015).

Set | Model | AUC
Baselines | Uniform | 0.500
Baselines | Central Bias (CB) | 0.840
Baselines | Trained Central Bias (TCB) | 0.872
Baselines | Human | 0.936
SF = Static Features | Color features (Judd et al., 2009) | 0.644
SF = Static Features | Saliency map (Oliva & Torralba, 2001) | 0.702
SF = Static Features | Horizon det. (Oliva & Torralba, 2001) | 0.741
MF = Motion Features (Mathe & Sminchisescu, 2015) | Flow magnitude | 0.626
MF = Motion Features (Mathe & Sminchisescu, 2015) | Flow bimodality | 0.637
MF = Motion Features (Mathe & Sminchisescu, 2015) | HOG-MBH det. | 0.743
Combo (Mathe & Sminchisescu, 2015) | SF (Judd et al., 2009) | 0.789
Combo (Mathe & Sminchisescu, 2015) | SF+MF | 0.812
Combo (Mathe & Sminchisescu, 2015) | SF+MF+CB | 0.871
Our Method | RMDN | 0.904"}, {"section_index": "9", "section_name": "4.2 ACTION RECOGNITION", "section_text": "In order to show how saliency can be used for action recognition, we carried out a set of experiments covering two scenarios: 1) using the same dataset where the saliency predictor was trained (Hollywood2) and 2) using a never-seen dataset with a different set of actions (UCF101).

The results on Hollywood2 are reported in terms of mean Average Precision (mAP), as done by Mathe & Sminchisescu (2015). Table 3 shows an analysis of 1) the impact of using different feature representations as well as 2) the effect of the saliency map. As in Tran et al. (2015), we experimented with different features, namely CONV5 and FC6, which correspond to the fifth convolutional layer and the first fully-connected representation of C3D, respectively. We also tested two ways to use the saliency maps, called in the second column \"feature\" and \"clip\". In the feature mode (first row, experiments (2-5)), the convolutional representation is multiplied by the saliency map, after resizing it accordingly. In other words, the saliency directly weights the feature representation, similarly to the work of Sharma et al. (2016). In the clip mode (second and third row, experiments (2-5)), we adopted the model presented in Sec. 3.4, where the saliency maps are used to weight the input video pixels.

The third column of Table 3 (experiment (1)) reports the results using only the original video as input to C3D (referred to as context in Fig. 2). Experiment (2) uses the ground truth saliency maps as soft attention to weight the input of C3D, while in experiment (3) this vector is concatenated with the context features. The last two columns (experiments (4) and (5)) represent the same setup, but in this case we use the saliency maps predicted by our model instead of the ground truth.

Table 3 shows that the results of CONV5 and FC6 are very close when considering the original video (experiment (1)). The table also shows that the feature mode has lower performance compared to the clip mode (experiment (2)). Moreover, the concatenation (experiment (3)) is effective only when visual attention is used to weight pixels rather than features. Based on the poor performance of the feature mode, we decided to experiment only with the clip mode in our study with predicted saliency (experiments (4) and (5)).
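The difference between the two modes can be sketched as follows (NumPy sketch; c3d stands in for the frozen feature extractor and is assumed to be supplied by the caller rather than specified here):

    import numpy as np

    def clip_mode(clip, saliency, c3d):
        """Weight input pixels by per-frame saliency before feature extraction."""
        # clip: (K, H, W, 3) float array, saliency: (K, H, W) normalized maps
        return c3d(clip * saliency[..., None])

    def feature_mode(clip, saliency_small, c3d):
        """Weight the conv feature map by an already-resized saliency map."""
        feats = c3d(clip)                     # e.g. (512, h, w) conv5 feature map
        return feats * saliency_small[None, :, :]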
Also, we decided to use FC6 features for the rest of the paper because the representation is more compact and therefore allows us to train the classifiers more quickly. We can notice that, even in the case of predicted saliency, the concatenation of FC6 context features and those obtained by weighting the input video with soft attention (experiment (5)) produces a significant improvement over the original CONV5/FC6 features without attention. Furthermore, a pleasant surprise is represented by the small difference in results between using the predicted saliency (experiment (5)) versus the ground truth maps (experiment (3)): only 0.27% for FC6 (last row).

The SVM model complexity for experiment (3) in Table 3 is twice as large as the complexity for (1) and (2), since the feature dimensionality is doubled by construction. The same applies when comparing (5) against (1) and (4). In order to have a fairer comparison, we added PCA dimensionality reduction to experiment (5) in order to match the same feature dimensionality as (1) (and (4)). Although the validation accuracies are very similar, the testing mAP drops from 54.85% for experiment (5) to 51.82% for the PCA experiment. This is not surprising, since the extra dimensions provided by the use of the saliency map are not redundant with respect to the context representation (1). Therefore, concatenation seems to be an effective way to make use of the saliency map.

Table 3: Action categorization results in terms of mAP on the Hollywood2 dataset. Analysis of different ways to use the saliency map and comparison between using the ground truth saliency maps versus those predicted by our model.

Input | Saliency Use | (1) Original | Ground Truth: (2) Weighted | Ground Truth: (3) Concat. (1, 2) | Predicted: (4) Weighted | Predicted: (5) Concat. (1, 4)
CONV5 | Feature | 46.08% | 40.76% | 45.62% | N/A | N/A
CONV5 | Clip | 46.08% | 44.89% | 55.49% | 39.42% | 53.41%
FC6 | Clip | 47.00% | 41.78% | 55.12% | 39.00% | 54.85%

Table 4: Recognition results in terms of mAP for the Hollywood2 dataset. The proposed method (RMDN) is compared to the approaches reported by Mathe & Sminchisescu (2015) (named central bias and saliency sampling). Note that Mathe & Sminchisescu (2015) and RMDN do not use the same video classification model.

Class | Ground Truth: SalSampling | Ground Truth: our RMDN | Predicted: Central Bias | Predicted: SalSampling | Predicted: our RMDN
AnswerPhone | 28.1% | 21.8% | 23.3% | 23.7% | 29.8%
DriveCar | 94.8% | 89.2% | 92.4% | 92.8% | 91.6%
Eat | 67.3% | 59.4% | 58.6% | 70.0% | 49.1%
FightPerson | 80.6% | 80.9% | 76.3% | 76.1% | 79.2%
GetOutCar | 55.1% | 78.0% | 49.6% | 54.9% | 76.9%
HandShake | 27.6% | 58.6% | 26.5% | 27.9% | 47.0%
HugPerson | 37.8% | 27.5% | 34.6% | 39.5% | 37.9%
Kiss | 66.4% | 52.2% | 62.1% | 61.3% | 51.0%
Run | 85.7% | 85.5% | 77.8% | 82.2% | 83.2%
SitDown | 62.5% | 31.8% | 62.1% | 69.0% | 31.4%
SitUp | 30.7% | 38.0% | 20.9% | 29.7% | 39.7%
StandUp | 58.2% | 37.8% | 61.3% | 63.9% | 41.3%
Mean | 57.9% | 55.1% | 53.7% | 57.6% | 54.8%

Table 4 compares our action categorization results with those presented in Mathe & Sminchisescu (2015).
As we did before, we separate experiments that use the ground truth maps and those that use predicted saliency. The results of Table 4 show that the performance of our method (second and fifth columns) is around 2% lower than Mathe & Sminchisescu (2015). However, this is most likely explained by the differences in the type of features and classifier, and not by the differences in saliency map prediction methods. Indeed, we already established in Table 2 that our proposed saliency map predictor is more accurate than the one proposed in Mathe & Sminchisescu (2015). On the other hand, Mathe & Sminchisescu (2015) use a combination of many different features and a chi-square kernel SVM, while our method uses C3D features with a simple linear SVM classifier. Adding more non-linearities, especially for the concatenation experiment, would probably help, but we consider the experimentation with different types of action recognition features and classifiers out of the scope of this paper.

Finally, we perform an experiment to assess the generalization abilities of the learned saliency model to a different dataset, with classes and videos that have not been seen during its training. To this end, we used the attentional model trained on the Hollywood2 dataset to extract saliency maps on the UCF101 dataset. As saliency ground truth is not available for UCF101, we evaluate performance in terms of action recognition accuracy using the evaluation protocol and splits by Soomro et al. (2012). Table 5 summarizes the results. The proposed method (C3D + RMDN, eighth row) corresponds to the concatenation of the original C3D descriptor and the C3D descriptor with the input weighted by the saliency map, as was done in the Hollywood2 experiments. We compare our method with the results obtained using the C3D descriptor computed from the context only (seventh row) and other state-of-the-art methods (first row through sixth row). A linear SVM trained on C3D features computed from the context already outperforms most of the other methods (first row to fourth row). But training the linear SVM on a concatenation of context C3D features and those obtained by reweighting the video input with the RMDN saliency maps (eighth row) leads to a further improvement of 1.1%. This is an impressive result, since the RMDN was trained on the separate and small Hollywood2 dataset.

Since we noticed that the saliency maps of RMDN for UCF101 tend to be highly peaked around a single location in each frame, we added the trained central bias (already analyzed in Table 1). This has the effect of diffusing the saliency map with the central bias, thus enlarging the area of attention used by the recognition system. The result of this experiment, which is reported in the last row of Table 5, further improves the accuracy by 1.3%.

Table 5: Action categorization results in terms of 3-fold accuracy on the UCF101 dataset."}, {"section_index": "10", "section_name": "5 CONCLUSIONS", "section_text": "In this paper, we proposed a recurrent mixture density network for spatiotemporal visual attention. We showed that our model outperforms state-of-the-art methods for saliency prediction in videos. We have also shown that the saliency maps generated by our model can be leveraged to improve action categorization using a very simple procedure. This suggests that saliency can enrich the original video representation. The runtime overhead to estimate the saliency map is very small: only 0.01s added to the feature extraction time of 0.07s.

As future work, we plan to close the gap between RMDN and action recognition with a joint network. The idea is to have as output of the model both the saliency map at each time and the class of the action for the entire video. This can be combined with the idea of using the saliency map estimated at the previous time to weight the input for the current time. Putting together these two ideas in a single network would result in a joint model for saliency prediction and action recognition.

We thank Du Tran for helpful discussion about the code of the C3D network and its usage. We are grateful to Stefan Mathe for explaining the format of the eyetracking data and the protocol of the Hollywood2 experiment. This work was funded in part by NSF award CNS-1205521. We gratefully acknowledge NVIDIA for the donation of GPUs used for portions of this work.

Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.
L. Bazzani, N. de Freitas, H. Larochelle, V. Murino, and J-A. Ting. Learning attentional policies for object tracking and recognition in video with deep networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 937-944, June 2011.

Christopher M. Bishop. Mixture density networks. Technical Report, Aston University, 1994.

Souad Chaabouni, Jenny Benois-Pineau, Ofer Hadar, and Chokri Ben Amar. Deep learning for saliency prediction in natural video. CoRR, abs/1604.08010, 2016b.

Jennifer T. Coull. fMRI studies of temporal attention: allocating attention within, or towards, time. Cognitive Brain Research, 21(2):216-226, 2004. Neuroimaging of Interval Timing.

Y. Gitman, M. Erofeev, D. Vatolin, B. Andrey, and F. Alexey. Semiautomatic visual-attention modeling and its application to video compression. In Image Processing (ICIP), 2014 IEEE International Conference on, pp. 1105-1109, Oct 2014. doi: 10.1109/ICIP.2014.7025220.

Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Matthias Kümmerer, Lucas Theis, and Matthias Bethge. Deep Gaze I: boosting saliency prediction with feature maps trained on ImageNet. In ICLR, 2015.

Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In Advances in Neural Information Processing Systems 23 (NIPS 2010), pp. 1243-1251, Vancouver, Canada, 2010.

Guanbin Li and Yizhou Yu. Visual saliency based on multiscale deep features. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.

Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, and Tianming Liu. Predicting eye fixations using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 362-370, 2015.

Aravindh Mahendran and Andrea Vedaldi. Salient deconvolutional networks. In European Conference on Computer Vision, pp. 120-135. Springer, 2016.

Marcin Marszalek, Ivan Laptev, and Cordelia Schmid. Actions in context.
In IEEE Conference on Computer Vision & Pattern Recognition, 2009.

Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.

Aude Oliva and Antonio Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Comput. Vision, 42(3):145-175, May 2001.

Junting Pan and Xavier Giro i Nieto. End-to-end convolutional network for saliency prediction. CoRR, abs/1507.01422, 2015.

Dmitry Rudoy, Dan B. Goldman, Eli Shechtman, and Lihi Zelnik-Manor. Learning video saliency from human gaze using candidate selection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013.

Pierre Sermanet, Andrea Frome, and Esteban Real. Attention for fine-grained categorization. CoRR, abs/1412.7054, 2014.

K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.

Eleonora Vig, Michael Dorr, and David Cox. Space-variant descriptor sampling for action recognition based on saliency and eye movements. In European Conference on Computer Vision, pp. 84-97. Springer, 2012.

Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3551-3558, 2013.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 2048-2057, 2015.

Donggeun Yoo, Sunggyun Park, Joon-Young Lee, Anthony S. Paek, and In So Kweon. AttentionNet: Aggregating weak directions for accurate object detection. In The IEEE International Conference on Computer Vision (ICCV), December 2015.

Figure 2: Model for action recognition. The original clip of K frames is fed into a 3D convolutional network. The same clip is then weighted by the predicted saliency map estimated by our RMDN and then fed into the 3D convolutional network. The final clip-level representation is then concatenated. All the clips of a video are merged using pooling, and then a linear classifier can be trained."}, {"section_index": "11", "section_name": "SALIENCY FOR CLASSIFICATION", "section_text": "The proposed model for recognition is presented in Fig. 2. At each time t, we extract two representations: the context branch is given by the C3D representation of the original clip, while the soft attentional branch is given by the C3D representation of the input clip weighted by the saliency map. The two representations are then concatenated at the clip level and max-pooled over the video to obtain the final video-level descriptor. This video-level representation is then used as input to train the video classifier, which is a linear SVM in our experiments.

In our experiments, we also evaluated the option of weighting the convolutional feature map x_t instead of the input, as for example done by Sharma et al. (2016). However, we will see that soft-masking the input gives higher accuracy, probably because applying C3D's non-linear trans-
formation after the soft-weighting produces a representation that is less redundant with the original (non-masked) C3D representation."}, {"section_index": "12", "section_name": "B IMPLEMENTATION DETAILS", "section_text": "We used the pretrained C3D network (Tran et al., 2015) as the feature representation which is the input of the LSTM network. The convolutional layer before the fully-connected layers is used for saliency prediction, while the last fully-connected layer before the softmax is used for classification, since Tran et al. (2015) showed it to obtain the best performance.

The training of the RMDN is performed using RMSprop with an adaptive learning rate and gradient clipping. We start from a learning rate of 0.0003, and after 8 epochs it is reduced at each epoch with a decay factor of 0.95. The gradient is clipped with a threshold of 20. Dropout with a ratio of 0.5 is applied only on the hidden layer of the LSTM network before the MDN. We trained for 40 epochs, but training is stopped if there is no significant improvement of the loss. During training, temporal data augmentation is performed by clipping the videos to shorter videos of length 65 frames (which corresponds to 50 C3D descriptors, since the network needs a buffer of 16 frames for the first descriptor). The number of components of the GMM, C, is fixed to 20 for all the experiments. All the experiments were carried out using an NVIDIA Tesla K40 card.

After extracting the saliency maps and the feature representations on GPU, our recognition experiments were performed on CPU using a linear SVM. In order to compute the video-level representation, we performed max pooling of the clip-level representations of the video. For all the experiments, we used 20% of the training data as a validation set to find the regularization parameter of the SVM. We searched the parameter space on a grid between 10^-9 and 10^3 with a multiplicative step of 10^2. Finally, we retrain the SVM on all the training set (including the validation set) using the best cross-validated parameter.
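A minimal sketch of this optimization schedule (plain Python with NumPy, hyperparameter values taken from the text; the function names are ours, the RMSprop update itself is assumed to come from the training framework, and global-norm rescaling is one common interpretation of the clipping rule):

    import numpy as np

    def learning_rate(epoch, base_lr=3e-4, decay=0.95, constant_epochs=8):
        """Epochs are 1-indexed: constant for the first 8, then 0.95 decay per epoch."""
        return base_lr * decay ** max(0, epoch - constant_epochs)

    def clip_gradients(grads, threshold=20.0):
        """Rescale the gradients if their overall norm exceeds the threshold."""
        norm = sum(float((g ** 2).sum()) for g in grads) ** 0.5
        return grads if norm <= threshold else [g * (threshold / norm) for g in grads]
"}]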
ryAe2WBee
[{"section_index": "0", "section_name": "MULTI-LABEL LEARNING WITH SEMANTIC EMBED- DINGS", "section_text": "Liping Jing, MiaoMiao Cheng & Liu Yang\nBeijing Key Lab of Traffic Data Analysis and Mining Beijing Jiaotong University. 1000A\ngittens@icsi.berkeley.edu, mmahoney@stat.berkeley.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The multi-label learning problem is to learn to predict potentially multiple relevant labels given an instance. Instances that have multiple labels naturally occur in many application domains including multimedia information retrieval, tag recommendation, semantic scene classification, query categorization, gene function prediction, medical diagnosis, drug discovery, and marketing."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Multi-label learning aims to automatically assign to an instance (e.g., an image or. a document) the most relevant subset of labels from a large set of possible labels. The main challenge is to maintain accurate predictions while scaling efficiently on. data sets with extremely large label sets and many training data points. We propose. a simple but effective neural net approach, the Semantic Embedding Model (SEM). that models the labels for an instance as draws from a multinomial distribution parametrized by nonlinear functions of the instance features. A Gauss-Siedel. mini-batch adaptive gradient descent algorithm is used to fit the model. To handle. extremely large label sets, we propose and experimentally validate the efficacy. of fitting randomly chosen marginal label distributions. Experimental results on. eight real-world data sets show that SEM garners significant performance gains. over existing methods. In particular, we compare SEM to four recent state-of-. the-art algorithms (NNML, BMLPL, REmbed, and SLEEC) and find that SEM uniformly outperforms these algorithms in several widely used evaluation metrics. while requiring significantly less training time..\nA popular approach to the multi-label learning problem is to embed the labels in a low-dimensional. latent space via linear or local non-linear embeddings. The approach of Hsu et al.(2009) projects the label vectors to a random low-dimensional space, fits a regression model in this space, then. projects these predictions back to the original label space. Balasubramanian & Lebanon|(2012) use a. sparsity-regularized least squares reconstruction objective to select a small set of landmark labels that are used to predict the remaining labels. Bi & Kwok(2013) take a similar approach, with a greatly decreased computation cost, by posing the problem of selecting the landmark labels as one. of column subset selection and adopting the leverage score sampling approach (Boutsidis et al.. 2009). Recently,[Yu et al.(2014) and Jing et al. (2015) propose using trace norm regularization to identify a low-dimensional representation of the original large label space.Mineiro & Karampatziakis (2015) use randomized dimensionality reduction to learn a low-dimensional embedding that explicitly captures correlations between the instance features and their labels. These approaches, like other. linear embedding methods, assume that the label matrix is low-rank. 
However, the label matrix in most applications of multi-label learning is a sparse binary matrix, and thus is extremely likely to violate this low-rank assumption (Bhatia et al., 2015).

Rather than working with the original label and feature matrices, some methods work instead with label or feature similarity matrices, and seek to preserve the local structure of the data in the learned low-dimensional latent space. Tai & Lin (2010) use PCA on the label covariance matrix to extract a low-dimensional latent space for labels, and Chen & Lin (2012) extend this method to integrate feature information. Lin et al. (2014) apply PCA to a similarity matrix constructed using both label and feature information; this approach is time-consuming as it requires computing a large similarity matrix. Nam et al. (2014) introduce a neural network model to capture non-linear relationships between the input features and the labels. However, this approach is computationally infeasible when the number of possible labels is large. Similarly, Cisse et al. (2016) show that using a deep learning approach built on top of an informative partitioning of the label space gives good performance; the scalability of this method was not characterized. Prabhu & Varma (2014) propose a method to efficiently train a classification tree by minimizing the Normalized Discounted Cumulative Gain. Rai et al. (2015) assume that the label vectors are generated by sampling from a weighted combination of label topics, where the mixture coefficients are determined by the instance features.

Bhatia et al. (2015) propose a multi-phase algorithm (SLEEC) that first clusters the instances into a number of relatively small groups, learns label embeddings for each group via an SVD, and then trains linear regressors from the input features to the latent label factors for each group. SLEEC empirically outperforms previous state-of-the-art multi-label classifiers, but the label embedding in each group is learned from a nearest neighbor graph that is constructed solely from labelling information, ignoring the available feature matrix; the feature matrix has been shown repeatedly to be a source of useful information for label embedding (Chen & Lin, 2012; Lin et al., 2014; Yu et al., 2014; Jing et al., 2015).

Notation: In the sequel, n is the number of training instances, c is the cardinality of the set of possible labels, d is the dimensionality of the feature vectors, and r is the dimension of the learned latent space. The matrix X \in R^{n x d} contains the instance features, and Y \in {0, 1}^{n x c} indicates the labels assigned to each instance. We denote the number of observed labels for instance i with l_i = \sum_{k=1}^{c} Y_{ik}. The notations A_{i.} and A_{.j} respectively refer to the i-th row and j-th column of the matrix A. Unless otherwise specified, the notation f(A) denotes the elementwise application of an arbitrary function f to A, so for example exp(A)_{ij} = exp(a_{ij}).

Our Semantic Embedding Model (SEM) assumes that the underlying parameters determining the observed labels are low-rank, rather than that the observed label matrix is itself low-rank, and it uses a nonlinear model to fit the probability distributions over the labels, conditioned on the instance features.

SEM models the i-th row of Y as the result of l_i draws from a multinomial distribution:

Y_{i.} ~ Multinomial(l_i; P_{i1}, ..., P_{ic}), \quad where \; P_{ij} = \frac{\exp(h_{ij})}{\sum_{k=1}^{c} \exp(h_{ik})}, \quad i = 1, ..., n, \; j = 1, ..., c.   (1)
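A minimal NumPy sketch of this generative process is given below; the parameter matrix H is taken as given here (the next section specifies it as \sigma(XW)V^T + 1_n b^T), and note that a multinomial draw returns label counts rather than the binary Y used elsewhere.

    import numpy as np

    def sample_labels(H, l, rng=np.random.default_rng(0)):
        """Draw l_i labels for each instance from Multinomial(l_i; P_i) (Eq. 1)."""
        P = np.exp(H - H.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)     # P_ij = exp(h_ij) / sum_k exp(h_ik)
        return np.stack([rng.multinomial(l[i], P[i]) for i in range(len(l))])

    Y_counts = sample_labels(np.random.randn(5, 10), l=np.array([2, 3, 1, 4, 2]))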
The contribution of this paper is a scalable, accurate, and simple neural network approach to multi-label learning. Experiments establish that our method is faster and more accurate than SLEEC, the current state-of-the-art scalable algorithm.

The parameter matrix H = UV^T + 1_n b^T is the sum of label priors b \in R^c and the product of explanatory latent factors associated with the instances (U \in R^{n x r}) and the labels (V \in R^{c x r}). Further, we allow the latent factors associated with each instance to be a nonlinear function of the features associated with that instance, U = f(X, W) for some W to be learned. We note that if f(X, W) = XW, SEM could be viewed as fitting a Bayesian Exponential Family PCA (Mohamed et al., 2009). However, throughout this paper we take f(X, W) = \sigma(XW), where \sigma(X) = (1 + \exp(-X))^{-1} denotes the elementwise application of the sigmoid function, as we find this gives good results; with this choice, SEM is more naturally viewed as a neural network model.

We fit the SEM parameters by maximizing the likelihood of the observed labels. This is equivalent to minimizing the sum of the KL divergences between the empirical label distributions for each instance and the label distributions predicted by the model (Pawitan, 2001). Accordingly, we define the empirical label distribution matrix G, whose i-th row satisfies G_{i.} = Y_{i.}/l_i, then minimize the row-wise Kullback-Leibler distance (Yang et al., 2011) between G and P:

J_{G||P} = \sum_{i=1}^{n} \sum_{j=1}^{c} G_{ij} \log \frac{G_{ij}}{P_{ij}} \propto -\sum_{i=1}^{n} \sum_{j=1}^{c} G_{ij} \log P_{ij}.   (2)

Recalling that

P_{ij} = \frac{\exp(h_{ij})}{\sum_{k=1}^{c} \exp(h_{ik})} = \frac{\exp((\sigma(XW)V^T)_{ij} + b_j)}{\sum_{k=1}^{c} \exp((\sigma(XW)V^T)_{ik} + b_k)},

some algebraic manipulations give the final objective

J(W, V, b) = J_{G||P} = -\sum_{i=1}^{n} \sum_{j=1}^{c} G_{ij} \log \frac{\exp(\sigma(XW)_{i.}(V^T)_{.j} + b_j)}{\sum_{k=1}^{c} \exp(\sigma(XW)_{i.}(V^T)_{.k} + b_k)}
= -\sum_{i=1}^{n} \sum_{j=1}^{c} G_{ij} (\sigma(XW)_{i.}(V^T)_{.j} + b_j) + \sum_{i=1}^{n} \log \sum_{k=1}^{c} \exp(\sigma(XW)_{i.}(V^T)_{.k} + b_k)
= -Tr(G(\sigma(XW)V^T + 1_n b^T)^T) + 1_n^T \log(\exp(\sigma(XW)V^T + 1_n b^T) 1_c).   (3)

Thus the SEM parameters are learned by solving the optimization problem

\min_{W, V, b} J(W, V, b).   (4)

Here V \in R^{c x r} are the representations of the labels in a latent semantic space, W \in R^{d x r} controls the nonlinear mapping from the instance features to the same semantic space, and the offsets b \in R^c allow for label-specific offsets in the mapping from the semantic space to the log probabilities.
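The trace/log-partition form of Eq. 3 translates directly into a few lines of NumPy; the following sketch (with illustrative names, and a standard max-subtraction for numerical stability that the equations leave implicit) computes J(W, V, b):

    import numpy as np

    def sem_objective(W, V, b, X, G):
        """J = -Tr(G H^T) + 1_n^T log(exp(H) 1_c), with H = sigma(XW)V^T + 1 b^T."""
        U = 1.0 / (1.0 + np.exp(-X @ W))      # U = sigma(XW), shape n x r
        H = U @ V.T + b[None, :]              # n x c parameter matrix
        Hmax = H.max(axis=1, keepdims=True)   # for numerical stability
        logZ = np.log(np.exp(H - Hmax).sum(axis=1)) + Hmax.ravel()
        return float(-(G * H).sum() + logZ.sum())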
"}, {"section_index": "3", "section_name": "3 MODEL FITTING", "section_text": "The optimization problem (4) is non-convex. To solve it efficiently, we use a Gauss-Seidel approach combined with mini-batching.

Namely, we cyclically update each of W, V, and b using AdaGrad (Duchi et al., 2011) while keeping the other two variables fixed. We compute the gradients using mini-batches. To state the expressions for the gradients with respect to the model parameters, we introduce some helpful notation: A \circ B denotes the entry-wise product of two matrices, and M = \sigma(XW) \circ (1 - \sigma(XW)). The gradient of the objective with respect to W is then

\mathcal{G}(W) = X^T (M \circ ((P - G)V)),

where P is computed from the current parameters as in (2). The AdaGrad updates take the form

W^{(t)} = W^{(t-1)} - \rho \, \alpha_W \circ \mathcal{G}(W^{(t-1)})   (8)

\alpha_W = (\epsilon + \sum_{m=1}^{t-1} \mathcal{G}(W^{(m)}) \circ \mathcal{G}(W^{(m)}))^{-1/2}   (9)

where \epsilon and the learning rate \rho determine how much an entry W_{ij} is updated during the first timestep. V^{(t)} and b^{(t)} are computed according to similar updating rules obtained from (8) and (9) by substituting \mathcal{G}(W) with \mathcal{G}(V) (or \mathcal{G}(b)), W with V (or b), and \alpha_W with \alpha_V (or \alpha_b).

A listing of the proposed algorithm is given in Algorithm 1. Its computational complexity is O(Tnr(d + c)), where T is the number of epochs. We note that the gradient calculations in lines 7-9 of Algorithm 1 are amenable to parallelization.

Algorithm 1 Mini-Batched Gauss-Seidel Adaptive Gradient Descent for learning SEM parameters

Although Algorithm 1 runs in time linear in the dimensions of the model parameters and the input datasets, it can be computationally expensive when there are more than a few thousand labels. To further reduce the running time of our algorithm, we note that in practice, each instance is often associated with l_i << c labels. Accordingly, at each timestep t we fit only a randomly chosen marginal of each label distribution, restricted to a label set L_i^{(t)} consisting of the positive labels PL_i of instance i together with a randomly sampled set NL_i of negative labels:

J_{marginal}^{(t)}(W, V, b) = -\sum_{i=1}^{n} \sum_{j \in L_i^{(t)}} G_{ij} \log \frac{\exp(\sigma(XW)_{i.}(V^T)_{.j} + b_j)}{\sum_{k \in L_i^{(t)}} \exp(\sigma(XW)_{i.}(V^T)_{.k} + b_k)}

Note that J^{(t)} is a random function that changes at each timestep. Minimizing this stochastic objective effectively seeks SEM parameters which fit all the randomly sampled marginals encountered during training. Thus it is important to sample the sets NL_i so that the selected marginals capture non-trivial information about the label distributions. One can imagine that uniformly sampling the negative labels NL_i will not provide very informative marginals. As an improvement on this naive scheme, we sample negative labels with probability proportional to their frequency of occurrence in the training data set. The number of negative labels is set to be a fixed multiple of the number of positive labels |PL_i| = l_i. Further, when m > 1, to facilitate efficient BLAS operations while mini-batching, we use the same marginals for each instance in the same minibatch, i.e., we fit marginals over L^{(t)} := \cup_{i \in I_t} L_i^{(t)}, where I_t denotes the set of instances in the current minibatch.

In the experiments presented in Section 4, we found that a negative-to-positive sampling ratio of around 10 suffices when c is relatively small, and around 100 suffices when c is on the order of tens of thousands.

We present two methods for predicting the labels for a new instance x \in R^d given the fitted SEM parameters.

The first uses the generative model behind SEM: form h = \sigma(x^T W)V^T + b^T and note that the probability that the j-th label is assigned to that instance is given by

P(y_j = 1) = \exp(h_j) / \sum_{k=1}^{c} \exp(h_k).

Accordingly, we assign the most probable labels to x. We call this prediction scheme the direct SEM method; it simply requires choosing the labels corresponding to the largest entries of h.

The second method builds a kernel classifier in the semantic space obtained from the SEM factorization. Following Mineiro & Karampatziakis (2015), a classifier is trained on these semantic representations by solving the optimization problem

\min_{Z \in R^{c x s}} \sum_{i=1}^{n} \ell(Y_{i.}, Z\phi(x_i; W)) + \lambda \|Z\|_F

with the random Fourier feature map

\phi(x; W) = \cos(\Omega \, \sigma(W^T x) + \theta),

where \Omega \in R^{s x r} is a matrix of i.i.d. standard Gaussians and \theta \in [0, 2\pi)^s is a vector of i.i.d. uniform samples from [0, 2\pi).

At test time, the predicted label probabilities for an instance x are given by Z\phi(x; W), so we assign the most probable labels according to this model. We refer to this scheme as the kernelized SEM method.
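To make the fitting and prediction steps concrete, the following NumPy sketch performs one Gauss-Seidel AdaGrad update of W (the gradient \mathcal{G}(W) = X^T(M \circ ((P - G)V)) is reconstructed from the objective; the V and b steps are analogous) and implements both prediction schemes. The learning rate rho, the random Fourier parameters Omega and theta, and the classifier Z are assumed to have been chosen or fit already, and the sketch covers the full-distribution objective rather than the sampled-marginal variant.

    import numpy as np

    sigmoid = lambda A: 1.0 / (1.0 + np.exp(-A))

    def softmax_rows(H):
        E = np.exp(H - H.max(axis=1, keepdims=True))
        return E / E.sum(axis=1, keepdims=True)

    def adagrad_step_W(W, accum, X, V, b, G, rho=0.1, eps=1e-8):
        """One update of W with V, b held fixed (Eqs. 8-9)."""
        U = sigmoid(X @ W)
        P = softmax_rows(U @ V.T + b[None, :])
        M = U * (1.0 - U)                         # M = sigma(XW) o (1 - sigma(XW))
        grad = X.T @ (M * ((P - G) @ V))          # G(W) = X^T [M o ((P - G) V)]
        accum += grad * grad                      # running sum of squared gradients
        W -= rho * grad / np.sqrt(eps + accum)    # elementwise AdaGrad step sizes
        return W, accum

    def predict_direct(x, W, V, b, k):
        """Direct SEM: rank labels by h = sigma(x^T W) V^T + b^T."""
        h = sigmoid(x @ W) @ V.T + b
        return np.argsort(-h)[:k]

    def predict_kernelized(x, W, Z, Omega, theta, k):
        """Kernelized SEM: random Fourier features of the semantic embedding."""
        phi = np.cos(Omega @ sigmoid(x @ W) + theta)
        return np.argsort(-(Z @ phi))[:k]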
the computationally intensive portions of REmbed, SLEEC and NNML are implemented in C; by. way of comparison, our algorithms are entirely implemented in Matlab. Due to there being severa parameters for each method, we hand-tuned the parameters for each dataset as suggested by the. authors. All methods were run in MATLAB on a Windows server with 4GB memory and four 2.3GHz. CPUs with eight cores.\nThe second method builds a kernel classifier in the semantic space obtained from the SEM fac. torization. FollowingMineiro & Karampatziakis (2015), a classifier is trained on these semantic representations by solving the optimization problem.\n(x) = cos(x+ 0)\nAt test time, the predicted label probabilities for an instance x are given by Zy(xW), so we assign. the most probable labels according to this model. We refer to this scheme as the kernelized SEM method.\nTable[1summarizes the eight datasets used in our experiments. Here ntrain and ntest are the numbers of training and testing instances, d is the number of features, c is the number of labels/classes, and the avg(l,) column reports the average number of labels per instance. In these datasets, the number of labels varies from 23 to 30938, the average label cardinality varies from 2.508 to 19.020, and the number of instances in different classes varies over a large range. Thus predicting the labels assignments correctly over this collection of datasets is a challenging task.\nDataset Domain Ntrain Ntest d c avg(li) MSRC image 296 295 512 23 2.508 Corel5K image 4500 500 499 374 3.522 SUN image 12906 1434 512 102 15.526 Delicious text 12920 3185 500 983 19.020 EurLex-sub text 17413 1935 5000 201 2.213 Mediamill video 30993 12914 210 101 4.736 Eurlex-des text 17413 1935 5000 3993 5.31 WikilOK text 14146 6616 101938 30938 18.64\nThe prediction performance for each algorithm is evaluated according to widely-used metrics in the field of multi-label classification, viz., label-based Macro-F1 (MaF1) and Micro-F1 (MiF1) and instance-based Precision-at-k (P@ k, esp. P@1 and P@3) (Zhang & Zhou2014). MaF1 and MiF1 require predefining a threshold to determine the number of labels to be assigned to the testing data In our experiments, the number of labels assigned to each testing instance was set according to its ground truth.\nTable 2: The classification performance of six multi-label classification algorithms (NNML, BMLPI REmbed, SLEEC and the proposed SEM and SEM-K). The best and second best results are respec tively bolded and underlined for each evaluation measure..\nTable 3: The running times, in seconds, of six multi-label classification algorithms (NNML, BMLPI REmbed, SLEEC and the proposed SEM and SEM-K) for differing training sizes on the Mediamil dataset.\nFirst we compare the performance on six multi-label learning problems with c < 1000. To fit botl. SEM models, we take the number of epochs be 30 and the mini-batch size be 200---i.e., T = 30 an m = 200 in Algorithm|1 -and because c is small, we fit the full label distributions. The classificatio. performances of our SEM algorithms and the baseline methods are shown in Table[2] SEM or SEM-I outperform the alternative algorithms in most cases.\nTable 3|compares the running times of the algorithms as the size of the dataset is increased, using. MediaMill. We see that SEM is the fastest model, followed by REMBED, then closely by SEM-K the remaining three models are significantly more costly. 
It is clear that NNML, the previous neural\nMaF1 MiF1 P@1 P@3 MaF1 MiF1 P@1 P@3 MSRC Corel5K NNML 0.4086 0.5944 0.7356 0.5073 0.0547 0.2967 0.4020 0.3047 BMLPL 0.4592 0.6199 0.7017 0.5288 0.0315 0.2779 0.3940 0.2820 REmbed 0.3537 0.5128 0.5322 0.4384 0.0450 0.2144 0.3060 0.2247 SLEEC 0.4973 0.6314 0.7353 0.5243 0.0534 0.3188 0.4360 0.3287 SEM 0.5064 0.6173 0.7220 0.5333 0.0623 0.3188 0.4320 0.3293 SEM-K 0.5770 0.6492 0.7458 0.5525 0.0589 0.2649 0.3600 0.2773 SUN Mediamill NNML 0.2807 0.5248 0.9421 0.8580 0.0819 0.5890 0.8260 0.6675 BMLPL 0.1897 0.4766 0.9024 0.8001 0.0855 0.6012 0.8478 0.6854 REmbed 0.3408 0.5125 0.9393 0.8591 0.2634 0.6371 0.8741 0.6988 SLEEC 0.2935 0.5256 0.9484 0.8656 0.2851 0.6546 0.8899 0.7158 SEM 0.3648 0.5486 0.9365 0.8642 0.1593 0.6296 0.8746 0.6996 SEM-K 0.3703 0.5466 0.9575 0.8787 0.2570 0.6717 0.8953 0.7278 Delicious Eurlex-sub NNML 0.1721 0.3963 0.6687 0.6169 0.5761 0.8487 0.9173 0.6267 BMLPL 0.1061 0.3739 0.6378 0.5772 0.1459 0.6011 0.6789 0.4697 REmbed 0.1549 0.3713 0.6353 0.572 0.5335 0.8031 0.8785 0.5977 SLEEC 0.1257 0.3859 0.6674 0.6112 0.5433 0.8461 0.9152 0.6191 SEM 0.1941 0.3980 0.6727 0.6162 0.5652 0.8339 0.8971 0.6188 SEM-K 0.1675 0.3886 0.6658 0.6112 0.5807 0.8494 0.9188 0.6269\nntrain NNML BMLPL REMBED SLEEC SEM SEM-K 439 327.57 10.29 2.07 16.11 0.60 1.50 1756 1333.91 20.35 3.02 57.16 2.41 4.29 3073 2363.02 48.2 4.14 145.88 4.36 6.99 4391 3264.79 41.72 5.45 227.76 6.65 10.10 8781 4428.09 84.09 10.83 815.66 12.29 21.73 13172 5170.00 119.09 17.04 1041.07 18.39 26.49 17563 5170.17 185.05 20.90 1692.7 24.22 42.21 21954 5297.75 225.96 44.20 1772.52 30.10 50.64 26344 5947.94 235.93 52.75 1985.82 35.95 59.42 30735 6604.93 275.06 58.74 2181.48 41.37 61.30\nnetwork approach to multi-label learning costs the most. In the other five algorithms, the latent spac dimensionality (r) is set to be 50. SLEEC is expensive because it constructs the nearest neighbc graph among training data and computes the top r eigenvectors of the corresponding similarity matri which costs O(n2r + d2r). REmbed is efficient because its main cost is to find the singular vector of a c (r + q) matrix (here c is the number of labels and q is a small integer), but its performanc is inferior to SEM-K. The BMLPL code provided by the author applies SVD to the training data t initialize than model parameters and then uses conjugate gradient to update the parameters, thus costs much more than REmbed and our proposed methods.\nWe proposed using SEM to fit marginals rather than the entire label distribution when c is large. for computational efficiency. To judge the effectiveness of this proposal, we compare the accuracy and running times of the SEM and SEM-K models with baselines on EurLex-des and Wiki10K, twc datasets with c > 1000. As baselines, we use REmbed and SLEEC in accordance with the above discussion which showed that these two methods are efficient and/or have good performance..\nThe hyperparameters in SLEEC were set according to the original authors' code: r for EurLex-des and Wiki10K is 100 and 75 respectively, and 3 clusters are used for Eurlex-des and 5 are used for Wiki10K. To fit the SEM models, we used the same value of r as SLEEC on these two datasets and used 10 training epochs. For REmbed, the latent space size r was tuned via cross-validation; r = 300. for Eurlex-des and r = 150 for Wiki10K. The number of Random Fourier Features is 2000 for both REmbed and SEM-K. The latent space size r in SEM is same with SLEEC. The mini-batch sizes and. 
number of epochs are set to be 200 and 10 respectively when fitting the SEM models. The number of threads is set to be 8 for all methods..\nTable 4: The classification performance of five methods (REmbed, SLEEC and the proposed SEM. and SEM-K with two values of 3) on the Eurlex-des and Wiki10K datasets. The best and second bes results are respectively bolded and underlined for each evaluation metric..\nREmbed SLEEC SEM SEM-K SEM-K ( = 500) ( = 10) ( = 60) P@1 0.7299 0.8017 0.7107 0.8024 0.8135 Eurlex-des P@3 0.6064 0.6539 0.5874 0.6621 0.6714 P@5 0.5060 0.5375 0.4916 0.5493 0.5563 REmbed SLEEC SEM SEM-K SEM-K ( = 1200) ( = 10) ( = 100) P@1 0.6963 0.8554 0.8517 0.8582 0.8671 Wiki10K P@3 0.5790 0.7359 0.7133 0.7278 0.7385 P@5 0.4929 0.6310 0.6171 0.6236 0.6353\nTable 5: The running times, in seconds, of five methods (REmbed, SLEEC and the proposed SEM and SEM-K for two values of 3) on the Eurlex-des and wiki10K datasets.\nCWOVaUCSOO on lne Eurlex-aes and Wlkllo dalasels. REmbed SLEEC SEM SEM-K SEM-K ( = 500) ( = 10) (B = 60) Eurlex-des 358.63 1571.30 1210.30 167.10 250.77 Wikil0K 2858.96 2497.00 2003.43 646.48 769.18\nFigure|1|illustrates the impact of the choice of on the prediction performance (in terms of P@ 1 of SEM and SEM-K. The performances of SLEEC and REmbed are included for comparison. The hyperparameters of SLEEC, REmbed and SEM were set as in Section4.4\nTable4|compares the classification performances of the methods on these two datasets. It is clear that. SEM-K with a small set of negative labels obtains better performance than both REmbed and SLEEC Table|5[shows that, additionally, the SEM-K models are fit much faster than than the other models.\nIt is evident that the performance of SEM increases significantly in a monotonic fashion with . However, SEM-K is insensitive to once it passes a dataset-dependent threshold (e.g., = 60\n0.8 0.85 0.75 0.8 0.7 0.65 0.7 P 0.6 0.55 REmbed 0.6 -REmbed 0.5 SLEEC -SLEEC SEM *-SEM 0.45 I SEM-K +SEM-K 0.5 0 100 200 300 400 500 600 0 200 400 600 800 1000 1200 Negative sampling rate ( Negative sampling rate () (a) Eurlex-des (b) Wiki10K\nFigure 1: The P@1 performance of SEM and SEM-K as a function of , in comparison to the performances of SLEEC and REmbed on the (a) Eurlex-des and (b)Wiki10K datasets.\nfor Eurlex-des and = 100 for Wikil0K). Note that on Wikil0K, even the simpler direct SEN outperforms REmbed when there are sufficient negative labels.\nFigure|2|illustrates the effect of on the running times of SEM and SEM-K. Note that the additiona time to fit the classifier in the semantic space required by SEM-K is negligible compared to the time it takes to first fit the direct SEM model..\n1000 SEM SEM SEM-K 2000 SEM-K 800 1500 600 1000 400 200 500 10 20 40 6080100120200300400500600 10 20 40 608010020040060080010001200 Negative sampling rate () Negative sampling rate () (a) Eurlex-des (b) Wiki10K\nFigure 2: Running time of SEM-K under varying"}, {"section_index": "6", "section_name": "5 CONCLUSION", "section_text": "There are other important ways in which the proposed SEM methods can be compared to the baseline multi-label learning methods, including their performance as a function of the latent space dimensionality and as a function of the amount of training. Due to space constraints, a discussion of these two concerns and the convergence behavior of Algorithm[1is provided in the Supplementary material.\nWe proposed a new semantic embedding mode1 (SEM) for handling the multi-label learning task. 
A framework based on Gauss-Siedel mini-batched adaptive gradient descent was proposed for efficiently. solving the non-convex optimization problem required to learn the SEM parameters. For large label. sets, we proposed fitting the SEM to marginal distributions rather than the full label distribution. A. series of experiments on eight real-world datasets empirically demonstrated that the proposed method. is superior to state-of-the-art methods in terms of prediction performance and running time..\nW. Bi and J. Kwok. Efficient multi-label classification with many labels. In Proc. of ICML, 2013\nA. Rahimi and B. Recht. Random features for large-scale kernel machines. In Proc. of NIPs, 2007\nC. Boutsidis, M. Mahoney, and P. Drineas. An improved approximation algorithm for the column subset selection problem. In Proc. of ACM SODA, 2009. Y. Chen and H. Lin. Feature-aware label space dimension reduction for multi-label classification. In Proc. of NIPS, 2012. M. Cisse, M. Al-Shedivat, and S. Bengio. ADIOS: Architectures Deep in Output Space. In Proc. of ICML, 2016. J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning and Research, 12:2121-2159, 2011. D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In Proc. of NIPS, 2009. L. Jing, L. Yang, J. Yu, and M. Ng. Semi-supervised low-rank mapping learning for multi-label classification. In Proc. of CVPR, 2015. Z. Lin, G. Ding, M. Hu, and J. Wang. Multi-label classification via feature-aware implicit label space encoding. In Proc. of ICML, 2014. P. Mineiro and N. Karampatziakis. Fast label embeddings via randomized linear algebra. In Proc. of ECML, 2015. S. Mohamed, Z. Ghahramani, and K. A. Heller. Bayesian Exponential Family PCA. In Proc. of NIPS, 2009. J. Nam, J. Kim, E. Mencia, I. Gurevich, and J. Furnkranz. Large-scale multi-label text classification - revisiting neural networks. In Proc. of ECML, 2014. Y. Pawitan. In All Likelihood: Statistical Modeling and Inference Using Likelihood. Oxford University Press, 2001. Y. Prabhu and M. Varma. Fastxml: a fast. accurate and stable tree-classifier for extreme multi-label learning. In Proc. of ACM SIGKDD, 2014."}, {"section_index": "7", "section_name": "A. Effect of Latent Space Dimensionality", "section_text": "It can be seen that the latent space dimensionality r plays an important role to learn latent factors V and a feature mapping matrix W in our proposed methods, as it does in the three baselines BMLPI REmbed and SLEEC. In order to investigate this dependence, we conducted a series of experiment on the training data sets using 5-fold cross-validation, comparing BMLPL, REmbed, SLEEC and ou proposed SEM and SEM-K.\n0.4 0.65 0.38 0 d8 0.36 0.6 Miir 0.34 O-BMLPL O-BMLPL 0.55 -REmbed -REmbed 0.32 SLEEC -SLEEC SEM SEM O-SEM-K 0.3 O-SEM-K 0.5 5 100 200 300 400 450 5 100 200 300 400 450 Latent space dimensionality (r) Latent space dimensionality (r) (a) Delicious-P@1 (b) Delicious-MiF1\n0.4 0.65 0.38 8 0.36 0.6 Mir P 0.34 O-BMLPL O-BMLPL 0.55 -REmbed -REmbed x-SLEEC 0.32 -SLEEC SEM SEM OSEM-K 0.3 O-SEM-K 0.5 1 5 100 200 300 400 450 5 100 200 300 400 450 Latent space dimensionality (r Latent space dimensionality (r\nFigure 3: The effect of the latent space dimensionality r on BMLPL, REmbed, SLEEC, SEM anc SEM-K in terms of MiF1 and P@ 1 on the Delicious dataset.\nIn this experiment, we take Delicious dataset as an example. The training data is separated into five. 
folds where four folds are used as training and one fold as validating, and the averaged results ir. terms of P@1 and MiF1 are given by Figure[3] It can be seen that their performances usually improve with increasing r until they reach an optimum value. However, once r becomes too large, thei. performances degrade. This is reasonable: when r is too small, the learned parameters cannot full. characterize the hidden semantic structure in the classification problem, while when r is too large, the. benefits of dimensionality reduction are lost, as the model begins to over-fit to the idiosyncrasies of the training data rather than capturing the semantic structure common to both the training and validatior data. Usually, these methods could obtain good performance at small r, say 45 for Delicious datasel\n0.9 0.7 0.88 0.65 0.86 0.84 MMir 0.6 P *-NNML *-NNML 0.82 O-BMLPL O-BMLPL 0.8 -REmbed -REmbed 0.55 xSLEEC SLEEC 0.78 +SEM +-SEM O-SEM-K O-SEM-K 0.76 0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0 0.1 0.20.3 0.4 0.5 0.6 0.7 Training data size (ratio over whole data) Training data size (ratio over whole data)\n0.9 0.7 0.88 0.65 0.86 0.84 ? 0.6 20.82 - -NNML NNML f BMLPL O-BMLPL 0.8 -REmbed -REmbed 0.55 SLEEC SLEEC 0.78 -SEM SEM O-SEM-K O-SEM-K 0.76 0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Training data size (ratio over whole data) Training data size (ratio over whole data) (a) P@1 (b) MiF1\nFigure 4: Effect of varying the training data size, as a fraction of the combined test and training data on five multi-label learning methods in terms of P@1 and MiF1 on the Mediamill dataset.\nMeanwhile, we studied the label prediction performance as a function of the amount of labeled. training data. In this experiment, we fixed the testing data size, and randomly selected training data from the training set so that the training data size varies from 1% to 70% of the combined training. and testing data. In order to avoid the presence of empty categories and instances with no labels, at. least one instance is kept for each label and at least one label is kept for each instance during this sampling process. For each fixed size of the training set, the desired amount of data is randomly\nsampled ten times, and the resulting average P@1 and MiF1 on the testing data are recorded. Durin. training, the latent dimensionality parameter r is selected via 5-fold cross-validation."}, {"section_index": "8", "section_name": "C. Convergence", "section_text": "In order to demonstrated the convergence of the proposed method, we show the value of objective function (4) (at r = 45) via Figure 5(a) and the prediction result (P@1) via Figure 5(b) along with the number of passes to the dataset (i.e., t in Algorithm[1). 
It can be seen that SEM could be convergent and the prediction performance becomes stable in less than 50 epochs, which will leverage SEM dealing with large-scale data.\n9.510 0.7 9 0.65 8.5 ld 0.6 7.5 0.55 6.5 0.5+ 60 50 100 150 200 250 300 0 50 100 150 200 250 300 Iteration Iteration (a) Convergence curve (b) P@1 curve\nFigure 5: Performance of the proposed SEM method (with r = 45, p = 0.1) on the Delicious dataset a) objective function value to minimum and b) prediction result in terms of P@1, where x-axis represents the number of passes to the dataset..\nFigure4|shows these results for the Mediamill dataset which contains the largest number of instances As expected, the performance of all the methods is positive correlated with the size of the training data set, and we also see that the proposed SEM-K uniformly outperforms the other methods regardless of the training data size. As it is often expensive to obtain large labeled data sets in real applications this observation suggests that SEM-K is a better choice for these situations.\n9.5 X10 0.7 9 ***** +*+*++++*+*+*+ 0.65 8.5 8 ld 0.6 oobeeeree 7.5 0.55 6.5 6 50 100 150 200 250 300 50 100 150 200 250 300 Iteration Iteration"}]
HJtN5K9gx
[{"section_index": "0", "section_name": "LEARNING DISENTANGLED REPRESENTATIONS IN DEEP GENERATIVE MODELS", "section_text": "N. Siddharth, Brooks Paige, Alban Desmaison, Frank Wood & Philip Torr\nDeep generative models provide a powerful and flexible means to learn com- plex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of dependency structures, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evalu- ate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Reasoning in complex perceptual domains such as vision often requires the ability to effectivel learn flexible representations of high-dimensional data, interpret the representations in some form and understand how the representations can be used to reconstruct the data. The ability to lear representations is a measure of how well one can capture relevant information in the data. Being able to interpret the learned representations is a measure of extracting consistent meaning in ar effort to make sense of them. Having the ability to reliably reconstruct the data, a tool for predictive synthesis, can aid in model diagnosis, enable successful transfer learning, and improve generality Such tasks are typically best addressed by generative models, as they exhibit the flexibility requirec to satisfy all three facets. Discriminative models primarily attend to the first two, learning flexibl representations and conforming to some interpretable space (e.g. classification domain) but don' perform the predictive synthesis task.\nProbabilistic graphical models (Koller & Friedman2009f Murphy[2012) are a framework for gen erative modelling that enables specifying a joint probability distribution on a richly semantic repre sentation space. As good a fit as they are for specification and representation, the learning process for both the analysis and synthesis tasks typically suffers in complex perceptual domains such as vision. This is because constructing a generative model requires explicitly specifying the condi- tional distribution of the observed data given latent variables of interest. In practice, designing such\nNoah D. Goodman Department of Psychology. Stanford University. 
CA 94305, USA"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "likelihood functions by hand is incredibly challenging, and applying generative models to visio data often requires extensive and significant feature engineering to be successful. One approac to alleviate some of this hardship involves the development of deep generative models: generatiy models that employ neural networks to learn, automatically from data, the unknown conditional di tribution in the model. They function as flexible feature learners, where the features are encoded i the posterior distribution over the latent variables in the model. Recent work exploring the effe tiveness of such models (e.g.Kingma & Welling (2014); Kulkarni et al.(2015b);Goodfellow et a (2014)) has shown considerable promise in being able to address the fundamental issues in pe forming this task. These models however are typically unsupervised, learning representations tha are not directly amenable to human interpretation. Any interpretability or disentanglement of th learned representation is observed or extracted after learning has been performed, by exploring th latent space along its non-specific axes of variation. A more recent approach by Chen et al.(2016 involves imposition of information-theoretic constraints to better separate factors of variation, bi here too, any interpretability is only established post facto.\nWhile such approaches have considerable merit, particu-. larly when faced with the absence of any information about the data, when there are aspects of variation in the data that. can be characterised effectively, using and being able to. express these can often be desirable. For example, when. learning representations for images of house numbers, hav-. ing an explicit \"digit' latent variable helps capture a mean-. ingful axis of variation, independent of other aspects. We. also often want to interpret the same data in different ways. depending on context: for a given image of a person, do we. care about the identity, lighting, or indeed any other facets. of the scene (c.f. Figure 1). In these situations, not being. able to enforce context is something of a handicap..\nVariational autoencoders (Kingma & Welling2014]|Rezende et al.f2014) simultaneously train both a probabilistic encoder and decoder for a dataset x. The central idea is that an encoding z can be. considered a latent variable which allows describing a decoder as a conditional probability density pe(x|z). This is typically a distribution with parameters defined as the output of a determinis-. tic multi-layer neural network (itself with parameters 0) which takes z as input. Placing a weak prior over z, the corresponding probabilistic encoder can be interpreted as the posterior distribution. pe(z | x) pe(x | z)p(z). Estimating parameters 0 in this model is challenging, as is performing. the posterior inference necessary to encode data. The variational Bayes approach learns an approx-. imate encoder qo(z | x), called an \"inference network\"' or a \"recognition network\", which aims to. approximate the posterior distribution pe(z x). Then, rather than fitting parameters 0 by maxi-. mizing the marginal likelihood pe(x), the variational approach maximizes an evidence lower bound. (ELBO) (, 0; x) log pe(x), defined with respect to both decoder 0 and encoder parameters..\n(o,0;x) =Eqs(z|x)[logpe(x,z)-logq(z|x)]\nOne line of work to embed structure into the latent space z such that it exhibits disentangled fea tures, is through partial supervision. 
This is either in terms of labelled data (Sohn et al.]2015].\nFigure 1: Variation along (top) light Ing and (bottom) identity axes.\nIn this paper, we seek to combine the best of both worlds: providing the facility to describe the struc-. tural constraints under which we would like to interpret the data, while using neural nets to capture. variation for aspects we cannot, or choose not to, explicitly model. By structural constraints, we re-.. fer to the (arbitrary) dependencies one would like to employ in the recognition model, particularly ir. regard to there being consistent interpretable semantics of what the variables in the model represent.. In particular, we set up our framework in the context of variational autoencoders (VAE Kingma &. Welling(2014); Rezende et al.(2014)), as a means for semi-supervised learning in deep generative models (Kingma et al.||2014). We provide an alternate formulation of the variational objective and a. modified training procedure which permits us to explore a wide space of recognition networks to use. as probabilistic encoders. In particular we make no mean-field assumptions for our recognition net works, allowing arbitrary hierarchical and structured-graphical-model representations, employing. both continuous and discrete latent variables that can be alternately observed, or left unobserved..\nor curriculum-learning schemes (Kulkarni et al.|2015b) which explicitly disentangle different fac tors.Kingma et al.(2014) explore semi-supervised learning in the VAE setting by factoring the. latent space to learn a joint classification model qo(y x) and recognition model qs(z x). This. is done by separating the latent space into structured, interpretable components y and unstructurec components z, analytically marginalising variables out where discrete. Sohn et al.(2015) perforn fully-supervised learning in VAEs by transforming an unconditional objective into one where the data conditions both the (unstructured) latent and the (structured) labels. In contrast to|Kingma et al (2014), the learning objective is a lower bound on the conditional marginal likelihood pe(x y). conditioning the learned VAE on the values of the labelled data. Both of these approaches effec tively require the label space y to be discrete and finite. Kulkarni et al.(2015b) attend to weakly supervised learning with VAEs through a novel training procedure that uses data clustered intc. equivalence classes along different axes of variation. They then constrain different parts of the laten space to account for changes along a single axis, by training with data from a particular equivalenc. class. An advantage of this approach is not requiring any explicit labels on the latent space, though i. does require independence assumptions on structured components, as well as carefully curated data.\nAn alternative approach biases towards interpretable representations by introducing structure in the prior distribution over the latent space p(z). Johnson et al.(2016) explore the combination of graph ical models and VAEs using classical conjugate exponential family statistical models as structured priors over the latent space. They consider relaxation of conjugacy constraints in the likelihood model using neural network approximations, with a training scheme resembling traditional mean field coordinate ascent algorithms. 
The recognition network, rather than proposing values outright proposes parameters of a conjugate-likelihood approximation to the true non-conjugate likelihood.\nOur method synthesises the semi-supervised and structured-graphical-model approaches. LikeJohn son et al.(2016), we incorporate graphical model structures, however rather than placing them within the generative model pe(z, x), we incorporate them into the encoder model qo(z x). For many perceptual problems in domains such as vision, complex dependencies arise in the posterior due to deterministic interactions during rendering. A mean-field approximation in qo(z x) is a poor fit. even in situations where all the interpretable latent variables are a priori independent. This is an important reason for our choice of where we embed structure. The use of a structured, multilevel probabilistic model to define the encoder can also be interpreted as a hierarchical variational model (Ranganath et al.2015). Interpretability is enforced by occasionally supplying labels to latent vari- ables expected to have a interpretable meaning in the final encoded representation.\nOur framework provides an embedded domain- specific language (EDSL) in Torch (Collobert et al.2011), that can be used to specify a wide va- riety of graphical models in the form of a stochas- tic computation graph (Schulman et al.[2015). An example is shown in Figure 2[ These graphical models describe the structure of latent. observable. and partially observable random variables which exist in an idealized representation space. Specif- ically, we assume a model structure of the form pe(x, z,y) = pe(x z,y)p(z,y) where the like lihood pe(x z, y) of the data x is conditioned on a set of structured variables y and unstructured variables z, for which we define some appropri-\nFrom a specific-instance perspective,Eslami et al.(2016) use a recurrent neural network (RNN) coupled with a spatial transformer network (STN, Jaderberg et al. (2015)) inducing a particular. state-space representation with the approximation distribution of a VAE to parse images into scene constituents. Kulkarni et al.[(2015a) also explore a specific instance related to a 3D graphics engine by having a programmatic description provide structure using neural networks as surrogates for the. perceptual-matching problem. Andreas et al.(2016) explore a more general formulation of structure with compositional neural network models derived from linguistic dependency parses.\nFigure 2: Example graphical model and its ex- pression in our framework. Further details in the Appendix.\nately structured prior p(z, y). The likelihood itself is typically unstructured (e.g. a multivariate normal distribution). This model structure allows us to optimize the parameters 0 learning a likeli- hood function constrained by the structured latents, but crucially does not require that these latents completely explain the data. The approximation to the true posterior is nominally taken to be of the form of the prior distribution qo(z, y | x), with parameters but can often include additional struc- ture and alternate factorisations as appropriate. Models with such factoring are useful for situations where interpretability is required, or informative, for some axes of variation in the data. 
It is also useful when we wish to interpret the same data from different contexts and when we cannot con- ceivable capture all the variation in the data due to its complexity, settling for particular restrictions, as is often the case with real world data.\nA particular challenge here lies in choosing a manner for incorporating labelled data for some of the y into a training scheme. For example, choosing qo(z,y x) = qo, (z y, x)qox (y x), de- composes the problem into simultaneously learning a classifier qox (y x) alongside the generative model parameters 0 and encoder qoz (z|x, y). In the fully unsupervised setting, the contribution of a particular data point x' to the ELBO can be expressed, with minor adjustments ofEquation (1)] as\npe(x|z,y)p(z|yi) (zx,y qqz(z|x',yi)\nAn alternative approach involves extending the model using an auxiliary variable y. Defining. p(y,y,zx) = p(yy)p(x,y,z) and q(y,y,zx) = p(y y)q(y,zx), with likelihood p(y y) = d(y), we obtain a model for which marginalization over y reproduces the ELBO in Equation (2)] and treating y as observed gives the supervised objective.\nZ.V x)qo,(zy,x? z,y')p(z,y 0X x')qoz(z|y',x' z;x',y) + logp(y) - log qo.\nThis formulation enables a range of capabilities for semi-supervised learning in deep generative. models. To begin with, it extends the ability to partially-supervise latent variables to those that. have continuous support. This effectively learns a regressor instead of a classifier in the same for-. mulation. Next, it automatically balances the trade-off between learning a classifier/regressor and. learning the parameters of the generative model and the remainder of the recognition network. This. is due to the fact that the classifier qox (y | x) is always present and learned, and is contrast to the. hyperparameter-driven approach in |Kingma et al.(2014). Finally, it allows for easy automatic im-. plementation of a wide variety of models, separating out the labelled and unlabelled variables, to. derive a unified objective over both the supervised and unsupervised cases. When unsupervised, the. value of the label y' is sampled from qoy (y | x) and scored in that distribution, and when super-. vised. it is set to the given yalue. and scored in the same distribution. This is in the same spirit as a\n(x|z,y)p(z,y) Oe (0,$;x') = Eqg(z,y|xi) 1og qpz(z,y|xi)\nBy contrast, in the fully supervised setting the values y are treated as observed and become fixed. inputs into the computation graph, instead of being sampled from qo. When the label y is ob-. served along with the data, for fixed (x', y) pairs, the lower bound on the conditional log-marginal likelihood log pe(xy) is\nThis quantity can be optimized directly to learn model parameters 0 and $z simultaneously via SGD. However, it does not contain the encoder parameters y. This difficulty was also encountered in a related context byKingma et al.(2014). Their solution was to augment the loss function by including an explicit additional term for learning a classifier directly on the supervised points.\nnumber of approaches such as Automatic Differentiation (AD) and Probabilistic Program inference. where the choice of representation enables ease of automation for a great variety of different cases\nPlug-in estimation for discrete variables. In targeting a general class of models, another par ticular difficulty is the ubiquity of discrete latent variables. To obtain a differentiable objective. 
one can either marginalize over discrete variables directly (as done by Kingma et al.(2014) anc in the STAN probabilistic programming system (Stan Development Team2013)), which doesn't scale over numbers of variables, or use a REINFORCE-style estimator (Williams[1992] Mnih & Gregor2014), which tends to have high variance. A third approach, related to[Bengio et al.[(2013). is to represent discrete latent variables defined on a finite domain using a one-hot encoding, then relaxing them to a continuous probability simplex when used as an input to a recognition network. For example, when y is a one-hot encoding of a discrete value used in a recognition network which factors as qo(y x)qo(z y,x), then qo(y x) is itself a discrete distribution with a probability vector p = go(x) for some deterministic function go. The value y is itself an input to a second function h(x, y) producing the parameters for qo(z | y,x). Instead of evaluating hg(x, y) at a sampled value y (or enumerating over the entire domain), we simply evaluate it at the single point p. noting that p = Eq+(y|x)[y]. This may seem a crude approximation, replacing integration with a single evaluation, claiming Eq(y|x)[hg(x, y)] ~ hg(x, Eq+(y|x)[y]), which is not true in general for h(-). However, if p is actually a one-hot encoding, i.e., when Eq(y|x) [y] has a single non-zero value, they are in fact equal. For our experiments we employ this plug-in estimator where applicable. although our framwork can express any of the above methods."}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate our framework on along a number of different axes, pertaining to its ability to (i) learn disentangled representation from a little supervision, (ii) demonstrate capability at a relevant clas- sification/regression task, (iii) successfully also learn the generative model, and (iv) admit the use of latent spaces of varying dimensionality Note that we do not set out to build the best possible classifier in these tasks. Instead, the classification task is a means to the end of demonstrating that the learned representation is indeed disentangled, often with minimal supervision. Also, details of neural network architectures, graphical models for the recognition networks, dataset characteristics, and hyper-parameter settings are provided in the Appendix."}, {"section_index": "4", "section_name": "4.1 MNIST AND SVHN", "section_text": "To begin with, we explore the facets of our model in the. standard MNIST and Google Street-View House Numbers. (SVHN) datasets. We use this example to highlight how the. provision of even the slightest structure, coupled with minimal. supervision, in often sufficient to induce the emergence of dis-. entangled representations in the recognition network. [Figure 3. shows the structure of the generative and recognition models for this experiment.\nSupervision rate. While learning with this objective, we observe data in batches that are either wholly supervised, or wholly unsupervised. This typically obviates the need to construct compli cated estimators for the partially observed cases, while also helping reduce variance in general over the learning and gradient computation (details of which are provided in the Appendix). Doing so also presents a choice relating to how often we observe labelled data in a complete sweep through the dataset, referred to as the supervision rate r. 
Practically, the rate represents a clear trade-off in learning the generative and recognition-network parameters under interpretability constraints. If the rate is too low, the supervision can be insufficient to help with disentangling representation in the recognition network, and if too high, the generative model can overfit to just the (few) supervised data points. The rate also has a natural relation to the variance of the objective function and its gra- dients. As can be seen from[Equation (4)] an evaluation of the objective for a given y' involves the unsupervised estimation of the conditional ELBO Lx|y. The rate implicitly affects the number of such estimations for any given y' and thus the variance of the objective with respect to that label y? The same argument applies for the gradients of the objective.\nn n d x x\nFigure 3: (left) Generative and (right) recognition model with. digit d and style n..\n(a) (b) (d)\nFigure 4: (a) Visual analogies for the MNIST data, with inferred style latent variable fixed and the label varied. (b) Exploration in \"style\"' space for a 2D latent gaussian random variable. Visual analogies for the SVHN data when (c) fully supervised, and (d) supervised with just 100 labels/digit\nFigure 4[a) and (c) show the effect of first transforming a given input (leftmost column) into the disentangled latent space, and with the style latent variable fixed, manipulating the digit through the generative model to produce appropriately modified reconstructions. These were both derived with full supervision over a 50 and 100 dimensional Gaussian latent space for the styles, respectively Figure 4(b) shows the transformation for a fixed digit, when the style latent is varied. This was derived with a simple 2D Gaussian latent space for the style. The last part, Figure 4(d) shows the ability of the network to begin disentangling the latent space with just 100 labelled samples per digit (training dataset size is 73o00 points). Separation between style and class is clearly evident even with such little supervision.\nWe compute the classification accuracy of the label-prediction task with this model for both datasets and the results are reported in the bottom of Figure 5] The results are compared to those reportec in |Kingma et al.(2014). For the MNIST dataset, we compare against model M2 as we run directly on the data, without performing a preliminary feature-extraction step. For the SVHN dataset, we compare against model M1+M2 even though we run directly on the data, using a CNN to simultane ously learn to extract features. Confidence estimates for both were computed off of 10 runs. We note that we fare comparably with these models, and in particular, when employing a CNN for feature extraction for the SVHN dataset, comfortably exceed them.\nClassification Error over supervision rates r and supervised set sizes l Classification Error over supervision rates r and supervised set sizes l 100 100 0=10 P=100 =100 =300 F= 300 80 =60 80 60 60 40 40 20 20 0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Supervision Rate (r) Supervision Rate (r) MNIST SVHN Ours Kingma et al.(2014) Ours Kingma et al.(2014 10 12.2 ( 1.38) 11.97 ( 1.71) 605.28 ( 0.76) 4.94 ( 0.13) 100 4.23 ( 0.68) 3.60 ( 0.56) 30.32 ( 2.74) 36.02 ( 0.10) 300 3.94 ( 0.77) 3.92 ( 0.63) 23.98 ( 1.83)\nFigure 5: (Top) Classification error graphs over different labelled set (per class) sizes and supervision rates for MNIST (left) and SVHN (right). 
Note the steep drop in error rate with just a handful of labels per class (l), seen just a few times (r). (Bottom) Classification error rates for different (per- class) labelled-set sizes l over different runs..\nOurs (Full Supervision) Ours (Semi-Supervised) Jampani et al.(2015 Identity 4.2 ( 0.84) 10.3 ( 2.36) ~ 30 Lighting 14.2 ( 1.12) 28.4 ( 4.12) ~ 10\nFigure 7: (Top) Exploring the generative capacity of the model. Column 1: input image. Col- umn 2: reconstruction. Columns 3-7: reconstructions with fixed (inferred) lighting and varying identities. (Bottom) Classification and regression error rates for the identity and lighting latent vari- ables, fully-supervised, and semi-supervised with 20 distinct labelled example per variation axis (60 total). Classification is a direct 1-out-of-38 choice, whereas for the comparison, error is a nearest- neighbour loss based on the inferred reflectance. Regression loss for lighting is measured as cosine angle distance. Results for Jampani et al.(2015) are estimated from plot asymptotes.\nFigure 5 shows the effect of the supervision rate r on the error rate. As evident from the graph, the rate has a strong affect on how quickly one learns an effective classifier. This indicates that wher labels are sparse or hard to come by, a training regime that runs largely unsupervised, even only oc casionally looking at the supervised data, still learns to disentangle the latent-space representations"}, {"section_index": "5", "section_name": "4.2 INTRINSIC FACES", "section_text": "We next move to a harder problem involving a generative model of faces, attempting to highlight. how the introduction of stronger dependency structures in the recognition model helps disentangle. latents, particularly when the generative model assumes conditional independence between the la- tents. Here, we use the \"Yale B\" dataset as processed byJampani et al.(2015) to train the models. shown in Figure 6 The primary tasks we are interested in here are (i) the ability to manipulate the inferred latents to evaluate if they qualitatively achieve semantically meaningful disentangled repre-. sentations, (ii) the classification of person identity, and (iii) the regression for lighting direction.s.\nFigure 7|presents both qualitative and quantitative eval-. uation of the framework to jointly learn both the struc-. tured recognition model, and the generative model pa. rameters. A particular point of note is that we explic. itly encode \"identity\" as a categorical random variable. since we have knowledge about the domain and the rel-. evant axis to explore. Since we also learn the generative model. which in the domain of the actual dataset is sim. ply the expression (n.l) r + e, we can afford to weakly. specify the structure allowing for some neural-network. component to take up the requisite slack in order to re-. construct the input. This allows us to directly address. the task of predicting identity, instead of approaching. it through surrogate evaluation methods (e.g. nearest-. neighbour classification based on inferred reflectance).\nWhile this formulation allows us to to perform the identity classification task, the fact that our recognition model never supervises the reflectance means that the variable can typically absorb some of the representational power of other, semi-supervised nodes. 
This is particularly the case when dealing with high-dimensional latent spaces as for reflectance and shading..\nx C X\nS x X\nFigure 6: (Top) Generative and (Bottom recognition model with identity i, light ing l, reflectance r, and shading s.\nFigure 8: Generative (1) and recognition (m) model with digit d, style n, canvas c, and count K"}, {"section_index": "6", "section_name": "4.3 MULTI-MNIST", "section_text": "We observe that we are indeed able to reliable learn to count, at least within the limits of upto 3. digits in the multi-mnist dataset. The dataset was generated directly from the MNIST dataset by ma- nipulating the scale and positioning of the standard digits into a combined canvas, evenly balanced across the counts and digits. The results across different supervised set sizes and supervision rates are shown in the table inFigure 8"}, {"section_index": "7", "section_name": "DISCUSSION AND CONCLUSION", "section_text": "In this paper, we introduce a general framework for semi-supervised learning in the VAE setting tha. allows incorporation of graphical models to specify a wide variety of structural constraints on the recognition network. We demonstrate its flexibility by applying it to a variety of different tasks in the. visual domain, and evaluate its efficacy at learning disentangled representations in a semi-supervised. manner, showing strong performance..\nThis framework ensures that the recognition network learns to make predictions in an interpretabl and disentangled space, constrained by the structure provided by the graphical model. The structure form of the recognition network also is typically a better fit for vision models, as it helps bette capture complexities in the likelihood (usually the renderer). Given that we encode graphical model in the recognition network, and Johnson et al.[(2016) encode it in the generative model in concer with VAEs, a natural extension would be the exploration of the ability to learn effectively whe. specifying structure in both by means of graphical models. This is a direction of future work we ar interested in, particularly in context of semi-supervised learning..\nThe framework is implemented as a Torch library (Collobert et al.]2011), enabling the construction. of stochastic computation graphs which encode the requisite structure and computation. This pro-. vides another direction to explore in the future - the extension of the stochastic computation graph framework to probabilistic programming [Goodman et al.]2008] [Wingate et al.]2011]Wood et al. 2014). Probabilistic programs go beyond the presented framework to include stochastic inference. and the ability to specify arbitrary models of computation. The combination of such frameworks with neural networks has recently been studied in Ritchie et al. (2016); Le et al.[(2016), and indi- cates a promising avenue for further exploration.."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016..\nsize rate (%) error rate (%) nk dk Ck Unsup 0 32.25 ( 12.97 500 1 6.42 ( 2.15) nk dk 500 10 k 4.21 ( 1.29) Kmax 1000 1 4.72 ( 1.60) K 1000 10 2.98 ( 0.93) X X K\nFinally, we run an experiment to test the ability of our framework to handle models that induce latent. representations of variable dimension. 
We extend the simple model from the MNIST experiment by composing it with a stochastic sequence generator, to test its ability to count the number of digits in a given input image, given its ability to encode and reconstruct the digits in isolation. The graphical. models employed are depicted inFigure 8\nYoshua Bengio, Nicholas Leonard, and Aaron Courville. Estimating or propagating gradient through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.\nRonan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environmen for machine learning. In BigLearn, NIPS Workshop, 2011..\nND Goodman, VK Mansinghka, D Roy, K Bonawitz, and JB Tenenbaum. Church: A language fo. generative models. In Uncertainty in Artificial Intelligence, pp. 220-229, 2008\nMax Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Ad vances in Neural Information Processing Svstems. pp. 2017-2025. 2015.\nVarun Jampani, S. M. Ali Eslami, Daniel Tarlow, Pushmeet Kohli, and John Winn. Consensus mes. sage passing for layered graphical models. In International Conference on Artificial Intelligence and Statistics, pp. 425-433, 2015.\nMatthew J. Johnson, David K. Duvenaud, Alex B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams Composing graphical models with neural networks for structured representations and fast infer-. ence. In Advances in Neural Information Processing Systems, 2016..\nDiederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations. 2014\nDiederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems pp. 3581-3589, 2014.\nTejas D Kulkarni, Pushmeet Kohli, Joshua B Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In Proceedings of the IEEE Conference. on Computer Vision and Pattern Recognition, pp. 4390-4399, 2015a\nTuan Anh Le, Atilim Gunes Baydin, and Frank Wood. Inference compilation and universal proba bilistic programming. arXiv preprint arXiv:1610.09900, 2016.\nKevin P Murphy. Machine learnin ective. MIT press, 2012 probabilistic\nRajesh Ranganath, Dustin Tran, and David M Blei. Hierarchical variational models. arXiv preprin arXiv:1511.02386. 2015\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980,2014. URLhttp://arxiv.0rg/abs/1412.6980\nAndriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1791- 1799, 2014.\nDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Con ference on Machine Learning, pp. 1278-1286, 2014.\nJohn Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3510- 3522, 2015.\nKihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pp. 3465-3473, 2015.\nThe Stan Development Team. Stan modeling language user's guide and reference manual. http://mc stan.0rg/, 2013.\nRonald J Williams. 
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.\nFrank Wood, Jan Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. In Artificial Intelligence and Statistics, pp. 1024-1032. 2014."}, {"section_index": "9", "section_name": "FORMULATION", "section_text": "Gradient estimates for 0 and .. portional to the gradients of the conditional ELBO as 2re\nVe L(0,$;x) 9 Lx|y 7o. L(0\nwhile the gradient with respect to the \"classifier' parameters y takes a different form. Applying the product rule toEquation (4)|we have.\nDavid Wingate, Andreas Stuhlmueller, and Noah D Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. In International Confer- ence on Artificial Intelligence and Statistics. pp. 770-778. 2011.\nGradients of the Variational Objective: We consider the gradients of the form in|Equation (4) with respect to 0, z, and y. In particular, note that for both 0 and $z the gradient is the same as the gradient with respect to the conditional ELBO Lx|y, up to a per-datapoint scaling factor. q(y' x'). For continuous latent variables, as well as for many discrete random variables, the. expectations over z can be reparameterized into a form where the gradients can be approximated. with a single sampled value. Evaluating|Equation (4)|at this point yields estimators for the ELBO and conditional ELBO Lx|y, as well as corresponding single-sample gradient estimates VL and V Lx|y for each set of parameters.\nx|y + logp(y') - log qoy (y' | qox(y|x')Vgx logqox(y $y Yox(y X Lx|y +logp(y)-logqy(y|x) -1VoyYoy(yi (y'|x)|L-1|Vgy logqpy(y'|x)\nlog qoy (yx)Vox log qox 10g log qox"}, {"section_index": "10", "section_name": "MODEL AND NETWORK PARAMETERS", "section_text": "We note for that all the experiments, save the one involving Street-View House Numbers (SVHN). were run using a 2-3 layer MLP with 512 nodes and using a Bernoulli loss function. For SVHN, we. additionally employed a two stage convolutional and a 2 stage deconvolutional network to effectively extract features for the standard MLP model for the recognition network and the generative model respectively; training the entire network end-to-end. For learning, we used AdaM (Kingma & Ba. 2014) with a learning rate of O.001 (0.0003 for SVHN) and momentum-correction terms set to their default values. As for the minibatch sizes, they varied from 80-500 depending on the dataset being. used and its size."}, {"section_index": "11", "section_name": "MODELS", "section_text": "The syntax of our computation graph construction is such that the first call instantiates the compu tation, and the second instantiates the node and its connections. For specified random variables, the first set of parameters defines the prior and second set the parameters for the proposal distributions In all our models, we extract the common, feature-extraction portions of the recognition model qo into a simple pre-encoder. Parameters and structure for this are specified above.\nThe class-conditional model for MNIST and SVHN\nThe model used for the multi-mnist dataset"}]
SkJeEtclx
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Deep neural architectures have led to remarkable progress in computer vision and natural language. understanding problems. Image captioning is one such application that has seen the combination. of convolutional structures (Krizhevsky et al.]2012] LeCun et al.]1998), which have been shown to be very effective for problems like object detection, with sequential recurrent structures, which. have been shown to be very effective for problems like machine translation (Sutskever et al.2014). One of the key modelling paradigms shared by most models for image captioning is the notion of an. attention mechanism that guide the model to attend to certain parts of the image while generating\nThe attention models used for problems such as image captioning typically depend on the single. image under consideration and the partial output generated so far, jointly capturing one region of ar image and the words being generated. However, such models cannot capture the temporal reasoning necessary to effectively produce words that refer to actions and events taking place over multiple. frames in a video. For example, in a video depicting \"someone waving a hand', the \"waving\" actior. can start from any frame and can continue on for a variable number of following frames. More. importantly, it is likely in a given video quite few frames do not contain any useful information or. motion in regard to a given task. Given this, it is not surprising that even with recent advancements in image captioningXu et al.(2015a); Johnson et al.(2016);[Vinyals et al.(2015), video captioning has remained challenging.\nMotivated by these observations, we introduce a memory-based attention mechanism for video cap-. tioning and description. Our model utilizes memories of past attention in the video when reasoning. about where to attend to in a current time step. This allows the model to not only effectively lever age local attention, but also to consider the entire video as it generates each word. This mechanism. is similar to the proposed central executive system in human cognition, which is thought to permit. human performance on two simultaneous tasks (e.g., seeing and saying) using two separate percep."}, {"section_index": "1", "section_name": "MEMORY-AUGMENTED ATTENTION MODELLING FOR VIDEOS", "section_text": "'Microsoft Research TUniversity of Texas at Arlington. +T Google rasool.fakoor @ mavs.uta.edu, + {asamir, singbing.kang, pkohli} @ microsoft.com ** mmitchellai @ google.con\nrasool.fakoor@ mavs.uta.edu, t { asamir, singbing.kang, pkohli} @ microsoft.com *' mmitchellai@ google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Recent works on neural architectures have demonstrated the utility of attention. mechanisms for a wide variety of tasks. Attention models used for problems such. as image captioning typically depend on the image under consideration, as well. as the previous sequence of words that come before the word currently being gen-. erated. While these types of models have produced impressive results, they are. not able to model the higher-order interactions involved in problems such as videc. description/captioning, where the relationship between parts of the video and the. concepts being depicted is complex. Motivated by these observations, we propose. a novel memory-based attention model for video description. Our model utilizes. 
memories of past attention when reasoning about where to attend to in the current time step, similar to the central executive system proposed in human cognition (Baddeley & Hitch, 1974). This allows the model not only to reason about local attention more effectively, but also to consider the entire sequence of video frames while generating each word. Evaluations on the challenging and popular MSVD and Charades datasets show that the proposed architecture outperforms all previously proposed methods and leads to new state-of-the-art results in video description.

tual domains (e.g., visual and linguistic) by binding information from both sources into a coherent structure that enables coordination, selective attention, and inhibition.

Our work shares the same goals as recent work on attention mechanisms for sequence-to-sequence architectures, such as Rocktaschel et al. (2016) and Yang et al. (2016). However, there are major differences between those works and our current work. Rocktaschel et al. (2016) consider the domain of entailment relations, where the goal is to determine entailment given two input sentences. They propose a soft attention model that is focused not only on the current state, but on the previous one as well. In our model, we explicitly store all previous attention in memory. In addition, our memory memorizes the encoded version of the input video conditioned on previously seen words. Even though Yang et al. (2016) and our work both try to solve the problem of locality of attention, our work differs from theirs in how the memory architecture is modelled. More specifically, they incorporate discriminative supervision into their "reviewer" mechanism, which is not the case in our model. Further, their model is applied to image caption generation, which is to some extent simpler than video caption generation because there is no temporal structure to model.

We apply our model to the video captioning problem and evaluate it on the MSVD (Chen & Dolan, 2011) and Charades (Sigurdsson et al., 2016) datasets. Experimental results show that our proposed architecture outperforms all previous methods and leads to new state-of-the-art results. While we have chosen the video captioning problem for our experiments, the model is general enough that it can be applied to other problems where attention models are used.

One of the primary challenges in learning a mapping from a visual space (i.e., video or image) to a language space is learning a representation that not only effectively represents each of these modalities, but is also able to translate a representation from one space to the other. Rohrbach et al. (2013) developed a model that generates a semantic representation of visual content that can be used as the source language for the language generation module. Venugopalan et al. (2015b) proposed a deep method to translate a video into a sentence in which an entire video is represented with a single vector based on the mean pool of frame features. However, representing a video by an average of its frames misses the temporal structure of the video. To address this problem, recent work (Yao et al., 2015; Pan et al., 2016a; Venugopalan et al., 2015a; Shin et al., 2016; Pan et al., 2016b; Xu et al., 2015b; Ballas et al., 2016; Yu et al., 2016) proposed methods to model the temporal structure of video as well as language.

The majority of these methods are inspired by sequence-to-sequence (Sutskever et al., 2014) and attention (Bahdanau et al., 2015) models.
Sequence learning (Sutskever et al., 2014) was originally proposed to map an input sequence in a source language to a target language. Even though applying this method, in combination with attention, to the problem of translating a video into a description shows promising results, there are some shortcomings. First of all, modelling the video content with a fixed-length vector in order to map it to a language space is a much harder problem than mapping from a language to a language, given the complexity of visual content. Since not all frames in a video are equally salient for a short description, and an event can happen over multiple frames, it is important for a model to identify which frames are most salient. Further, the model should be able to focus on points of interest within these frames to select what to talk about. Even using a variable-length vector to represent a video using attention (Yao et al., 2015) can have some problems. More specifically, current attention methods are local (Yang et al., 2016): since the attention mechanism works in a sequential structure, it lacks the ability to capture global structure. Moreover, combining a video and a language description as a sequence-to-sequence problem is usually done by some variant of a recurrent neural network (RNN) (Hochreiter & Schmidhuber, 1997). Given the limited capacity of a recurrent network to model very long sequences, memory networks (Weston et al., 2014; Sukhbaatar et al., 2015) have been introduced to help the RNN memorize sequences. However, one problem these memory networks suffer from is the difficulty of training the model. The model in Weston et al. (2014) requires supervision at each layer, which makes training with backpropagation a challenging task. Even though Sukhbaatar et al. (2015) proposed a memory network that can be trained end-to-end, working with memory is still a challenging problem in deep learning, especially with the write operation (Graves et al., 2014).

Figure 1: Our proposed architecture. Each component of our model is described in Sections 3.1 through 3.3.

To address these problems, we propose a memory-based attention sequence-to-sequence model that not only can learn hierarchical attention relationships, but also provides a simple and effective memory structure. In the next section, we explain our model in more detail.

Our goal is to design an architecture that learns where to look and what to look for in a video, in order to talk about it in the description. To achieve this goal, we formulate the problem as sequence learning to maximize the probability of generating a correct description given a video:

Θ* = argmax_Θ Σ_{(S, f_1, f_2, ..., f_N)} log p(S | f_1, f_2, ..., f_N; Θ)   (1)

where S is the description, f_1, f_2, ..., f_N are the input video frames, and Θ is the model parameter vector. The main modelling challenges in video description are to model the temporal structure of the video, to learn to attend to the important parts of the video, to memorize the video that has been described given all the words generated so far, and then to generate a new word by looking at the entire video. To address these issues, we propose an end-to-end network that has three components (Figure 1): the Temporal Model (TEM), the Hierarchical Attention/Memory (HAM), and the Decoder. The goal of TEM is to capture the temporal structure and track motion in a video. The HAM component acts as a hierarchical attention or memory between an input video and the description.
More specifically, the HAM learns a hierarchical attention structure that determines where to attend in the video given all previously generated words and previous states. The HAM can be interpreted as a memory structure as well, in that it learns to memorize an encoded version of the video together with the language. HAM provides the decoder with the ability to look at the entire video plus all the previously generated words before generating any new word. This is important because a single action normally spans multiple frames in the input video. By employing the HAM, the model can effectively model the action over these frames. One of the main contributions of this work is the use of a global state to generate each new word; this global state aggregates information from previously generated words and all input frames. We first describe each component of our model, then explain the details of training and inference."}, {"section_index": "3", "section_name": "3.1 TEMPORAL MODELER (TEM)", "section_text": "One important question is how to encode the temporal structure of the input video for caption generation. Recently, it has been shown that Recurrent Neural Networks (RNNs) have the ability to model the temporal structure in sequential data such as video (Ballas et al., 2016; Sharma et al., 2015; Venugopalan et al., 2015a) and speech (Graves & Jaitly, 2014). Moreover, since frame-to-frame temporal variation tends to be local (Brox & Malik, 2011) and critical in motion modeling (Ballas et al., 2016), it is important to consider a frame representation that can preserve frame-to-frame temporal variation. Even though features extracted from the fully connected layers of Convolutional Neural Networks (CNNs) have shown state-of-the-art results in image classification and recognition (Simonyan & Zisserman, 2014; He et al., 2016), these features tend to discard the low-level information useful in modeling the motion in the video (Ballas et al., 2016).

To address the temporal modeling and video representation problems, we use an RNN to model the temporal structure of the video, where at each time step a frame encoding of size R^D is used as input to the RNN. Instead of extracting features from a top layer of a pretrained CNN, intermediate convolutional maps are extracted for the video frames. Specifically, for a given video X with N frames, X = [X_1, X_2, ..., X_N], N convolutional maps of size R^{L×D} are extracted, where D is the dimension of a feature corresponding to each of the L locations in the input frame.

In order to let the network selectively focus on these L locations of each frame given the hidden state of the RNN, we apply a soft attention model (Bahdanau et al., 2015; Xu et al., 2015a; Sharma et al., 2015), called "Location Attention" (f_Latt). More specifically, using a softmax, each hidden state produces L probabilities that specify which part of the input is more important, and an input map for the RNN is created using these probabilities. f_Latt is defined as follows:

ρ_j = exp(h_{t−1}^T W_j) / Σ_{k=1}^{L} exp(h_{t−1}^T W_k),    F_t = Σ_{j=1}^{L} ρ_j X_j

where h_{t−1} ∈ R^K is the hidden state of the RNN at time t−1, W_ρ = [W_1, ..., W_L] ∈ R^{K×L}, and F_t ∈ R^D. At each time step, TEM learns a vector representation for each frame by looking at the frame's convolutional map and applying location attention on this map conditioned on all previously seen frames:

F_t = f_Latt(X_t, h_{t−1}; W_ρ),    h_t = f_v(F_t, h_{t−1}; Θ_v)

where f_v can be a vanilla RNN, LSTM, or GRU, and Θ_v denotes the parameters of f_v.
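As a concrete illustration, here is a minimal numpy sketch of f_Latt and one TEM step. A plain tanh cell stands in for f_v (the LSTM variant actually used is given next), and all shapes, helper names, and initializations are illustrative assumptions, not the exact experimental setup:

import numpy as np

rng = np.random.default_rng(0)

def location_attention(X_t, h_prev, W_rho):
    # rho_j = softmax_j(h_{t-1}^T W_j) over the L spatial locations,
    # then F_t = sum_j rho_j X_j, as in the equations above.
    scores = h_prev @ W_rho                 # (L,) one score per location
    rho = np.exp(scores - scores.max())     # numerically stable softmax
    rho /= rho.sum()
    return rho @ X_t                        # (D,) attended frame feature

def tem_step(X_t, h_prev, params):
    # One TEM step, h_t = f_v(F_t, h_{t-1}); a tanh RNN stands in for f_v.
    W_rho, W_xh, W_hh, b = params
    F_t = location_attention(X_t, h_prev, W_rho)
    return np.tanh(F_t @ W_xh + h_prev @ W_hh + b)

# Toy dimensions: L = 196 locations, D = 512 channels (conv5_3), K hidden units.
L, D, K = 196, 512, 16
params = (0.01 * rng.normal(size=(K, L)),   # W_rho
          0.01 * rng.normal(size=(D, K)),   # W_xh
          0.01 * rng.normal(size=(K, K)),   # W_hh
          np.zeros(K))                      # b
h = np.zeros(K)
for _ in range(8):                          # e.g. 8 sampled frames per video
    h = tem_step(rng.normal(size=(L, D)), h, params)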
Due to the fact that vanilla RNNs suffer from the vanishing and exploding gradient problems (Pascanu et al., 2013), we use gradient clipping to deal with exploding gradients, and an LSTM with the following flow to deal with the vanishing gradient problem:

i_t = σ(F_t W_xi + h_{t−1} W_hi)
f_t = σ(F_t W_xf + h_{t−1} W_hf)
o_t = σ(F_t W_xo + h_{t−1} W_ho)
g_t = tanh(F_t W_xg + h_{t−1} W_hg)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t
h_t = o_t ⊙ tanh(c_t)

where W_h∗ ∈ R^{K×K}, W_x∗ ∈ R^{D×K}, and we define Θ_v = {W_h∗, W_x∗}.

One problem with using sequence-to-sequence style architectures (Sutskever et al., 2014) to model a task such as video description is how to find a mapping from a video space to a language space that can capture the relationship between a word and the video or, more specifically, the connection between an entire video and an entire sentence, where there might not be a clear alignment between the two sequences, as opposed to machine translation and speech recognition. Furthermore, the model should be able to identify which part of the video is more relevant to the description, because captions normally focus on a tiny fraction of the facts present in the video. More importantly, once the model starts generating the description, it should still be reminded of the video frames in order to generate meaningful descriptions. To address these problems, we propose a memory-based attention that encodes a video into memory, built as a function of the state of the language generation network (a.k.a. the Decoder) and the state of the TEM network. More specifically, our Hierarchical Attention/Memory can be formulated as the following two steps.

Attention update:

α_A = tanh(H W_v + h_{t'−1} W_g + h̃_{t'−1} W_h)
α_{t'} = softmax(u^T α_A)
F̃_{t'} = α_{t'}^T H

Memory update:

h̃_{t'} = f_m(h̃_{t'−1}, F̃_{t'}; Θ_m)

where H stacks the TEM hidden states, h_{t'−1} is the previous Decoder state, and h̃_{t'−1} is the previous HAM state."}, {"section_index": "5", "section_name": "3.3 DECODER", "section_text": "In order to generate a new word conditioned on all previous words and the HAM state, a recurrent structure is modelled as follows:

h_{t'} = f_g(h_{t'−1}, s_{t'−1}, h̃_{t'}; Θ_g)
ŝ_{t'} = softmax(W_e h_{t'})

where s_{t'} is the word vector at step t', W_e ∈ R^{K×C}, and C is the vocabulary size. ŝ_{t'} assigns a probability to each word in the vocabulary. We use LSTMs for both f_m and f_g."}, {"section_index": "6", "section_name": "3.4 TRAINING AND OPTIMIZATION", "section_text": "The goal of our network is to predict the next word given all previously seen words and an input video. In order to optimize our network parameters, Θ = {W_ρ, Θ_v, Θ_a, Θ_m, W_e}, we minimize a negative log-likelihood loss function, formulated as follows:

L(S, X; Θ) = −Σ_{j=1}^{T} Σ_{i=1}^{|V|} s_{j,i} log(ŝ_{j,i}) + λ ‖Θ‖_2^2

where |V| is the dictionary size. We train our network fully end-to-end using a first-order stochastic gradient-based optimization method with an adaptive learning rate.
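As a sanity check, a minimal numpy sketch of this loss for a single caption is shown below; the helper name and the λ value are illustrative assumptions (the paper does not report λ):

import numpy as np

def caption_loss(targets_onehot, probs, params, lam=1e-4):
    # L(S, X; Theta) = -sum_{j,i} s_{j,i} log(shat_{j,i}) + lam * ||Theta||^2
    # targets_onehot: (T, |V|) one-hot ground-truth words s_{j,i}
    # probs:          (T, |V|) decoder softmax outputs shat_{j,i}
    # params:         iterable of weight arrays Theta
    nll = -np.sum(targets_onehot * np.log(probs + 1e-12))
    l2 = sum(np.sum(w ** 2) for w in params)
    return nll + lam * l2

In practice this quantity would be averaged over a batch of captions and minimized with the adaptive-learning-rate optimizer described next.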
More specifically, in order to optimize our network parameters, we use Adam (Kingma & Ba, 2015) with a learning rate of 2×10^-5, and set β1 and β2 to 0.8 and 0.999, respectively. At training time, we use a batch size of 16."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "DATASET We evaluate our proposed model on the Charades (Sigurdsson et al., 2016) dataset and the Microsoft Video Description Corpus (MSVD) (Chen & Dolan, 2011). Charades contains 9,848 videos in total and provides 27,847 video descriptions[1], with 7,569 videos for training, 1,863 for testing, and 400 for validation. We follow the same splits (i.e., training and test splits) as Sigurdsson et al. (2016). It is worth noting that one major difference between this dataset and others is its "Hollywood in Homes" approach to collecting the data (Sigurdsson et al., 2016), where "actors" are crowdsourced, yielding diverse scenes and actors. This diversity is one reason that we report results on this dataset.

[1] Only 16,087 out of the 27,847 are used as captions for our evaluation, since the 27,847 include scripts of the videos as well as captions.

MSVD is a set of YouTube videos annotated by Mechanical Turk[2] workers, who were asked to pick a clip from a video that represents an activity. In this dataset, each clip is annotated by multiple workers with a single sentence. The dataset contains 1,970 videos and about 80,000 descriptions, of which 1,200 videos are training data, 670 are test data, and the rest (i.e., 100 videos) are assigned to validation. In order to make the results comparable with other papers, we follow the exact training/validation/test split provided by Venugopalan et al. (2015b).

EVALUATION METRICS Below, we report results on the video caption generation task. In order to evaluate captions generated by our model, we use model-free automatic evaluation metrics. We adopt the METEOR, BLEU@N, and CIDEr metrics available from the Microsoft COCO Caption Evaluation code to score the system.

VIDEO AND CAPTION PREPROCESSING We preprocess the captions for both datasets using the Natural Language Toolkit (NLTK). Beyond this, no other type of preprocessing is used.

We extract sample frames for each video and pass each frame through VGGNet (Simonyan & Zisserman, 2014) without any fine-tuning. For the experiments in this paper, we use the feature maps from the conv5_3 layer after applying ReLU. The feature map in this layer is 14×14×512; our TEM component operates on the flattened 196×512 version of this feature cube. For the ablation studies, features from a fully connected layer are used as well, where the features have 4096 dimensions.

HYPER-PARAMETER OPTIMIZATION We use random search (Bergstra & Bengio, 2012) on the validation set to select hyper-parameters on both datasets. The word-embedding size, hidden layer size (for both TEM and Decoder), and memory size of the best model on Charades are 237, 1316, and 437, respectively. These values are 402, 1479, and 797 for the model on the MSVD dataset. A stack of two LSTMs is used in the Decoder and TEM."}, {"section_index": "8", "section_name": "ABLATION ANALYSIS", "section_text": "We first perform a series of ablation studies in order to show the contributions of the different components of our model. Specifically, we show the importance of each component of our model on the caption generation task on the MSVD dataset. One ablation (denoted Att + No TEM) corresponds to a simpler version of our model in which we remove the TEM component and instead pass each frame of a video through a CNN and extract features from the last fully-connected hidden layer (e.g., fc7). In addition, we replace our HAM component with a simpler version where the model only memorizes the current step instead of all previous steps. In another ablation (denoted No HAM + TEM), we remove the HAM component from our model and keep the rest of the model as it is. In the next variation (denoted HAM + No TEM), we remove the TEM component and calculate features for each frame, similar to Att + No TEM. Finally, the last row in the table is our proposed model (denoted HAM + TEM) with all its components.

Table 1 reports the results of this study. In this experiment, we sample 40 frames per video and use them as the inputs to the network. As the results show, HAM plays a critical role in our proposed model, and removing it causes a drop in performance.
On the other hand, removing TEM by itself does not hurt performance as much as dropping the HAM. When we put the two together, they complement one another, resulting in better performance.

[2] https://www.mturk.com/mturk/welcome

We first present an ablation analysis to elucidate the contribution of the different components of our proposed model. Next, we compare the overall performance of our model on the video caption generation task to other models.

Table 1: Ablation of our model with and without the HAM component on the test set of 670 videos.

Method | METEOR | BLEU@1 | BLEU@2 | BLEU@3 | BLEU@4 | CIDEr
Att + No TEM | 31.20 | 77.90 | 65.10 | 55.3 | 44.90 | 63.90
No HAM + TEM | 30.5 | 78.10 | 65.20 | 55.10 | 44.60 | 60.50
HAM + No TEM | 31.0 | 78.70 | 66.90 | 57.40 | 47.0 | 62.10
HAM + TEM | 31.70 | 79.0 | 66.20 | 56.0 | 45.6 | 62.20

Next, to extensively evaluate our model, we compare it with state-of-the-art models and baselines on the video caption generation task on the MSVD dataset. In this experiment, we use 8 frames per video as the inputs to the TEM module. Table 2 shows the results for this experiment. As the results show, our model achieves state-of-the-art scores in either BLEU@4 or METEOR, compared to other methods. This is particularly noteworthy because we do not use external features for the video, such as optical flow (Brox et al., 2004) (denoted Flow in the table), 3-dimensional convolutional network features (Tran et al., 2015) (denoted C3D), or CNN features fine-tuned (denoted FT) on an action recognition task with datasets such as UCF-101. The only exception arises when we compare our model with Yu et al. (2016), who use C3D features. In their method, adding C3D features leads to a large improvement in results (compare rows 4 and 11 in Table 2). On the other hand, our method, without using any external features, achieves better results in comparison with all other methods. This is important because our proposed architecture alone can learn not only a representation of video that models the temporal structure of a video sequence, but also a representation that effectively maps the visual space to the language space.

Table 2: Video captioning evaluation on the test set of 670 videos in MSVD.

Method | METEOR | BLEU@1 | BLEU@2 | BLEU@3 | BLEU@4 | CIDEr
Venugopalan et al. (2015b) | 27.7 | | | | |
Venugopalan et al. (2015a) | 29.2 | | | | |
Pan et al. (2016b) | 29.5 | 74.9 | 60.9 | 50.6 | 40.2 |
Yu et al. (2016) | 31.10 | 77.30 | 64.50 | 54.60 | 44.30 |
Pan et al. (2016a) | 33.10 | 79.20 | 66.30 | 55.10 | 43.80 |
Our Model | 31.80 | 79.40 | 67.10 | 56.80 | 46.10 | 62.70
Yao et al. (2015) + C3D | 29.60 | | | | 41.92 | 51.67
Venugopalan et al. (2015a) + Flow | 29.8 | | | | |
Ballas et al. (2016) + FT | 30.75 | | | | 49.0 | 59.37
Pan et al. (2016b) + C3D | 31.0 | 78.80 | 66.0 | 55.4 | 45.3 |
Yu et al. (2016) + C3D | 32.60 | 81.50 | 70.40 | 60.4 | 49.90 |"}, {"section_index": "9", "section_name": "QUALITATIVE RESULTS", "section_text": "We show some captions generated by our model in Figure 2. The model mostly generates correct captions in cases where the content and ground-truth captions are consistent. There are some cases in which our model makes mistakes. For example, for the "a dog is on a trampoline" video, our model generated "a man is washing a bath" as the caption. This is interesting because the "man" object only appears in a few frames (1 or 2), yet our model can still recognize the man in the video.

In addition, we report results on the Charades dataset for video caption generation. This dataset is challenging because only a few captions per video (about 2 per video) are available. In this experiment, we use 16 frames per video as the inputs to the TEM module.
Table 3 shows the performance of our method on this dataset. Our method achieves a 10% improvement over Venugopalan et al. (2015a) on the caption generation task. It is worth noting that a human achieves a METEOR score of only 24 on this dataset, which illustrates its level of difficulty.

Table 3: Video captioning evaluation on the test set of 1863 videos in Charades.

Method | METEOR | BLEU@1 | BLEU@2 | BLEU@3 | BLEU@4 | CIDEr
Human (Sigurdsson et al., 2016) | 24 | 62 | 43 | 29 | 20 | 53
Sigurdsson et al. (2016) | 16 | 49 | 30 | 18 | 11 | 14
Our Model | 17.6 | 50 | 31.1 | 18.8 | 11.5 | 16.7

Figure 2: Example captions generated by our model on test videos from MSVD; incorrect cases are shown in red. Our caption / ground truth pairs: "A group of people are dancing" / "A group of young children performing together"; "A person is cutting the vegetable" / "A woman is cutting garlic"; "A man is playing a guitar" / "A man is playing the guitar"; "A young girl is playing the flute" / "A little girl is talking on a cordless telephone"; "A woman is pouring eggs into a bowl" / "A woman is pouring ingredients into a bowl"; "A man is playing a flute" / "A man is playing a large flute"; "A woman is applying makeup" / "A woman is putting on makeup"; "A man is shooting a gun" / "A guy is cutting a gun"; "A man is washing a bath" / "A dog is on a trampoline"; "A cat is eating" / "Hamsters are eating"."}, {"section_index": "10", "section_name": "5 CONCLUSION", "section_text": "We introduce an end-to-end memory-based attention model for describing an input video with a natural language description, similar to the central executive system proposed in human cognition. Our model utilizes memories of past attention when reasoning about where to attend at the current time step. This allows the model not only to reason about local attention more effectively, but also to consider the entire sequence of video frames while generating each word. Our experiments confirm that the memory components of our architecture play a significant role in improving the performance of the entire network. It is worth noting that in this paper we consider the problem of video caption generation, but our architecture can be applied to any sequence learning problem, which we hope to explore in the future."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Andrew Shin, Katsunori Ohnishi, and Tatsuya Harada. Beyond caption to narrative: Video captioning with multiple sentences. In ICIP, 2016.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Nicolas Ballas, Li Yao, Chris Pal, and Aaron C. Courville. Delving deeper into convolutional networks for learning video representations.
In ICLR, 2016.

James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. J. Mach. Learn. Res., 13:281-305, 2012.

Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In ICML-14, pp. 1764-1772, 2014.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, June 2016.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, November 1997. ISSN 0899-7667.

Justin Johnson, Andrej Karpathy, and Li Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. In CVPR, 2016.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Pingbo Pan, Zhongwen Xu, Yi Yang, Fei Wu, and Yueting Zhuang. Hierarchical recurrent neural encoder for video representation with application to captioning. In CVPR, June 2016a.

Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, and Yong Rui. Jointly modeling embedding and translation to bridge video and language. In CVPR, 2016b.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML-13, 28:1310-1318, 2013.

Tim Rocktaschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about entailment with neural attention. In ICLR, 2016.

Gunnar A. Sigurdsson, Gul Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In ECCV, 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, pp. 3104-3112, 2014.

Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015.

Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. Sequence to sequence - video to text. In ICCV, 2015a.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML-15, pp. 2048-2057, 2015a.

Ran Xu, Caiming Xiong, Wei Chen, and Jason J. Corso. Jointly modeling deep video and compositional text to bridge vision and language in a unified framework. In AAAI, 2015b.

Zhilin Yang, Ye Yuan, Yuexin Wu, Ruslan Salakhutdinov, and William W. Cohen. Encode, review, and decode: Reviewer module for caption generation. CoRR, abs/1605.07912, 2016.

Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. Video paragraph captioning using hierarchical recurrent neural networks. In CVPR, June 2016.

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville.
Describing videos by exploiting temporal structure. In ICCV, 2015."}]
r1R5Z19le
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have been shown to perform very well on various classification problems. often yielding state-of-the-art results. Key motivation for the use of these models, is the assumption of hierarchical nature of the underlying problem. This assumption is reflected in the structure of NNs, composed of multiple stacked layers of linear transformations followed by non-linear activa- tion functions. The NN final layer is usually a softmax activated linear transformation indicating the likelihood of each class, which can be trained by cross-entropy using the known target of each sam-. ple, and back-propagated to previous layers. The hierarchical property of NNs has been observed to. yield high-quality, discriminative representations of the input in intermediate layers. These repre- sentative features, although not explicitly part of the training objective, were shown to be useful in subsequent tasks in the same domain as demonstrated by Razavian et al.[(2014). One serious prob- lem occurring in neural network is their susceptibility to overfit over the training data. Due to this fact, a considerable part of modern neural network research is devoted to regularization techniques and heuristics such as Srivastava et al.(2014); Ioffe & Szegedy(2015);Wan et al.(2013); Szegedy et al.(2015), to allow the networks to generalize to unseen data samples. The tendency to overfit. is most apparent with problems having a very small number of training examples per class, and these are considered ill-suited to solve with neural network models. Because of this property, semi-. supervised regimes in which most data is unlabeled, are considered hard to learn and generalize with NNs."}, {"section_index": "1", "section_name": "SEMI-SUPERVISED DEEP LEARNING BY METRIC EM- BEDDING", "section_text": "Nir Ailon\nTechnion - Israel Institute of Technology Haifa, Israel"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this work we will consider a new training criterion designed to be used with deep neural net- works in semi-supervised regimes over datasets with a small subset of labeled samples. Instead of a usual cross-entropy between the labeled samples and the ground truth class indicators, we will use the labeled examples as targets for a metric embedding. Under this embedding, which is the map- ping of a parameterized deep network, the features of labeled examples will be grouped together in euclidean space. In addition, we will use these learned embeddings to separate the unlabeled examples to belong each to a distinct cluster formed by the labeled samples. We will show this constraint translates to a minimum entropy criterion over the embedded distances. Finally, because of the use of euclidean space interpretation of the learned features, we are able to use a subsequent nearest-neighbor classifier to achieve state-of-the-art results on problems with small number of la beled examples.\nPrevious works have shown the possible use of neural networks to learn useful metric embedding. One kind of such metric embedding is the \"Siamese network\" framework introduced byBromley. et al.(1993) and later used in the works of[Chopra et al.(2005). One use for this methods is when the number of classes is too large or expected to vary over time, as in the case of face verification,. where a face contained in an image has to compared against another image of a face. This problem was recently tackled bySchroff et al. 
(2015) by training a convolutional network model on triplets of examples. Learning features by metric embedding was also shown by Hoffer & Ailon (2015) to provide competitive classification accuracy compared to conventional cross-entropy regression. This work is also related to Rippel et al. (2015), who introduced the Magnet loss, a metric embedding approach for fine-grained classification. The Magnet loss is based on learning the distribution of distances of each sample from K clusters assigned to each classified class. It then uses an intermediate k-means clustering to reposition the different assigned clusters. This was shown to allow better accuracy than both the margin-based Triplet loss and softmax regression. Using metric embedding with neural networks was also shown to provide good results in the semi-supervised setting, as seen in Weston et al. (2012).

As stated before, a key approach to generalizing from a small training set is regularizing the learned model. Regularization techniques can often be interpreted as priors over model parameters or structure, such as L_p regularization over the network weights or activations. More recently, neural-network-specific regularizations that induce noise within the training process, such as Srivastava et al. (2014); Wan et al. (2013); Szegedy et al. (2015), proved to be highly beneficial for avoiding overfitting. Another recent observation by Goodfellow et al. (2015) is that training on adversarial examples, inputs that were found to be misclassified under small perturbation, can improve generalization. This fact was explored by Feng et al. (2016) and found to provide notable improvements in the semi-supervised regime by Miyato et al. (2015).

Recently, stacked denoising auto-encoder architectures showed promising results in both semi-supervised and unsupervised tasks. A stacked what-where autoencoder by Zhao et al. (2015) computes a set of complementary variables that enable reconstruction whenever a layer implements a many-to-one mapping. Ladder networks by Rasmus et al. (2015) use lateral connections to allow higher levels of an auto-encoder to focus on invariant abstract features by applying a layer-wise cost function.

The generative adversarial network (GAN) is a recently introduced model that can be used in an unsupervised fashion (Goodfellow et al., 2014). Generative adversarial models use a pair of networks: one trained to discriminate between data sampled from the true underlying distribution (e.g., a set of images) and a separate generative network trained to be an adversary trying to confuse the first network. By propagating the gradient through the paired networks, the model learns to generate samples that are distributed similarly to the source data. As shown by Radford et al. (2015), this model can create useful latent representations for subsequent classification tasks. The usage of these models for semi-supervised learning was further developed by Springenberg (2016) and Salimans et al. (2016), by adding an (N+1)-way classifier (the number of classes plus an additional "fake" class) to the discriminator. This proved to allow excellent accuracy with only a small subset of labeled examples.

Another technique for semi-supervised learning, introduced by Grandvalet & Bengio (2004), is concerned with minimizing the entropy over the expected class distribution for unlabeled examples. Regularizing for minimum entropy can be seen as a prior which prefers minimum overlap between observed classes.
This can also be seen as a generalization of the "self-training" wrapper method described by Triguero et al. (2015), in which unlabeled examples are re-introduced after being labeled with the model's previous classification. This is also related to "transductive support vector machines" (TSVM) (Vapnik & Vapnik, 1998), which introduce a maximum-margin objective over both labeled and unlabeled examples.

Compared with previous works such as Chopra et al. (2005) and Weston et al. (2012), this work uses a novel objective composed of a distance-ratio measure (unlike the contrastive, hinge-based losses used before) and an entropy minimization on the distance measure to labeled samples. Although the distance-ratio loss (Hoffer & Ailon (2015)) and entropy minimization (Grandvalet & Bengio (2004)) are not new, this is, to our knowledge, the first attempt to combine these ideas for semi-supervised metric learning.

3 OUR CONTRIBUTION: NEIGHBOR EMBEDDING FOR SEMI-SUPERVISED LEARNING

In this work we are concerned with a semi-supervised setting, in which learning is done on data of which only a small subset is labeled. Given observed sets of labeled data X_L = {(x_i, y_i)}_{i=1}^{l} and unlabeled data X_U = {x_i}_{i=l+1}^{n}, where x ∈ X and y ∈ C, we wish to learn a classifier f : X → C with minimum expected error on some unseen test data X_test.

We will make a couple of assumptions regarding the given data:

- The number of labeled examples is small compared to the whole observed set, l ≪ n.
- Structure assumption: samples within the same structure (such as a cluster or a manifold) are more likely to share the same label.
This assumption is shared with many other semi-supervised approaches, as discussed in Chapelle et al. (2009) and Weston et al. (2012).

Using these assumptions, we are motivated to learn a metric embedding that forms clusters such that samples can be classified by their L2 distance to the labeled examples in a nearest-neighbor procedure.

We will now define our learning setting on the semi-labeled data, using a neural network model denoted as F(x; θ), where x is the input fed into the network and θ are the optimized parameters (dropped henceforward for convenience). The output of the network for each sample is a vector of D features, F(x) ∈ R^D, which will be used to represent the input.

The two training objectives by which we aim to train our embedding network are:

(i) Create feature representations that form clusters from the labeled examples (x, y) ∈ X_L, such that two examples x_1, x_2 sharing the same label y_1 = y_2 will have a smaller embedded distance than any third example x_3 with a different label y_1 ≠ y_3:

‖F(x_1) − F(x_2)‖_2 < ‖F(x_1) − F(x_3)‖_2

(ii) For each unlabeled example, its feature embedding will be close to the embeddings of one specific label occurring in X_L: for all x ∈ X_U, there exists a specific class l ∈ C such that

‖F(x) − F(z_l)‖_2 < ‖F(x) − F(z_k)‖_2

where z_l is any labeled example of class l and z_k is any example from class k ∈ C \ {l}.

As the defined objectives create embeddings that target a nearest-neighbor classification with regard to the labeled set, we will refer to this as "Neighbor embedding".

We will define a discrete distribution for the embedded distance between a sample x ∈ X and C labeled examples z_1, ..., z_C ∈ X_L, each belonging to a different class:

P_i(x; z_1, ..., z_C) = e^{−‖F(x)−F(z_i)‖_2^2} / Σ_{j=1}^{C} e^{−‖F(x)−F(z_j)‖_2^2}   (1)

This definition assigns a probability P_i(x; z_1, ..., z_C) for sample x to be classified into class i, under a 1-NN classification rule, when the neighbors z_1, ..., z_C are given. It is similar to the stochastic nearest-neighbors formulation of Goldberger et al.
(2004), and it allows us to state the two underlying objectives as measures over this distribution."}, {"section_index": "3", "section_name": "4.1 DISTANCE RATIO CRITERION", "section_text": "Addressing objective (i), we will use a sample x_l ∈ X_L from the labeled set belonging to class k ∈ C, and another set of sampled labeled examples z_1, ..., z_C ∈ X_L. In this work we sample uniformly over all available samples of each class.

Defining the class indicator I(x) as

I(x_l)_i = 1 if i = k, and 0 otherwise,   (2)

we will minimize the cross-entropy between I(x_l) and the distance distribution of x_l with respect to z_1, ..., z_C:

L(x_l, z_1, ..., z_C)_L = H(I(x_l), P(x_l; z_1, ..., z_C))   (3)

This is in fact a slightly modified version of the distance ratio loss introduced in Hoffer & Ailon (2015):

L(x_l, z_1, ..., z_C)_L = −log [ e^{−‖F(x_l)−F(z_k)‖^2} / Σ_i e^{−‖F(x_l)−F(z_i)‖^2} ]   (4)

This loss aims to ensure that samples belonging to the same class will be mapped to have a small embedded distance compared to samples from different classes."}, {"section_index": "4", "section_name": "4.2 MINIMUM ENTROPY CRITERION", "section_text": "Another part of the optimized criterion, inspired by Grandvalet & Bengio (2004), is designed to reduce the overlap between the different classes of the unlabeled samples.

We promote this objective by minimizing the entropy of the underlying distance distribution of x, again with respect to the labeled samples z_1, ..., z_C:

L(x, z_1, ..., z_C)_U = H(P(x; z_1, ..., z_C)) = −Σ_i [ e^{−‖F(x)−F(z_i)‖^2} / Σ_j e^{−‖F(x)−F(z_j)‖^2} ] log [ e^{−‖F(x)−F(z_i)‖^2} / Σ_j e^{−‖F(x)−F(z_j)‖^2} ]   (5)

We note that the entropy is lower if the distribution (1) is sparse and higher if it is dense, and this intuition is compatible with our objectives.

Our final objective uses a sampled set of labeled examples in which each class is represented, {z_1, ..., z_C}, together with an additional labeled example x_l and an unlabeled example x_u, combining a weighted sum of both (3) and (5) to form

L(x_l, x_u, {z_1, ..., z_C}) = λ_L L(x_l, z_1, ..., z_C)_L + λ_U L(x_u, z_1, ..., z_C)_U   (6)

This loss is differentiable and hence can be used for gradient-based training of deep models with existing optimization approaches and back-propagation (Rumelhart et al.) through the embedding neural network. The optimization can further be accelerated computationally by using mini-batches of both labeled and unlabeled examples.

We will now discuss some observed properties of neighbor embeddings and their usefulness in semi-supervised regimes with neural network models."}, {"section_index": "5", "section_name": "5.1 REDUCING OVERFIT", "section_text": "Usually, when using NNs for classification, cross-entropy loss minimization is employed with a fixed one-hot indicator (similar to (2)) as the target for each labeled example, thus maximizing the log-likelihood of the correct label. This form of optimization over a fixed target tends to cause overfitting of the neural network, especially on small labeled sets. This was lately discussed and addressed by Szegedy et al. (2015) by adding random noise to the targets, sampled uniformly from the set of classes, effectively smoothing the cross-entropy target distribution. This regularization technique was shown empirically to yield better generalization by reducing overfitting on the training set.

Training on distance-ratio comparisons, as shown in our work, provides a natural alternative to this problem. By setting the optimization target to be the embeddings of labeled examples, we create a continuously moving target that depends on the current model parameters. We speculate that this reduces the model's ability to overfit easily on the training data, allowing very small labeled datasets to be exploited."}, {"section_index": "6", "section_name": "5.2 EMBEDDING INTO EUCLIDEAN SPACE", "section_text": "By training the model to create feature embeddings that are discriminative with respect to their distance in euclidean space, we can achieve good classification accuracy using a simple nearest-neighbor classifier. This embedding allows an interpretation of semantic relations in euclidean space, which can be useful for various tasks such as information retrieval or transfer learning."}, {"section_index": "7", "section_name": "5.3 COMBINING SUPERVISED AND UNSUPERVISED OBJECTIVES", "section_text": "Neighbor embedding is composed of both a supervised (3) and an unsupervised (5) objective, weighted by the λ_L and λ_U coefficients. These can be used to balance the two terms, or possibly annealed over time (Zamora-Martinez et al. (2016)), to adapt to the availability of labeled samples. This form of balancing was previously found to allow better representation learning using unlabeled data.

We also note that prior knowledge about the problem at hand can be incorporated into the expected measures with respect to the distance distribution (1). E.g., knowledge of the relative distance between classes can be used to replace I(x) as the target distribution in eq. (3), and knowledge concerning overlap between classes can be used to relax the constraint in eq. (5).

All experiments were conducted using the Torch7 framework by Collobert et al. (2011). Code reproducing these results will be available at https://github.com/eladhoffer/SemiSupContrast. For every experiment we chose a small random subset of examples, with a balanced number from each class, denoted by X_L. The remaining training images are used without their labels to form X_U. Finally, we test our final accuracy with a disjoint set of examples X_test.
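Before the experimental details, here is a minimal numpy sketch of the distance distribution (1) and the combined criterion (6); F(·) stands for the embedding network output, computed elsewhere, and in actual training these quantities would be evaluated over mini-batches with automatic differentiation. All names and the epsilon constants are illustrative:

import numpy as np

def distance_distribution(fx, fz):
    # Eq. (1): softmax over negative squared embedded distances.
    # fx: (D,) embedding F(x); fz: (C, D) one labeled embedding per class.
    d2 = np.sum((fz - fx) ** 2, axis=1)        # ||F(x) - F(z_i)||_2^2
    e = np.exp(-(d2 - d2.min()))               # shift for numerical stability
    return e / e.sum()

def neighbor_embedding_loss(fx_l, k, fx_u, fz, lam_l=1.0, lam_u=1.0):
    # Eq. (6): distance-ratio loss (3) on a labeled embedding fx_l of class k,
    # plus the entropy loss (5) on an unlabeled embedding fx_u.
    p_l = distance_distribution(fx_l, fz)
    p_u = distance_distribution(fx_u, fz)
    loss_l = -np.log(p_l[k] + 1e-12)            # cross-entropy with I(x)
    loss_u = -np.sum(p_u * np.log(p_u + 1e-12)) # entropy of distribution (1)
    return lam_l * loss_l + lam_u * loss_u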
No data augmentation was applied to the training sets.

In each iteration we sample uniformly a set of labeled examples z_1, ..., z_|C| ∈ X_L. In addition, batches of uniformly sampled examples are drawn from the labeled set X_L and from the unlabeled set X_U.

A batch size of b = 32 was used for all experiments, totaling a sampled set of 2·b + |C| examples per iteration, where |C| = 10 for both datasets. We used (6) as the optimization criterion, with λ_L = λ_U = 1. Optimization was done using the accelerated gradient method of Nesterov (1983), with an initial learning rate of lr_0 = 0.1 that was decreased by a factor of 10 every 30 epochs. Both datasets were trained on for a total of 90 epochs. Final test accuracy was obtained using a k-NN classifier, taking the best result out of k ∈ {1, 3, 5}. These results were averaged over 10 random subsets of labeled data. The choices of λ and the k-NN parameters were made using a validation set. We did not find any substantial difference between the values we explored, so they were usually left at their default values for simplicity.

Table 1: Results for MNIST. Using 100 labeled examples, no data-augmentation.

Model | Test error %
EmbedCNN (Weston et al., 2012) | 7.75
SWWAE (Zhao et al., 2015) | 9.17
Ladder network (Rasmus et al., 2015) | 0.89 (± 0.50)
Conv-CatGAN (Springenberg, 2016) | 1.39 (± 0.28)
Ours | 0.78 (± 0.3)

As the embedding model was chosen to be a convolutional network, the spatial properties of the input space are crucial. We thus omit results on permutation-invariant versions of these problems, noting that they usually achieve worse classification accuracy. We also note that the neural network models themselves are very simple, to ensure that the performance achieved is due to the proposed objective and not the network architecture.

The MNIST database of handwritten digits introduced by LeCun et al. (1998) is one of the most studied benchmark datasets for image classification. The dataset contains 60,000 examples of handwritten digits from 0 to 9 for training and 10,000 additional examples for testing, where each sample is a 28×28 pixel gray-level image.

We followed previous works (Weston et al. (2012); Zhao et al. (2015); Rasmus et al. (2015)) and used a semi-supervised regime in which only 100 samples (10 for each class) are used as X_L along with their labels. For the embedding network, we used a convolutional network with 5 convolutional layers, where each layer is followed by a ReLU non-linearity and a batch-normalization layer (Ioffe & Szegedy, 2015). The full network structure is described in Appendix Table 3. Results are displayed in Table 1 and show that our approach yields state-of-the-art results in this regime.

We also attempted to visualize the outcome of using this method by training an additional model with a final 2-dimensional embedding. Figure 1b shows the final embeddings, where labeled examples are marked in color with their respective class and unlabeled examples are marked in gray. We can see that, in accordance with our objectives, the labeled examples formed clusters in euclidean space separated by their labels, while unlabeled examples were largely grouped to belong each to one of these clusters.

Cifar-10, introduced by Krizhevsky & Hinton (2009), is an image classification benchmark dataset containing 50,000 training images and 10,000 test images. The images are 32×32 pixels, in color.
The classes are airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks.

Following a commonly used regime, we trained on 4,000 randomly picked samples (400 for each class). As the convolutional embedding network, we used a network similar to that of Lin et al. (2013), described in Table 3. The test error results are shown in Table 2.

As can be observed, we achieve results competitive with the state-of-the-art in this regime. We also note that the current best results come from generative models such as Springenberg (2016) and Salimans et al. (2016), which follow an elaborate and computationally heavy training procedure compared with our approach.

Table 2: Results for Cifar-10. Using 4000 labeled samples, no data-augmentation.

Model | Test error %
Spike-and-Slab Sparse Coding (Goodfellow et al., 2012) | 31.9
View-Invariant k-means (Hui, 2013) | 27.4 (± 0.7)
Exemplar-CNN (Dosovitskiy et al., 2014) | 23.4 (± 0.2)
Ladder network (Rasmus et al., 2015) | 20.04 (± 0.47)
Conv-CatGAN (Springenberg, 2016) | 19.58 (± 0.58)
ImprovedGAN (Salimans et al., 2016) | 18.63 (± 2.32)
Ours | 20.3 (± 0.5)"}, {"section_index": "8", "section_name": "7 CONCLUSIONS", "section_text": "In this work we have shown how neural networks can be used to learn in a semi-supervised setting with small sets of labeled data, by replacing the classification objective with a metric embedding one. We introduced an objective for semi-supervised learning formulated as the minimization of entropy over a distance-encoding distribution. This objective is compliant with standard techniques for training deep neural networks and requires no modification of the embedding model. Using the method in this work, we were able to achieve state-of-the-art results on MNIST with only 100 labeled examples and competitive results on the Cifar-10 dataset. We speculate that this form of learning is beneficial to neural network models by decreasing their tendency to overfit small sets of training data. The objectives formulated here can potentially leverage prior knowledge on the distribution of classes or samples, as well as incorporate this knowledge in the training process. For example, utilizing the learned embedded distance, we speculate that better sampling can be done instead of a uniform one over the entire set.

Further exploration is needed to apply this method to large-scale problems spanning a large number of available classes, which we leave to future work.

Figure 1: MNIST 2d visualization before (a) and after (b) training. 100 colored labeled samples; unlabeled samples marked in gray.

The research leading to these results has received funding from the European Research Council under the European Union's Horizon 2020 Program, ERC Grant agreement no. 682203 "SpeedInfTradeOff"."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Jane Bromley, James W Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.

Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.

Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks.
In Advances in Neural Information Processing Systems, pp. 766-774, 2014.

Jiashi Feng, Tom Zahavy, Bingyi Kang, Huan Xu, and Shie Mannor. Ensemble robustness of deep learning algorithms. arXiv preprint arXiv:1602.02389, 2016.

Jacob Goldberger, Geoffrey E Hinton, Sam T Roweis, and Ruslan Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems, pp. 513-520, 2004.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Ian J. Goodfellow, Aaron Courville, and Yoshua Bengio. Large-scale feature learning with spike-and-slab sparse coding. 2012.

Ka Y Hui. Direct modeling of complex invariances for visual object features. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 352-360, 2013.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448-456, 2015.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing by virtual adversarial examples. stat, 1050:2, 2015.

Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). 1983.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Oren Rippel, Manohar Paluri, Piotr Dollar, and Lubomir Bourdev. Metric learning with adaptive density discrimination. stat, 1050:18, 2015.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815-823, 2015.

Vladimir Naumovich Vapnik and Vladimir Vapnik. Statistical learning theory, volume 1. Wiley, New York, 1998.

Jason Weston, Frederic Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer, 2012.

Francisco Zamora-Martinez, Javier Munoz-Almaraz, and Juan Pardo. Integration of unsupervised and supervised criteria for deep neural networks training. In International Conference on Artificial Neural Networks, pp. 55-62. Springer, 2016.

Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.

Table 3: Convolutional models - (feature-maps, kernel, stride, padding) for each layer.
Convolutional layers are each followed by ReLU and Batch-norm.

MNIST model (input: 28×28 monochrome):
Conv-ReLU-BN (16, 5x5, 1x1, 1x1)
Max-Pooling (2x2, 2x2)
Conv-ReLU-BN (32, 3x3, 1x1, 1x1)
Conv-ReLU-BN (64, 3x3, 1x1, 1x1)
Max-Pooling (2x2, 2x2)
Conv-ReLU-BN (64, 3x3, 1x1, 1x1)
Conv-ReLU-BN (128, 3x3, 1x1, 1x1)
Avg-Pooling (6x6, 1x1)

Cifar-10 model (input: 32×32 RGB):
Conv-ReLU-BN (192, 5x5, 1x1, 2x2)
Conv-ReLU-BN (160, 1x1, 1x1)
Conv-ReLU-BN (96, 1x1, 1x1)
Max-Pooling (3x3, 2x2)
Conv-ReLU-BN (96, 5x5, 1x1, 2x2)
Conv-ReLU-BN (192, 1x1, 1x1)
Conv-ReLU-BN (192, 1x1, 1x1)
Max-Pooling (3x3, 2x2)
Conv-ReLU-BN (192, 3x3, 1x1, 1x1)
Conv-ReLU-BN (192, 1x1, 1x1)
Avg-Pooling (7x7, 1x1)"}]
S1J0E-71l
[{"section_index": "0", "section_name": "SURPRISAL-DRIVEN FEEDBACK IN RECURRENT NET WORKS", "section_text": "Kamil Rocki"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Based on human performance on the same task, it is believed that an important ingredient which is missing in state-of-the-art variants of recurrent networks is top-down feedback. Despite evidence of its existence, it is not entirely clear how mammalian brain might implement such a mechanism. It is important to understand what kind of top-down interaction contributes to improved prediction capability in order to tackle more challenging AI problems requiring interpretation of deeper con- textual information. Furthermore, it might provide clues as what makes human cognitive abilities so unique. Existing approaches which consider top-down feedback in neural networks are primar- ily focused on stacked layers of neurons, where higher-level representations constitute a top-down signal source. In this paper, we propose that the discrepancy between most recent predictions and observations might be effectively used as a feedback signal affecting further predictions. It is very common to use such a discrepancy during learning phase as the error which is subject to minimiza tion, but not during inference. We show that is also possible to use such top-down signal without losing generality of the algorithm and that it improves generalization capabilities when applied to. Long-Short Term Memory (Hochreiter & Schmidhuber! 1997) architecture. It is important to point out that the feedback idea presented here applies only to temporal data..\nThe main contributions of this work are."}, {"section_index": "2", "section_name": "1.2 RELATED WORK", "section_text": "There exist other approaches which attempted to introduce top-down input for improving predic tions. One such architecture is Gated-Feedback RNN (Chung et al.2015). An important difference between architecture proposed here and theirs is the source of the feedback signal. In GF-RNN it is assumed that there exist higher level representation layers and they constitute the feedback source"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Recurrent neural nets are widely used for predicting temporal data. Their inher ent deep feedforward structure allows learning complex sequential patterns. It is believed that top-down feedback might be an important missing ingredient which n theory could help disambiguate similar patterns depending on broader context. In this paper, we introduce surprisal-driven recurrent networks, which take into account past error information when making new predictions. This is achieved by continuously monitoring the discrepancy between most recent predictions and the actual observations. Furthermore, we show that it outperforms other stochas tic and fully deterministic approaches on enwik8 character level prediction task achieving 1.37 BPC.\nthe introduction of a novel way of incorporating most recent misprediction measure as an additional input signal extending state-of-the-art performance on character-level text modeling using Hutter Wikipedia dataset.\nOn the other hand, here, feedback depends directly on the discrepancy between past predictions and current observation and operates even within a single layer. 
Another related concept is Ladder Networks (Rasmus et al., 2015), where top-down connections contribute to improved semi-supervised learning performance.

Figure 1: Illustration of the s_t signal on a typical batch of 16 sequences of length 100 from the enwik8 dataset. The y-axis is negative log probability in bits. Intuitively, the surprise signal is low when a text fragment is highly predictable (i.e. in the <timestamp> part, sequence no. 10, the tag itself is highly predictable, whereas the exact date cannot be predicted and should not be the focus of attention). The main idea presented in this paper is that the feedback signal s_t should be able to help in distinguishing predictable and inherently unpredictable parts during the inference phase. (The 16 per-sequence surprisal plots are omitted here.)"}, {"section_index": "4", "section_name": "2.1 NOTATION", "section_text": "The following notation is used throughout the section:

x - inputs
h - hidden units
y - outputs
p - output probabilities (normalized y)
s - surprisal
t - time step
W - feedforward x -> h connection matrix
U - recurrent h -> h connection matrix
V - feedback s -> h connection matrix
S - truncated BPTT length
M - number of inputs
N - number of hidden units

In the case of the LSTM, the following concatenated representations are used:

g_t = [i_t, f_t, o_t, u_t],  b = [b_i, b_f, b_o, b_u],  W = [W_i, W_f, W_o, W_u],  U = [U_i, U_f, U_o, U_u],  V = [V_i, V_f, V_o, V_u]
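As a minimal illustration of these concatenated representations (a sketch, not the paper's code; the sizes M and N are arbitrary placeholders), the per-gate matrices can be stacked so that all four gate pre-activations in g_t come out of a single matrix product:

```python
import numpy as np

M, N = 256, 512  # input and hidden sizes (arbitrary here)

# Per-gate parameters for the gates i, f, o, u ...
W = {g: np.random.randn(N, M) * 0.01 for g in "ifou"}
U = {g: np.random.randn(N, N) * 0.01 for g in "ifou"}
V = {g: np.random.randn(N, M) * 0.01 for g in "ifou"}
b = {g: np.zeros(N) for g in "ifou"}

# ... and their concatenated forms: one (4N x M) or (4N x N) matrix per
# connection type, so g_t = [i; f; o; u] is computed in one shot.
W_cat = np.vstack([W[g] for g in "ifou"])
U_cat = np.vstack([U[g] for g in "ifou"])
V_cat = np.vstack([V[g] for g in "ifou"])
b_cat = np.concatenate([b[g] for g in "ifou"])
```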
First, we show a simple recurrent neural network architecture without feedback, which serves as a basis for demonstrating our approach. It is illustrated in Fig. 2 and formulated as follows:

h_t = tanh(W x_t + U h_{t-1} + b)   (1)

Figure 2: Simple RNN; h - internal (hidden) states; x are inputs; y are optional outputs to be emitted.

Figure 3: Surprisal-Feedback RNN; s_t represents surprisal (in the information-theory sense), the discrepancy between the prediction at time step t-1 and the actual observation at time step t; it constitutes an additional input signal to be considered when making a prediction for the next time step.

Figure 3 presents the main idea of surprisal-driven feedback in recurrent networks. In addition to the feedforward and recurrent connections W and U, we add one additional matrix V. One more input signal, namely V s_t, is considered when updating the hidden states of the network. We propose that the discrepancy s_t between the most recent predictions p_{t-1} and observations x_t might be effectively used as a feedback signal affecting further predictions. Such information is usually used during the learning phase as an error signal, but not during inference. Our hypothesis is that it represents an important source of information which can, and should, be used during the inference phase, and that it brings benefits in the form of improved generalization capability. Figure 1 presents examples of the feedback signal being considered. Intuitively, when surprisal is near zero, the sum of input signals is the same as in a typical RNN. The next subsections provide a mathematical description of the feedback architecture in terms of the forward and backward passes of the Back Propagation Through Time (BPTT) (Werbos, 1990) algorithm.

2.4 FORWARD PASS

Set h_0, c_0 to zero and p_0 to the uniform distribution, or carry over the last state to emulate full BPTT:

p_0^i = 1/M,  i in {0, 1, ..., M-1},  t = 0   (3)

for t = 1:1:S-1

I. Surprisal part:

s_t = -log p_{t-1} ⊙ x_t   (4)

IIa. Computing hidden activities, simple RNN:

h_t = tanh(W x_t + U h_{t-1} + V s_t + b)   (5)

IIb. Computing hidden activities, LSTM (to be used instead of IIa):

f_t = σ(W_f x_t + U_f h_{t-1} + V_f s_t + b_f)   (6)
i_t = σ(W_i x_t + U_i h_{t-1} + V_i s_t + b_i)   (7)
o_t = σ(W_o x_t + U_o h_{t-1} + V_o s_t + b_o)   (8)
u_t = tanh(W_u x_t + U_u h_{t-1} + V_u s_t + b_u)   (9)
c_t = (1 - f_t) ⊙ c_{t-1} + i_t ⊙ u_t   (10)
ĉ_t = tanh(c_t)   (11)
h_t = o_t ⊙ ĉ_t   (12)

III. Outputs:

y_t = W_y h_t + b_y   (13)

Softmax normalization:

p_t^i = e^{y_t^i} / Σ_j e^{y_t^j}   (14)
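A minimal NumPy sketch of one step of this forward pass (eqs. 4-14) follows. It is not the author's implementation; it treats s_t as an elementwise vector, which with a one-hot x_t has a single nonzero entry, and assumes the stacked parameter layout introduced above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedback_lstm_step(x_t, h_prev, c_prev, p_prev, params):
    """One forward step of the surprisal-feedback LSTM (eqs. 4-14).
    x_t is a one-hot input; p_prev is the softmax output from step t-1."""
    W, U, V, b, Wy, by = (params[k] for k in "W U V b Wy by".split())
    # I. Surprisal: negative log-likelihood the model assigned to the
    # symbol that actually arrived (nonzero only at the observed symbol).
    s_t = -np.log(p_prev) * x_t
    # II. Gate pre-activations; all receive the extra feedback term V s_t.
    g = W @ x_t + U @ h_prev + V @ s_t + b   # stacked [i; f; o; u]
    N = h_prev.size
    i_t, f_t, o_t = (sigmoid(g[k * N:(k + 1) * N]) for k in range(3))
    u_t = np.tanh(g[3 * N:])
    c_t = (1.0 - f_t) * c_prev + i_t * u_t
    h_t = o_t * np.tanh(c_t)
    # III. Output distribution.
    y_t = Wy @ h_t + by
    p_t = np.exp(y_t - y_t.max())
    p_t /= p_t.sum()
    return h_t, c_t, p_t
```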
2.5 BACKWARD PASS

for t = S-1:-1:1

I. Backprop through predictions (backprop through softmax, cross-entropy error, accumulate):

∂E_t/∂y_t ← ∂E_t/∂y_t + p_{t-1} - x_t   (15)

δy → δW_y, δb_y:

∂E/∂W_y ← ∂E/∂W_y + (∂E_t/∂y_t) h_t^T   (16)
∂E/∂b_y ← ∂E/∂b_y + Σ_{i=1..M} ∂E_t/∂y_t^i   (17)

δy → δh:

∂E_t/∂h_t ← ∂E_t/∂h_t + W_y^T (∂E_t/∂y_t)   (18)

IIa. Backprop through the hidden nonlinearity (simple RNN version):

∂E_t/∂h_t ← ∂E_t/∂h_t + (∂E_t/∂h_t) ⊙ tanh'(h_t)   (19)
∂E_t/∂g_t = ∂E_t/∂h_t   (20)

IIb. Backprop through c, h, g (LSTM version; keep the memory-cell gradients from the previous iteration):

∂E_t/∂c_t ← ∂E_t/∂c_t + (∂E_t/∂h_t) ⊙ o_t ⊙ tanh'(ĉ_t)   (21)
∂E_t/∂c_{t-1} ← ∂E_t/∂c_{t-1} + (∂E_t/∂c_t) ⊙ (1 - f_t)   (22)

Propagate the error through the gates:

∂E_t/∂o_t = (∂E_t/∂h_t) ⊙ ĉ_t ⊙ σ'(o_t)   (23)
∂E_t/∂i_t = (∂E_t/∂c_t) ⊙ u_t ⊙ σ'(i_t)   (24)
∂E_t/∂f_t = (∂E_t/∂c_t) ⊙ c_{t-1} ⊙ σ'(f_t)   (25)
∂E_t/∂u_t = (∂E_t/∂c_t) ⊙ i_t ⊙ tanh'(u_t)   (26)

Carry the error over to ∂E_t/∂h_{t-1}:

∂E_t/∂h_{t-1} ← ∂E_t/∂h_{t-1} + U^T (∂E_t/∂g_t)   (27)

Backprop through the linearities:

∂E/∂b ← ∂E/∂b + Σ_{i=1..N} ∂E_t/∂g_t^i   (28)
∂E/∂U ← ∂E/∂U + (∂E_t/∂g_t) h_{t-1}^T   (29)
∂E/∂W ← ∂E/∂W + (∂E_t/∂g_t) x_t^T   (30)
∂E/∂x_t ← ∂E/∂x_t + W^T (∂E_t/∂g_t)   (31)

Surprisal part:

∂E/∂V ← ∂E/∂V + (∂E_t/∂g_t) s_t^T   (32)
∂E/∂s_t = V^T (∂E_t/∂g_t)   (33)
∂E_t/∂p_{t-1} = (∂s_t/∂p_{t-1})^T (∂E_t/∂s_t)   (34)

Finally, accumulate the gradients and carry them over to ∂E_t/∂y_{t-1} through the softmax.

Figure 4: Training progress on the enwik8 corpus, bits/character."}, {"section_index": "5", "section_name": "3 EXPERIMENTS", "section_text": "We ran experiments on the enwik8 dataset. It constitutes the first 10^8 bytes of an English Wikipedia dump (with all extra symbols present in the XML), also known as the Hutter Prize challenge dataset. The first 90% of each corpus was used for training, the next 5% for validation and the last 5% for reporting test accuracy. In each iteration sequences of length 10000 were randomly selected. The learning algorithm used was Adagrad with a learning rate of 0.001. Weights were initialized using so-called Xavier initialization (Glorot & Bengio, 2010). The sequence length for BPTT was 100 and the batch size 128; states were carried over for the entire sequence of 10000, emulating full BPTT. The forget bias was set initially to 1, other parameters to zero. The algorithm was written in C++ and CUDA 8 and ran on a GTX Titan GPU for up to 10 days. Table 1 presents results comparing existing state-of-the-art approaches to the introduced Feedback LSTM algorithm, which outperforms all other methods despite not having any regularizer.

Table 1: Bits per character on the Hutter Wikipedia dataset (test data).

                                                      BPC
mRNN (Sutskever et al., 2011)                         1.60
GF-RNN (Chung et al., 2015)                           1.58
Grid LSTM (Kalchbrenner et al., 2015)                 1.47
Standard LSTM                                         1.45
MI-LSTM (Wu et al., 2016)                             1.44
Recurrent Highway Networks (Zilly et al., 2016)       1.42
Array LSTM (Rocki, 2016)                              1.40
Feedback LSTM                                         1.39
Hypernetworks (Ha et al., 2016)                       1.38
Feedback LSTM + Zoneout (Krueger et al., 2016)        1.37"}, {"section_index": "6", "section_name": "4 SUMMARY", "section_text": "We introduced a feedback recurrent network architecture which takes advantage of the temporal nature of the data and monitors the discrepancy between predictions and observations. This prediction error information, also known as surprisal, is used when making new guesses. We showed that combining the commonly used feedforward and recurrent signals with such feedback signals improves the generalization capabilities of the Long Short-Term Memory network.
It outperforms other stochastic and fully deterministic approaches on enwik8 character-level prediction, achieving 1.37 BPC.

It is still an open question what the feedback should really constitute, as well as how it should interact with lower-level neurons (additive, multiplicative or another type of connection). Further improvements may be possible with the addition of regularization. Another research direction is incorporating sparsity in order to improve the disentangling of sources of variation in temporal data."}, {"section_index": "7", "section_name": "ACKNOWLEDGEMENTS", "section_text": "This work has been supported in part by the Defense Advanced Research Projects Agency (DARPA)."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.

Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. CoRR, abs/1507.01526, 2015. URL http://arxiv.org/abs/1507.01526.

David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron C. Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. CoRR, abs/1606.01305, 2016. URL http://arxiv.org/abs/1606.01305.

Kamil Rocki. Recurrent memory array structures. arXiv preprint arXiv:1607.03085, 2016.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks, 2016."}]
Skq89Scxx
[{"section_index": "0", "section_name": "SGDR: STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS", "section_text": "Ilya Loshchilov & Frank Hutter\nilya, fh}@cs.uni-freiburg.de"}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "Deep neural networks (DNNs) are currently the best-performing method for many classificatio problems, such as object recognition from images (Krizhevsky et al.| 2012aDonahue et al. 2014 or speech recognition from audio data (Deng et al. 2013). Their training on large datasets (wher DNNs perform particularly well) is the main computational bottleneck: it often requires severa days, even on high-performance GPUs, and any speedups would be of substantial value.\nThe training of a DNN with n free parameters can be formulated as the problem of minimizing a function f : Rn -> IR. The commonly used procedure to optimize f is to iteratively adjust xt E IRn. (the parameter vector at time step t) using gradient information ft(xt) obtained on a relatively. small t-th batch of b datapoints. The Stochastic Gradient Descent (SGD) procedure then becomes an extension of the Gradient Descent (GD) to stochastic optimization of f as follows:\nXt+1=xt-ntVft(xt)\nwhere nt is a learning rate. One would like to consider second-order information\nxt+1=xt-ntH--Vft(xt);\nbut this is often infeasible since the computation and storage of the inverse Hessian H- is in tractable for large n. The usual way to deal with this problem by using limited-memory quasi Newton methods such as L-BFGS (Liu & Nocedal|1989) is not currently in favor in deep learning, not the least due to (i) the stochasticity of ft(xt), (ii) ill-conditioning of f and (iii) the presence of saddle points as a result of the hierarchical geometric structure of the parameter space (Fukumizu & AmariJ 2ooo). Despite some recent progress in understanding and addressing the latter problems (Bordes et al.[2009[Dauphin et al.[2014] Choromanska et al.]2014}Dauphin et al.]2015), state-of- the-art optimization techniques attempt to approximate the inverse Hessian in a reduced way, e.g., by considering only its diagonal to achieve adaptive learning rates. AdaDelta (Zeiler2012) and Adam (Kingma & Bal2014) are notable examples of such methods."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Restart techniques are common in gradient-free optimization to deal with multi nodal functions. Partial warm restarts are also gaining popularity in gradient- oased optimization to improve the rate of convergence in accelerated gradieni schemes to deal with ill-conditioned functions. In this paper, we propose a sim- le warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its per- formance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate ts advantages on a dataset of EEG recordings and on a downsampled version of he ImageNet dataset. 
Our source code is available at

Figure 1: Alternative schedule schemes of learning rate η_t over batch index t: default schemes with η_0 = 0.1 (blue line) and η_0 = 0.05 (red line) as used by Zagoruyko & Komodakis (2016); warm restarts simulated every T_0 = 50 (green line), T_0 = 100 (black line) and T_0 = 200 (grey line) epochs with η_t decaying during the i-th run from η_max = 0.05 to η_min = 0 according to eq. (5); warm restarts starting from epoch T_0 = 1 (dark green line) and T_0 = 10 (magenta line) with doubling (T_mult = 2) periods T_i at every new warm restart.

Intriguingly enough, the current state-of-the-art results on the CIFAR-10, CIFAR-100, SVHN, ImageNet, PASCAL VOC and MS COCO datasets were obtained by Residual Neural Networks (He et al., 2015; Huang et al., 2016c; He et al., 2016; Zagoruyko & Komodakis, 2016) trained without the use of advanced methods such as AdaDelta and Adam. Instead, they simply use SGD with momentum:

v_{t+1} = μ_t v_t - η_t ∇f_t(x_t),   (3)
x_{t+1} = x_t + v_{t+1},   (4)

where v_t is a velocity vector initially set to 0, η_t is a decreasing learning rate and μ_t is a momentum rate which defines the trade-off between the current and past observations of ∇f_t(x_t). The main difficulty in training a DNN is then associated with the scheduling of the learning rate and the amount of L2 weight decay regularization employed. A common learning rate schedule is to use a constant learning rate and divide it by a fixed constant in (approximately) regular intervals. The blue line in Figure 1 shows an example of such a schedule, as used by Zagoruyko & Komodakis (2016) to obtain the state-of-the-art results on the CIFAR-10, CIFAR-100 and SVHN datasets.

In this paper, we propose to periodically simulate warm restarts of SGD, where in each restart the learning rate is initialized to some value and is scheduled to decrease. Four different instantiations of this new learning rate schedule are visualized in Figure 1. Our empirical results suggest that SGD with warm restarts requires 2 to 4 times fewer epochs than the currently-used learning rate schedule schemes to achieve comparable or even better results. Furthermore, combining the networks obtained right before restarts in an ensemble, following the approach proposed by Huang et al. (2016a), improves our results further to 3.14% for CIFAR-10 and 16.21% for CIFAR-100. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset.

When optimizing multimodal functions one may want to find all global and local optima. The tractability of this task depends on the landscape of the function at hand and the budget of function evaluations. Gradient-free optimization approaches based on niching methods (Preuss, 2015) usually can deal with this task by covering the search space with dynamically allocated niches of local optimizers. However, these methods usually work only for relatively small search spaces, e.g., n < 10, and do not scale up due to the curse of dimensionality (Preuss, 2010). Instead, the current state-of-the-art gradient-free optimizers employ various restart mechanisms (Hansen, 2009; Loshchilov et al., 2012). One way to deal with multimodal functions is to iteratively sample a large number λ of candidate solutions, make a step towards better solutions and slowly shape the sampling
distribution to maximize the likelihood of successful steps appearing again (Hansen & Kern, 2004). The larger the λ, the more global search is performed, requiring more function evaluations. In order to achieve good anytime performance, it is common to start with a small λ and increase it (e.g., by doubling) after each restart. This approach works best on multimodal functions with a global funnel structure and also improves the results on ill-conditioned problems where numerical issues might lead to premature convergence when λ is small (Hansen, 2009)."}, {"section_index": "3", "section_name": "2.2 RESTARTS IN GRADIENT-BASED OPTIMIZATION", "section_text": "Gradient-based optimization algorithms such as BFGS can also perform restarts to deal with multimodal functions (Ros, 2009). In large-scale settings, when the usual number of variables n is on the order of 10^3-10^9, the availability of gradient information provides a speedup of a factor of n w.r.t. gradient-free approaches. Warm restarts are usually employed to improve the convergence rate rather than to deal with multimodality: often it is sufficient to approach any local optimum to a given precision, and in many cases the problem at hand is unimodal. Fletcher & Reeves (1964) proposed to flush the history of the conjugate gradient method every n or (n + 1) iterations. Powell (1977) proposed to check whether enough orthogonality between ∇f(x_{t-1}) and ∇f(x_t) has been lost to warrant another warm restart. Recently, O'Donoghue & Candes (2012) noted that the iterates of accelerated gradient schemes proposed by Nesterov (1983; 2013) exhibit a periodic behavior if momentum is overused. The period of the oscillations is proportional to the square root of the local condition number of the (smooth convex) objective function. The authors showed that fixed warm restarts of the algorithm with a period proportional to the condition number achieve the optimal linear convergence rate of the original accelerated gradient scheme. Since the condition number is an unknown parameter and its value may vary during the search, they proposed two adaptive warm restart techniques (O'Donoghue & Candes, 2012):

The function scheme restarts whenever the objective function increases.

The gradient scheme restarts whenever the angle between the momentum term and the negative gradient is obtuse, i.e., when the momentum seems to be taking us in a bad direction, as measured by the negative gradient at that point. This scheme resembles the one of Powell (1977) for the conjugate gradient method.

Smith (2015; 2016) recently introduced cyclical learning rates for deep learning; his approach is closely related to our approach in its spirit and formulation but does not focus on restarts.

Yang & Lin (2015) showed that Stochastic subGradient Descent with restarts can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems where the epigraph of the objective function is a polyhedron. In contrast to our work, they never increase the learning rate to perform restarts but decrease it geometrically at each epoch. To perform restarts, they periodically reset the current solution to the averaged solution from the previous epoch."}, {"section_index": "4", "section_name": "STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS (SGDR)", "section_text": "The existing restart techniques can also be used for stochastic gradient descent if the stochasticity is taken into account.
Since gradients and loss values can vary widely from one batch of the data to another, one should denoise the incoming information: by considering averaged gradients and losses, e.g., once per epoch, the above-mentioned restart techniques can be used again.

In this work, we consider one of the simplest warm restart approaches. We simulate a new warm-started run / restart of SGD once T_i epochs are performed, where i is the index of the run. Importantly, the restarts are not performed from scratch but emulated by increasing the learning rate η_t while the old value of x_t is used as an initial solution. The amount of this increase controls to which extent the previously acquired information (e.g., momentum) is used.

Within the i-th run, we decay the learning rate with a cosine annealing for each batch as follows:

η_t = η_min^i + (1/2)(η_max^i - η_min^i)(1 + cos(π T_cur / T_i)),   (5)

where η_min^i and η_max^i are ranges for the learning rate, and T_cur accounts for how many epochs have been performed since the last restart. Since T_cur is updated at each batch iteration t, it can take discretized values such as 0.1, 0.2, etc. Thus, η_t = η_max^i when t = 0 and T_cur = 0. Once T_cur = T_i, the cos function will output -1 and thus η_t = η_min^i. The decrease of the learning rate is shown in Figure 1 for fixed T_i = 50, T_i = 100 and T_i = 200; note that the logarithmic axis obfuscates the typical shape of the cosine function.

In order to improve anytime performance, we suggest an option to start with an initially small T_i and increase it by a factor of T_mult at every restart (see, e.g., Figure 1 for T_0 = 1, T_mult = 2 and T_0 = 10, T_mult = 2). It might be of great interest to decrease η_max^i and η_min^i at every new restart. However, for the sake of simplicity, here, we keep η_max^i and η_min^i the same for every i to reduce the number of hyperparameters involved.

Since our simulated warm restarts (the increase of the learning rate) often temporarily worsen performance, we do not always use the last x_t as our recommendation for the best solution (also called the incumbent solution). While our recommendation during the first run (before the first restart) is indeed the last x_t, our recommendation after this is a solution obtained at the end of the last performed run at η_t = η_min^i. We emphasize that with the help of this strategy, our method does not require a separate validation data set to determine a recommendation.
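A minimal sketch of the schedule in eq. (5) with warm restarts, written here as a standalone Python function rather than the authors' code (the default values are illustrative):

```python
import math

def sgdr_lr(epoch, eta_min=0.0, eta_max=0.05, T0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts, eq. (5).
    `epoch` may be fractional to emulate per-batch updates of T_cur."""
    T_i, T_cur = T0, epoch
    while T_cur >= T_i:          # locate the current restart period
        T_cur -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * T_cur / T_i))
```

Calling it with a fractional epoch index (e.g., `sgdr_lr(epoch + batch / n_batches)`) reproduces the per-batch update of T_cur described above; at each restart the returned rate jumps back to η_max.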
"}, {"section_index": "5", "section_name": "4.1 EXPERIMENTAL SETTINGS", "section_text": "We consider the problem of training Wide Residual Neural Networks (WRNs; see Zagoruyko & Komodakis (2016) for details) on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009). We will use the abbreviation WRN-d-k to denote a WRN with depth d and width k. Zagoruyko & Komodakis (2016) obtained the best results with a WRN-28-10 architecture, i.e., a Residual Neural Network with d = 28 layers and k = 10 times more filters per layer than used in the original Residual Neural Networks (He et al., 2015; 2016).

For training, Zagoruyko & Komodakis (2016) used SGD with Nesterov's momentum with initial learning rate set to η_0 = 0.1, weight decay to 0.0005, dampening to 0, momentum to 0.9 and minibatch size to 128. The learning rate is dropped by a factor of 0.2 at 60, 120 and 160 epochs, with a total budget of 200 epochs. We reproduce the results of Zagoruyko & Komodakis (2016) with the same settings except that i) we subtract the per-pixel mean only and do not use ZCA whitening; ii) we use SGD with momentum as described by eq. (3)-(4) and not Nesterov's momentum.

The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 32x32 color images drawn from 10 and 100 classes, respectively, split into 50,000 train and 10,000 test images. For image preprocessing Zagoruyko & Komodakis (2016) performed global contrast normalization and ZCA whitening. For data augmentation they performed horizontal flips and random crops from the image padded by 4 pixels on each side, filling missing pixels with reflections of the original image.

Figure 2: Test errors on CIFAR-10 (left column) and CIFAR-100 (right column) datasets. Note that for SGDR we only plot the recommended solutions. The top and middle rows show the same results on WRN-28-10, with the middle row zooming into the good performance region of low test error. The bottom row shows performance with a wider network, WRN-28-20. The results of the default learning rate schedules of Zagoruyko & Komodakis (2016) with η_0 = 0.1 and η_0 = 0.05 are depicted by the blue and red lines, respectively. The schedules of η_t used in SGDR are shown with: i) restarts every T_0 = 50 epochs (green line); ii) restarts every T_0 = 100 epochs (black line); iii) restarts every T_0 = 200 epochs (gray line); iv) restarts with doubling (T_mult = 2) periods of restarts starting from the first epoch (T_0 = 1, dark green line); and v) restarts with doubling (T_mult = 2) periods of restarts starting from the tenth epoch (T_0 = 10, magenta line).

The schedule of η_t used by Zagoruyko & Komodakis (2016) is depicted by the blue line in Figure 1. The same schedule but with η_0 = 0.05 is depicted by the red line. The schedules of η_t used in SGDR are also shown in Figure 1, with two initial restart periods T_0 and doubling of the restart period (T_mult = 2).

Table 1: Test errors of different methods on CIFAR-10 and CIFAR-100 with moderate data augmentation (flip/translation). In the second column k is a widening factor for WRNs. Note that the computational and memory resources used to train all WRN-28-10 are the same. In all other cases they are different, but WRNs are usually faster than original ResNets to achieve the same accuracy (e.g., up to a factor of 8 according to Zagoruyko & Komodakis (2016)). Bold text is used only to highlight better results and is not based on statistical tests (too few runs)."}, {"section_index": "6", "section_name": "4.2 SINGLE-MODEL RESULTS", "section_text": "Table 1 shows that our experiments reproduce the results given by Zagoruyko & Komodakis (2016) for WRN-28-10 both on CIFAR-10 and CIFAR-100. These "default" experiments with η_0 = 0.1 and η_0 = 0.05 correspond to the blue and red lines in Figure 2. The results for η_0 = 0.05 show better performance, and therefore we use η_0 = 0.05 in our later experiments.

SGDR with T_0 = 50, T_0 = 100 and T_0 = 200 for T_mult = 1 performs warm restarts every 50, 100 and 200 epochs, respectively.
A single run of SGD with the schedule given by eq. (5) for T_0 = 200 shows the best results, suggesting that the original schedule of WRNs might be suboptimal w.r.t. the test error in these settings. However, the same setting with T_0 = 200 leads to the worst anytime performance except for the very last epochs.

SGDR with T_0 = 1, T_mult = 2 and T_0 = 10, T_mult = 2 performs its first restart after 1 and 10 epochs, respectively. Then, it doubles the maximum number of epochs for every new restart. The main purpose of this doubling is to reach good test error as soon as possible, i.e., to achieve good anytime performance. Figure 2 shows that this is achieved, and test errors around 4% on CIFAR-10 and around 20% on CIFAR-100 can be obtained about 2-4 times faster than with the default schedule used by Zagoruyko & Komodakis (2016).

                                     depth-k   # params   # runs      CIFAR-10   CIFAR-100
original-ResNet (He et al., 2015)    110       1.7M       mean of 5   6.43       25.16
                                     1202      10.2M      mean of 5   7.93       27.82
stoc-depth (Huang et al., 2016c)     110       1.7M       1 run       5.23       24.58
                                     1202      10.2M      1 run       4.91       n/a
pre-act-ResNet (He et al., 2016)     110       1.7M       med. of 5   6.37       n/a
                                     164       1.7M       med. of 5   5.46       24.33
                                     1001      10.2M      med. of 5   4.62       22.71
WRN (Zagoruyko & Komodakis, 2016)    16-8      11.0M      1 run       4.81       22.07
                                     28-10     36.5M      1 run       4.17       20.50
  with dropout                       28-10     36.5M      1 run       n/a        20.04
WRN (ours)
  default with η_0 = 0.1             28-10     36.5M      med. of 5   4.24       20.33
  default with η_0 = 0.05            28-10     36.5M      med. of 5   4.13       20.21
  T_0 = 50, T_mult = 1               28-10     36.5M      med. of 5   4.17       19.99
  T_0 = 100, T_mult = 1              28-10     36.5M      med. of 5   4.07       19.87
  T_0 = 200, T_mult = 1              28-10     36.5M      med. of 5   3.86       19.98
  T_0 = 1, T_mult = 2                28-10     36.5M      med. of 5   4.09       19.74
  T_0 = 10, T_mult = 2               28-10     36.5M      med. of 5   4.03       19.58
  default with η_0 = 0.1             28-20     145.8M     med. of 2   4.08       19.53
  default with η_0 = 0.05            28-20     145.8M     med. of 2   3.96       19.67
  T_0 = 50, T_mult = 1               28-20     145.8M     med. of 2   4.01       19.28
  T_0 = 100, T_mult = 1              28-20     145.8M     med. of 2   3.77       19.24
  T_0 = 200, T_mult = 1              28-20     145.8M     med. of 2   3.66       19.69
  T_0 = 1, T_mult = 2                28-20     145.8M     med. of 2   3.91       18.90
  T_0 = 10, T_mult = 2               28-20     145.8M     med. of 2   3.74       18.70

Since SGDR achieves good performance faster, it may allow us to train larger networks. We therefore investigated whether results on CIFAR-10 and CIFAR-100 can be further improved by making WRNs two times wider, i.e., by training WRN-28-20 instead of WRN-28-10. Table 1 shows that the results indeed improved, by about 0.25% on CIFAR-10 and by about 0.5-1.0% on CIFAR-100. While the WRN-28-20 architecture requires roughly three to four times more computation than WRN-28-10, the aggressive learning rate reduction of SGDR nevertheless allowed us to achieve a better error rate in the same time on WRN-28-20 as we spent on 200 epochs of training on WRN-28-10. Specifically, Figure 2 (right middle and right bottom) shows that after only 50 epochs, SGDR (even without restarts, using T_0 = 50, T_mult = 1) achieved an error rate below 19% (whereas none of the other learning methods performed better than 19.5% on WRN-28-10). We therefore have hope that, by enabling researchers to test new architectures faster, SGDR's good anytime performance may also lead to improvements of the state of the art.

In a final experiment for SGDR by itself, Figure 7 in the appendix compares SGDR and the default schedule with respect to training and test performance. As the figure shows, SGDR optimizes the training loss faster than the standard default schedule until about epoch 120.
After this, the default schedule overfits, as can be seen by an increase of the test error both on CIFAR-10 and CIFAR-100 (see, e.g., the right middle plot of Figure 7). In contrast, we only witnessed very mild overfitting for SGDR."}, {"section_index": "7", "section_name": "4.3 ENSEMBLE RESULTS", "section_text": "Our initial arXiv report on SGDR (Loshchilov & Hutter, 2016) inspired a follow-up study by Huang et al. (2016a) in which the authors suggest to take M snapshots of the models obtained by SGDR (in their paper referred to as a cyclical learning rate schedule and cosine annealing cycles) right before the M last restarts, and to use those to build an ensemble, thereby obtaining ensembles "for free" (in contrast to having to perform multiple independent runs). The authors demonstrated new state-of-the-art results on CIFAR datasets by making ensembles of DenseNet models (Huang et al., 2016b). Here, we investigate whether their conclusions hold for the WRNs used in our study. We used WRN-28-10 trained by SGDR with T_0 = 10, T_mult = 2 as our baseline model.

Figure 3: Test errors of ensemble models built from N runs of SGDR on WRN-28-10 with M model snapshots per run made at epochs 150, 70 and 30 (right before warm restarts of SGDR, as suggested by Huang et al. (2016a)). When M = 1 (respectively, M = 2), we aggregate probabilities of softmax layers of snapshot models at epoch index 150 (respectively, at epoch indexes 150 and 70).

Figure 3 and Table 2 aggregate the results of our study. The original test error of 4.03% on CIFAR-10 and 19.57% on CIFAR-100 (median of 16 runs) can be improved to 3.51% on CIFAR-10 and 17.75% on CIFAR-100 when M = 3 snapshots are taken at epochs 30, 70 and 150: when the learning rate of SGDR with T_0 = 10, T_mult = 2 is scheduled to achieve 0 (see Figure 1) and the models are used with uniform weights to build an ensemble. To achieve the same result, one would have to aggregate N = 3 models obtained at epoch 150 of N = 3 independent runs (see N = 3, M = 1 in Figure 3). Thus, the aggregation from snapshots provides a 3-fold speedup in these settings because additional (M > 1-th) snapshots from a single SGDR run are computationally free. Interestingly, aggregation of models from independent runs (when N > 1 and M = 1) does not scale up as well as from M > 1 snapshots of independent runs when the same number of models is considered: the case of N = 3 and M = 3 provides better performance than the cases of M = 1 with N = 18 and N = 21. Not only the number of snapshots M per run but also their origin is crucial. Thus, naively building ensembles from models obtained at the last epochs only (i.e., M = 3 snapshots at epochs 148, 149, 150) did not improve the results (i.e., the baseline of M = 1 snapshot at 150), thereby confirming the conclusion of Huang et al. (2016a) that snapshots of SGDR provide a useful diversity of predictions for ensembles.
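A sketch of the snapshot-ensemble aggregation described above, with uniform weights over softmax outputs; the `predict_proba` interface is a placeholder assumed here for illustration, not an API from the paper:

```python
import numpy as np

def ensemble_predict(snapshot_models, x):
    """Average the softmax outputs of the M snapshots taken right
    before SGDR's warm restarts and pick the argmax class."""
    probs = [m.predict_proba(x) for m in snapshot_models]  # each: (n, n_classes)
    return np.mean(probs, axis=0).argmax(axis=1)
```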
Three runs (N = 3) of SGDR with M = 3 snapshots per run are sufficient to greatly improve the results to 3.25% on CIFAR-10 and 16.64% on CIFAR-100, outperforming the results of Huang et al. (2016a). By increasing N to 16 one can achieve 3.14% on CIFAR-10 and 16.21% on CIFAR-100. We believe that these results could be further improved by considering better baseline models than WRN-28-10 (e.g., WRN-28-20)."}, {"section_index": "8", "section_name": "4.4 EXPERIMENTS ON A DATASET OF EEG RECORDINGS", "section_text": "To demonstrate the generality of SGDR, we also considered a very different domain: a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects with roughly 1000 trials per subject. The best classification results obtained with the original pipeline based on convolutional neural networks [R. Schirrmeister et al. Convolutional neural networks for EEG analysis: Design choices, training strategies, and feature visualization, under review at Neuroimage] were used as our reference. First, we compared the baseline learning rate schedule with different settings of the total number of epochs and initial learning rates (see Figure 4). When 30 epochs were considered, we dropped the learning rate by a factor of 10 at epoch indexes 10, 15 and 20. As expected, with more epochs used and a similar (budget-proportional) schedule, better results can be achieved. Alternatively, one can consider SGDR and get a similar final performance while having better anytime performance, without defining the total budget of epochs beforehand.

Similarly to our results on the CIFAR datasets, our experiments with the EEG data confirm that snapshots are useful and the median reference error (about 9%) can be improved i) by 1-2% when model snapshots of a single run are considered, and ii) by 2-3% when model snapshots from both hyperparameter settings are considered. The latter would correspond to N = 2 in Section 4.3.

In order to additionally validate our SGDR on a larger dataset, we constructed a downsampled version of the ImageNet dataset [P. Chrabaszcz, I. Loshchilov and F. Hutter. A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets, in preparation]. In contrast to earlier attempts (Pouransari & Ghili, 2015), our downsampled ImageNet contains exactly the same images from 1000 classes as the original ImageNet, but resized with box downsampling to 32x32 pixels. Thus, this dataset is substantially harder than the original ImageNet dataset because the average number of pixels per image is now two orders of magnitude smaller. The new dataset is also more difficult than the CIFAR datasets because more classes are used and the relevant objects to be classified often cover only a tiny subspace of the image, and not most of the image as in the CIFAR datasets.
Figure 4: (Top) Improvements obtained by the baseline learning rate schedule and SGDR w.r.t. the best known reference classification error on a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects with roughly 1000 trials per subject. Both considered approaches were tested with the initial learning rate lr = 0.025 (Top-Left) and lr = 0.05 (Top-Right). Note that the baseline approach is considered with different settings of the total number of epochs: 30, 60, ..., 480. (Bottom) SGDR with lr = 0.025 and lr = 0.05, without and with M model snapshots taken at the last M = nr/2 restarts, where nr is the total number of restarts.

We benchmarked SGD with momentum with the default learning rate schedule, SGDR with T_0 = 1, T_mult = 2 and SGDR with T_0 = 10, T_mult = 2 on WRN-28-10, all trained with 4 settings of the initial learning rate η_max: 0.050, 0.025, 0.01 and 0.005. We used the same data augmentation procedure as for the CIFAR datasets. Similarly to the results on the CIFAR datasets, Figure 5 shows that SGDR demonstrates better anytime performance. SGDR with T_0 = 10, T_mult = 2, η_max = 0.01 achieves a top-1 error of 39.24% and a top-5 error of 17.17%, matching the original results by AlexNets (40.7% and 18.2%, respectively) obtained on the original ImageNet with full-size images of ca. 50 times more pixels per image (Krizhevsky et al., 2012b). Interestingly, when the dataset is permuted only within 10 subgroups each formed from 100 classes, SGDR also demonstrates better results (see Figure 8 in the Supplementary Material). An interpretation of this might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0.

Clearly, longer runs (more than the 40 epochs considered in this preliminary experiment) and hyperparameter tuning of learning rates, regularization and other hyperparameters shall further improve the results.

Figure 5: Top-1 and Top-5 test errors obtained by SGD with momentum with the default learning rate schedule, SGDR with T_0 = 1, T_mult = 2 and SGDR with T_0 = 10, T_mult = 2 on WRN-28-10 trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32x32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Four settings of the initial learning rate are considered: 0.050, 0.025, 0.01 and 0.005."}, {"section_index": "9", "section_name": "5 DISCUSSION", "section_text": "Our results suggest that even without any restarts the proposed aggressive learning rate schedule given by eq. (5) is competitive w.r.t. the default schedule when training WRNs on the CIFAR-10 (e.g., for T_0 = 200, T_mult = 1) and CIFAR-100 datasets.
In practice, the proposed schedule requires only two hyper-parameters to be defined: the initial learning rate and the total number of epochs.

We found that the anytime performance of SGDR remains similar when shorter epochs are considered (see section 8.1 in the Supplementary Material).

One should not suppose that the parameter values used in this study and many other works with (Residual) Neural Networks are selected to demonstrate the fastest decrease of the training error. Instead, the best validation or / and test errors are in focus. Notably, the validation error is rarely used when training Residual Neural Networks because the recommendation is defined by the final solution (in our approach, the final solution of each run). One could use the validation error to determine the optimal initial learning rate and then run on the whole dataset; this could further improve results.

The main purpose of our proposed warm restart scheme for SGD is to improve its anytime performance. While we mentioned that restarts can be useful to deal with multimodal functions, we do not claim that we observe any effect related to multimodality.

As we noted earlier, one could decrease η_max and η_min at every new warm restart to control the amount of divergence. If new restarts are worse than the old ones w.r.t. validation error, then one might also consider going back to the last best solution and performing a new restart with adjusted hyperparameters.

Our results reproduce the finding by Huang et al. (2016a) that intermediate models generated by SGDR can be used to build efficient ensembles at no cost. This finding makes SGDR especially attractive for scenarios when ensemble building is considered.

In this paper, we investigated a simple warm restart mechanism for SGD to accelerate the training of DNNs. Our SGDR simulates warm restarts by scheduling the learning rate to achieve competitive results on CIFAR-10 and CIFAR-100 roughly two to four times faster. We also achieved new state-of-the-art results with SGDR, mainly by using even wider WRNs and ensembles of snapshots from SGDR's trajectory. Future empirical studies should also consider the SVHN, ImageNet and MS COCO datasets, for which Residual Neural Networks showed the best results so far. Our preliminary results on a dataset of EEG recordings suggest that SGDR delivers better and better results as we carry out more restarts and use more model snapshots. The results on our downsampled ImageNet dataset suggest that SGDR might also reduce the problem of learning rate selection because the annealing and restarts of SGDR scan / consider a range of learning rate values. Future work should consider warm restarts for other popular training algorithms such as AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014).

Alternative network structures should also be considered; e.g., soon after our initial arXiv report (Loshchilov & Hutter, 2016), Zhang et al. (2016); Huang et al. (2016b); Han et al. (2016) reported that WRN models can be replaced by more memory-efficient models. Thus, it should be tested whether our results for individual models and ensembles can be further improved by using their networks instead of WRNs. Deep compression methods (Han et al., 2015) can be used to reduce the time and memory costs of DNNs and their ensembles.

This work was supported by the German Research Foundation (DFG), under the BrainLinks-BrainTools Cluster of Excellence (grant number EXC 1086).
We thank Gao Huang, Kilian Quirin Weinberger, Jost Tobias Springenberg, Mark Schmidt and three anonymous reviewers for their helpful comments and suggestions."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The loss surface of multilayer networks. arXiv preprint arXiv:1412.0233, 2014.

Yann N Dauphin, Harm de Vries, Junyoung Chung, and Yoshua Bengio. RMSprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.

L. Deng, G. Hinton, and B. Kingsbury. New types of deep neural network learning for speech recognition and related applications: An overview. In Proc. of ICASSP'13, 2013.

J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proc. of ICML'14, 2014.

Reeves Fletcher and Colin M Reeves. Function minimization by conjugate gradients. The Computer Journal, 7(2):149-154, 1964.

Kenji Fukumizu and Shun-ichi Amari. Local minima and plateaus in hierarchical structures of multilayer perceptrons. Neural Networks, 13(3):317-327, 2000.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.

Nikolaus Hansen. Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2389-2396. ACM, 2009.

Antoine Bordes, Leon Bottou, and Patrick Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. The Journal of Machine Learning Research, 10:1737-1754, 2009.

Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger. Snapshot ensembles: Train 1, get M for free. ICLR 2017 submission, 2016a.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503-528, 1989.

Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pp. 372-376, 1983.

Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.

Brendan O'Donoghue and Emmanuel Candes. Adaptive restart for accelerated gradient schemes. arXiv preprint arXiv:1204.3982, 2012.

Hadi Pouransari and Saman Ghili. Tiny ImageNet visual recognition challenge. CS231 course at Stanford, 2015.

Mike Preuss. Niching the CMA-ES via nearest-better clustering. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, pp. 1711-1718. ACM, 2010.

Mike Preuss. Niching methods and multimodal optimization performance.
In Multimodal Optimization by Means of Evolutionary Algorithms, pp. 115-137. Springer, 2015.

Raymond Ros. Benchmarking the BFGS algorithm on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2409-2414. ACM, 2009.

Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016b.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Leslie N Smith. No more pesky learning rate guessing games. arXiv preprint arXiv:1506.01186, 2015.

Tianbao Yang and Qihang Lin. Stochastic subgradient methods with linear convergence for polyhedral convex optimization. arXiv preprint arXiv:1510.01444, 2015.

Figure 6: The median results of 5 runs for the best learning rate settings considered for WRN-28-1."}, {"section_index": "11", "section_name": "8.1 50K VS 100K EXAMPLES PER EPOCH", "section_text": "Our data augmentation procedure code is inherited from the Lasagne Recipe code for ResNets, where flipped images are added to the training set. This doubles the number of training examples per epoch and thus might impact the results, because hyperparameter values defined as a function of the epoch index have a different meaning. While our experimental results given in Table 1 reproduced the results obtained by Zagoruyko & Komodakis (2016), here we test whether SGDR still makes sense for WRN-28-1 (i.e., a ResNet with 28 layers) where one epoch corresponds to 50k training examples. We investigate different learning rate values for the default learning rate schedule (4 values out of [0.01, 0.025, 0.05, 0.1]) and SGDR (3 values out of [0.025, 0.05, 0.1]). In line with the results given in the main paper, Figure 6 suggests that SGDR is competitive in terms of anytime performance.

Figure 7: Training cross-entropy + regularization loss (top row), test loss (middle row) and test error (bottom row) on CIFAR-10 (left column) and CIFAR-100 (right column).

Figure 8: Top-5 test errors obtained by SGD with momentum with the default learning rate schedule and SGDR with T_0 = 1, T_mult = 2 on WRN-28-10 trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32x32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Three settings of the initial learning rate are considered: 0.050, 0.015 and 0.005. In contrast to the experiments described in the main paper, here the dataset is permuted only within 10 subgroups each formed from 100 classes, which makes good generalization much harder to achieve for both algorithms.
An interpretation of the SGDR results given here might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0."}]
Sk2iistgg
[{"section_index": "0", "section_name": "NON-LINEAR DIMENSIONALITY REGULARIZER FOR SOLVING INVERSE PROBLEMS", "section_text": "Ravi Garg\nUniversity of Adelaide\nravi.garg@adelaide.edu.au\nian.reid@adelaide.edu.au\nConsider an ill-posed inverse problem of estimating causal factors from observa. tions, one of which is known to lie near some (unknown) low-dimensional, non linear manifold expressed by a predefined Mercer-kernel. Solving this problem re-. quires simultaneous estimation of these factors and learning the low-dimensional. representation for them. In this work, we introduce a novel non-linear dimension- ality regularization technique for solving such problems without pre-training.. We re-formulate Kernel-PCA as an energy minimization problem in which low. dimensionality constraints are introduced as regularization terms in the energy. To the best of our knowledge, ours is the first attempt to create a dimensionality. regularizer in the KPCA framework. Our approach relies on robustly penalizing. the rank of the recovered factors directly in the implicit feature space to create their low-dimensional approximations in closed form.. Our approach performs robust KPCA in the presence of missing data and noise. We demonstrate state-of-the-art results on predicting missing entries in the stan-. dard oil flow dataset. Additionally, we evaluate our method on the challenging. problem of Non-Rigid Structure from Motion and our approach delivers promis-. ing results on CMU mocap dataset despite the presence of significant occlusions\nOur approach performs robust KPCA in the presence of missing data and noise We demonstrate state-of-the-art results on predicting missing entries in the stan- dard oil flow dataset. Additionally, we evaluate our method on the challenging problem of Non-Rigid Structure from Motion and our approach delivers promis- ing results on CMU mocap dataset despite the presence of significant occlusions and noise."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Dimensionality reduction techniques are widely used in data modeling, visualization and unsuper vised learning. Principal component analysis (PCAJolliffe(2002)), Kernel PCA (KPCAScholkopf et al.(1998)) and Latent Variable Models (LVMsLawrence(2005)) are some of the well known techniques used to create low dimensional representations of the given data while preserving its significant information.\nOne key deployment of low-dimensional modeling occurs in solving ill-posed inference problems Assuming the valid solutions to the problem lie near a low-dimensional manifold (i.e. can be parametrized with a reduced set of variables) allows for a tractable inference for otherwise under constrained problems. After the seminal work of |Candes & Recht](2009); Recht et al.(2010) or guaranteed rank minimization of the matrix via trace norm heuristics Fazel (2002), many ill-posec computer vision problems have been tackled by using the trace norm - a convex surrogate of the rank function - as a regularization term in an energy minimization frameworkCandes & Rech (2009);Zhou et al.(2014). The flexible and easy integration of low-rank priors is one of key factors for versatility and success of many algorithms. 
For example, pre-trained active appearance models Cootes et al.(2001) or 3D morphable models Blanz & Vetter (1999) are converted to robust featur trackingPoling et al.(2014), dense registration[Garg et al.(2013b) and vivid reconstructions of natu ral videos Garg et al.(2013a) with no a priori knowledge of the scene. Various bilinear factorizatior problems like background modeling, structure from motion or photometric stereo are also addressec with a variational formulation of the trace norm regularization Cabral et al. (2013).\nAnders Eriksson\nQueensland University of Technology. anders.eriksson@qut.edu.au\nQueensland University of Technology\nanders.eriksson@qut.edu.au"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "On the other hand, although many non-linear dimensionality reduction techniques - in particulai KPCA - have been shown to outperform their linear counterparts for many data modeling tasks they are seldom used to solve inverse problems without using a training phase. A general (discrim- inative) framework for using non-linear dimensionality reduction is: (i) learn a low-dimensional representation for the data using training examples via the kernel trick (ii) project the test exam ples on the learned manifold and finally (iii) find a data point (pre-image) corresponding to each projection in the input space.\nThis setup has two major disadvantages. Firstly, many problems of interest come with corrupted observations - noise, missing data and outliers - which violate the low-dimensional modeling assumption.Secondly, computing the pre-image of any point in the low dimensional feature subspace is non-trivial: the pre-image for many points in the low dimensional space might not even exis because the non linear feature mapping function used for mapping the data from input space to the feature space is non-surjective\nGenerative models like LVMs Lawrence (2005) are often used for inference by searching the low. dimensional latent space for a location which maximizes the likelihood of the observations. Prob- lems like segmentation, tracking and semantic 3D reconstruction|Prisacariu & Reid(2011); Dame et al.(2013) greatly benefit from using LVM. However, the latent space is learned a priori with clean training data in all these approaches.\nAlmost all non-linear dimensionality reduction techniques are non-trivial to generalize for solving ill-posed problems (See section4.2) without a pre-training stage. Badly under-constrained problems require the low-dimensional constraints even for finding an initial solution, eliminating applicability of the standard \"projection + pre-image estimation\"' paradigm. This hinders the utility of non- linear dimensionality reduction and a suitable regularization technique to penalize the non-linear dimensionality is desirable.\nSum and Substance: A closer look at most non-linear dimensionality reduction techniques reveals that they rely upon a non-linear map- ping function which maps the data from in- put space to a (usually) higher dimensional fea- ture space. In this feature space the data is as- sumed to lie on a low-dimensional hyperplane\nnon-linear dimensionality reduction techniques reveals that they rely upon a non-linear map- ping function which maps the data from in- put space to a (usually) higher dimensional fea- ture space. In this feature space the data is as- sumed to lie on a low-dimensional hyperplane thus, linear low-rank prior is apt in the fea ture space. 
Armed with this simple observation, our aim is to focus on incorporating the advances made in linear dimensionality reduction techniques into their non-linear counterparts while addressing the problems described above. Figure 1 explains this central idea and the proposed dimensionality regularizer in a nutshell.

Our Contribution: In this work we propose a unified framework for simultaneous robust KPCA and pre-image estimation while solving an ill-posed inference problem without a pre-training stage.

In particular, we propose a novel robust energy minimization algorithm which handles the implicitness of the feature space to directly penalize its rank by iteratively: (i) creating a robust low-dimensional representation for the data given the kernel matrix in closed form, and (ii) reconstructing the noise-free version of the data (the pre-image of the feature space projections) using the estimated low-dimensional representations, in a unified framework.

Figure 1: Non-linear dimensionality regularizer for NRSfM. The top part of the figure explains the ill-posed inverse problem of recovering the causal factors (1) - the projection matrices R_i and 3D structures S_i - from 2D image observations (2), the W_i's, by minimizing the image reprojection error f(W, R, S) = \sum_i ||W_i - R_i S_i||^2. Assuming that the recovered 3D structures (S_i's) lie near an unknown non-linear manifold (represented by the blue curve) in the input space, we propose to regularize the dimensionality of this manifold (3) - the span of the non-linearly transformed shape vectors \phi(S_i) - by minimizing ||\Phi(S)||_*. The non-linear transformation is defined implicitly with a Mercer kernel and maps the non-linear manifold to a linear low-rank subspace (shown as the blue line) of the RKHS.

The proposed algorithm: (i) provides a novel closed-form solution to robust KPCA; (ii) yields state-of-the-art results on missing data prediction for the well-known oil flow dataset; (iii) outperforms state-of-the-art linear dimensionality (rank) regularizers in solving NRSfM; and (iv) can be trivially generalized to incorporate other cost functions in an energy minimization framework to solve various ill-posed inference problems.

This paper focuses on solving a generic inverse problem of recovering causal factors S = [s_1, s_2, ..., s_N] \in \mathcal{X}^N from N observations W = [w_1, w_2, ..., w_N] \in \mathcal{V}^N such that f(W, S) = 0. Here the function f(observation, variable) is a generic loss function which aligns the observations W with the variable S (possibly via other causal factors, e.g. R or Z in Sections 4.1 and 4.2).

If f(W, S) = 0 is ill-conditioned (for example when dim(\mathcal{V}) < dim(\mathcal{X})), we want to recover the matrix S under the assumption that its columns lie near a low-dimensional non-linear manifold. This can be done by solving a constrained optimization problem of the following form:

\min_S \; \mathrm{rank}(\Phi(S)) \quad \text{s.t.} \quad f(W, S) \le \epsilon \qquad (1)
where \Phi(S) = [\phi(s_1), \phi(s_2), ..., \phi(s_N)] \in \mathcal{H}^N is the non-linear mapping of the matrix S from the input space \mathcal{X} to the feature space \mathcal{H} (also commonly referred to as the Reproducing Kernel Hilbert Space), via a non-linear mapping function \phi : \mathcal{X} \to \mathcal{H} associated with a Mercer kernel K such that K(S)_{i,j} = \phi(s_i)^T \phi(s_j).

In this paper we present a novel energy minimization framework to solve problems of the general form (1).

As our first contribution, we relax the problem (1) by using the trace norm of \Phi(S) - the convex surrogate of the rank function - as a penalization function. The trace norm ||M||_* := \sum_i \lambda_i(M) of a matrix M is the sum of its singular values \lambda_i(M) and was proposed as a tight convex relaxation^1 of rank(M); it is used in many vision problems as a rank regularizer Fazel (2002). Although rank minimization via trace norm relaxation does not lead to a convex problem in the presence of a non-linear kernel function, we show in Section 3.2 that it leads to a closed-form solution for denoising a kernel matrix via penalizing the rank of the recovered data \Phi(S) directly in the feature space.

With these changes we can rewrite (1) as:

\min_S \; f(W, S) + \tau ||\Phi(S)||_* \qquad (2)

where \tau is a regularization strength.^2

It is important to notice that although the rank of the kernel matrix K(S) is equal to the rank of \Phi(S), ||K(S)||_* is merely ||\Phi(S)||_F^2. Thus, directly penalizing the sum of the singular values of K(S) will not encourage low rank in the feature space.^3

Although we have relaxed the non-convex rank function, (2) is in general difficult to minimize due to the implicitness of the feature space. Most widely used kernel functions, like the RBF, do not have an explicit definition of the function \phi. Moreover, the feature space for many kernels is high (possibly infinite-) dimensional, leading to intractability. These issues are identified as the main barriers to robust KPCA and pre-image estimation Nguyen & De la Torre (2009). Thus, we have to reformulate (2) by applying the kernel trick, so that the cost function (2) can be expressed in terms of the kernel function alone.

The key insight here is that under the assumption that the kernel matrix K(S) is positive semidefinite, we can factorize it as K(S) = C^T C. Although this factorization is non-unique, it is trivial to show the following:

\lambda_i(K(S)) = \lambda_i^2(C) = \lambda_i^2(\Phi(S)), \quad \text{hence} \quad ||C||_* = ||\Phi(S)||_* \quad \forall \; C : C^T C = K(S) \qquad (3)

where \lambda_i(.) is the function mapping the input matrix to its i-th largest eigenvalue. This lets us rewrite (2) as:

\min_{S, C} \; f(W, S) + \tau ||C||_* \quad \text{s.t.} \quad K(S) = C^T C \qquad (4)

The above minimization can be solved with a soft relaxation of the manifold constraint, by assuming that the columns of S lie near the non-linear manifold:

\min_{S, C} \; f(W, S) + \frac{\rho}{2} ||K(S) - C^T C||_F^2 + \tau ||C||_* \qquad (5)

As \rho \to \infty, the optimum of (5) approaches the optimum of (4). A local optimum of (4) can be achieved using the penalty method of Nocedal & Wright (2006) by optimizing (5) while iteratively increasing \rho, as explained in Section 3.

^1 More precisely, ||M||_* was shown to be the tight convex envelope of rank(M) on the set {M : ||M||_s <= 1}, where ||M||_s represents the spectral norm of M.
^2 1/\tau can also be viewed as a Lagrange multiplier for the constraints in (1).
^3 Although it is clear that relaxing the rank of the kernel matrix to ||K(S)||_* is suboptimal, works like Huang et al. (2012); Cabral et al. (2013), with a variational definition of the nuclear norm, allude to the possibility of kernelization. Further investigation is required to compare this counterpart to our tighter relaxation.
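To make the relaxation concrete, the following is a minimal numerical sketch of the objective in (5), assuming an RBF kernel and a simple least-squares data term f(W, S) = ||W - S||_F^2. The function and variable names are illustrative choices, not part of any released implementation; Algorithm 1 below never minimizes this objective jointly but alternates between S and C.

```python
# Sketch of the relaxed objective (5) for an RBF kernel and a least-squares data term.
import numpy as np

def rbf_kernel(S, gamma):
    """K(S)_{ij} = exp(-gamma * ||s_i - s_j||^2) for columns s_i of S."""
    sq = np.sum(S * S, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * S.T @ S
    return np.exp(-gamma * np.maximum(d2, 0.0))

def objective(W, S, C, gamma, rho, tau):
    data_term = np.sum((W - S) ** 2)                  # f(W, S)
    K = rbf_kernel(S, gamma)
    penalty = 0.5 * rho * np.sum((K - C.T @ C) ** 2)  # soft manifold constraint
    trace_norm = tau * np.linalg.norm(C, ord='nuc')   # tau * ||C||_*
    return data_term + penalty + trace_norm
```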
Before moving on, we would like to discuss some alternative interpretations of (5) and its relationship to previous work - in particular LVMs. Intuitively, we can also interpret (5) from the probabilistic viewpoint commonly used in latent variable model based approaches to define the kernel function Lawrence (2005). For example, an RBF kernel with additive Gaussian noise and inverse width \gamma can be defined as: K(S)_{i,j} = e^{-\gamma ||s_i - s_j||^2} + \epsilon, where \epsilon ~ N(0, \sigma). In other words, with a finite \rho, our model allows the data points to lie near a non-linear low-rank manifold instead of on it. It is worth noting here that, like LVMs, our energy formulation also attempts to maximize the likelihood of regenerating the training data W (by choosing f(W, S) to be a simple least-squares cost) while doing dimensionality reduction.

Note that in the closely related work Geiger et al. (2009), continuous rank penalization (with a logarithmic prior) has also been used for robust probabilistic non-linear dimensionality reduction and model selection in an LVM framework. However, unlike Geiger et al. (2009); Lawrence (2005), where the non-linearities are modeled in a latent space (of predefined dimensionality), our approach directly penalizes the non-linear dimensionality of the data in a KPCA framework and is applicable to solving inverse problems without pre-training.

We approach the optimization of (5) by solving the following two sub-problems in alternation:

\min_S \; f(W, S) + \frac{\rho}{2} ||K(S) - C^T C||_F^2 \qquad (6)

\min_C \; \tau ||C||_* + \frac{\rho}{2} ||K(S) - C^T C||_F^2 \qquad (7)

Algorithm 1 outlines the approach, and we give a detailed description and interpretations of both sub-problems (7) and (6) in the next two sections of the paper.

Algorithm 1: Inference with the Proposed Regularizer
Input: Initial estimate S_0 of S.
Output: Low-dimensional S and kernel representation C.
Parameters: Initial penalty \rho_0 and maximum penalty \rho_max, with scale \rho_s.
- S = S_0, \rho = \rho_0;
- while \rho <= \rho_max do
-   while not converged do
-     Fix S and estimate C via the closed-form solution of (7) using Algorithm 2;
-     Fix C and minimize (6) to update S using the LM algorithm;
-   \rho = \rho \rho_s;

Subproblem (6) can be seen as a generalized pre-image estimation problem: we seek the factor s_i which is the pre-image of the projection of \phi(s_i) onto the principal subspace of the RKHS stored in C^T C, which best explains the observation w_i. Here (6) is generally a non-convex problem, unless the Mercer kernel is linear, and must therefore be solved using non-linear optimization techniques. In this work, we use the Levenberg-Marquardt algorithm for optimizing (6).

Notice that (6) only computes the pre-image for the feature space projections of the data points with which the non-linear manifold (matrix C) is learned. An extension to our formulation is desirable if one wants to use the learned non-linear manifold for denoising test data in a classic pre-image estimation framework. Although a valuable direction to pursue, it is out of the scope of the present paper.

One can interpret sub-problem (7) as a robust form of KPCA where the kernel matrix has been corrupted with Gaussian noise and we want to generate its low-rank approximation. Although (7) is non-convex, we can solve it in closed form via singular value decomposition. This closed-form solution is outlined in Algorithm 2 and is based on the following theorem:

Theorem 1. With S^n_+ \ni A \succeq 0, let A = U \Sigma U^T denote its singular value decomposition, with \sigma_i the diagonal entries of \Sigma. Then

\min_L \; \frac{\rho}{2} ||A - L^T L||_F^2 + \tau ||L||_* \qquad (8)

= \sum_{i=1}^{n} \min_{\gamma_i \ge 0} \left[ \frac{\rho}{2} (\sigma_i - \gamma_i^2)^2 + \tau \gamma_i \right] \qquad (9)

and a minimizer of (8) is given by

L^* = \Gamma^* U^T, \quad \Gamma^* = \mathrm{diag}(\gamma_1^*, ..., \gamma_n^*) \qquad (10)

where \gamma_i^* solves the i-th scalar problem in (9).

Algorithm 2: Robust Dimensionality Reduction
Input: Current estimate of S.
Output: Low-dimensional representation C.
Parameters: Current \rho and regularization strength \tau.
- Compute the eigendecomposition K(S) = U \Sigma U^T;
- For each eigenvalue \sigma_i, collect the non-negative real roots of \gamma^3 - \sigma_i \gamma + \tau/(2\rho) together with 0, and set \gamma_i^* to the candidate minimizing (9);
- Return C = \Gamma^* U^T.

Theorem 1 shows that each eigenvalue of the minimizer C^* of (7) can be obtained by solving a depressed cubic whose coefficients are determined by the corresponding eigenvalue of the kernel matrix and the regularization strength \tau. The roots of each cubic, together with zero, comprise a set of candidates for the corresponding eigenvalue of C^*. The best one from this set is obtained by choosing the value which minimizes (9) (see Algorithm 2).
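Under the assumptions above, a compact sketch of the closed-form computation that Algorithm 2 performs is given below: eigendecompose the current kernel matrix, solve the depressed cubic \gamma^3 - \sigma_i \gamma + \tau/(2\rho) = 0 for each eigenvalue, and keep the candidate (including zero) that minimizes the scalar objective in (9).

```python
# Sketch of the closed-form solution of subproblem (7) (Algorithm 2 / Theorem 1).
import numpy as np

def robust_dimensionality_reduction(K, rho, tau):
    sigma, U = np.linalg.eigh(K)                 # K = U diag(sigma) U^T, K assumed PSD
    gamma_star = np.zeros_like(sigma)
    for i, s in enumerate(sigma):
        # Stationary points: roots of gamma^3 - s*gamma + tau/(2*rho) = 0.
        roots = np.roots([1.0, 0.0, -s, tau / (2.0 * rho)])
        cands = [0.0] + [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
        cost = lambda g: 0.5 * rho * (s - g * g) ** 2 + tau * g   # scalar problem (9)
        gamma_star[i] = min(cands, key=cost)
    return np.diag(gamma_star) @ U.T             # C* = Gamma* U^T, so C*^T C* denoises K
```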
As elaborated in Section 2, problem (7) can be seen as regularizing the sum of the square roots (the L_{1/2} norm) of the eigenvalues of the matrix K(S). In closely related work, Zongben et al. (2012), the authors advocate the L_{1/2} norm as a better approximation to the cardinality of a vector than the more commonly used L_1 norm. A closed-form solution for L_{1/2} regularization similar to ours was outlined in Zongben et al. (2012) and shown to outperform L_1 vector norm regularization for sparse coding. To that end, our Theorem 1 and the proposed closed-form solution (Algorithm 2) for (7) can be seen as a generalization of Zongben et al. (2012) to L_{1/2} matrix norms, for which a simplified proof is included in Appendix A. It is important to note, however, that the motivation and implication of using L_{1/2} regularization in the context of non-linear dimensionality reduction are significantly different from those of Zongben et al. (2012) and related work Du et al. (2013); Zha et al. (2014), which are designed for linear modeling of the causal factors. The core insight - applying L_1 regularization in the feature space via the parametrization given in Section 3 - facilitates a natural way of modeling causal factors non-linearly, with low dimensionality, while solving an inverse problem, by making the feature space tractable.

Table 1: Performance comparison on missing data completion on the oil flow dataset. Row 1 shows the amount of missing data, and subsequent rows show the mean and standard deviation of the error in the recovered data matrix over 50 runs on 100 samples of the oil flow dataset by: (1) the mean method (also the initialization of the other methods), where the missing entries are replaced by the mean of the known values of the corresponding attributes; (2) the 1-nearest-neighbor method, in which missing entries are filled by the values of the nearest point; (3) PPCA Tipping & Bishop (1999); (4) PKPCA of Sanguinetti & Lawrence (2006); (5) RKPCA Nguyen & De la Torre (2009); and our method."}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "In this section we demonstrate the utility of the proposed algorithm. The aims of our experiments are twofold: (i) to compare our dimensionality reduction technique favorably with KPCA and its robust variants; and (ii) to demonstrate that the proposed non-linear dimensionality regularizer consistently outperforms its linear counterpart (a.k.a. the nuclear norm) in solving inverse problems.
"}, {"section_index": "4", "section_name": "4.1 MATRIX COMPLETION", "section_text": "The nuclear norm was introduced as a low-rank prior originally for solving the matrix completion problem. Thus, it is natural to evaluate its non-linear extensions on the same task. Assuming W \in R^{m \times n} to be the input matrix and Z a binary matrix specifying the availability of the observations in W, Algorithm 1 can be used for recovering a complete matrix S with the following choice of f(W, Z, S):

f(W, Z, S) = ||Z \odot (W - S)||_F^2 \qquad (11)

where \odot represents the Hadamard product.

To demonstrate the robustness of our algorithm for the matrix completion problem, we choose 100 training samples from the oil flow dataset described in Section 3.2 and randomly remove elements from the data with a varying range of probabilities to test the performance of the proposed algorithm against various baselines. Following the experimental setup specified in Sanguinetti & Lawrence (2006), we repeat the experiments with 50 different samples of Z. We report the mean and standard deviation of the root mean square reconstruction error for our method with the choice of \tau = 0.1, alongside five different methods, in Table 1. Our method significantly improves the performance of missing data completion compared to other robust extensions of KPCA Tipping & Bishop (1999); Sanguinetti & Lawrence (2006); Nguyen & De la Torre (2009), for every probability of missing data.

Although we restrict our experiments to least-squares cost functions, it is vital to restate here that our framework could trivially incorporate robust functions like the L_1 norm instead of the Frobenius norm - as a robust data term f(W, Z, S) - to generalize algorithms like Robust PCA Wright et al. (2009) to their non-linear counterparts.
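A minimal sketch of the data term (11) and its gradient with respect to S - the piece that would be handed to the S-update of Algorithm 1 - is given below; it assumes nothing beyond NumPy.

```python
# Sketch of the masked matrix-completion data term (11) and its gradient in S.
import numpy as np

def completion_loss_and_grad(W, Z, S):
    R = Z * (W - S)           # Hadamard masking of the residual
    loss = np.sum(R ** 2)     # f(W, Z, S) = ||Z o (W - S)||_F^2
    grad_S = -2.0 * R         # d f / d S
    return loss, grad_S
```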
"}, {"section_index": "5", "section_name": "4.2 NON-RIGID STRUCTURE FROM MOTION", "section_text": "Non-rigid structure from motion (NRSfM) under orthography is an ill-posed problem where the goal is to estimate the camera locations and the 3D structure of a deformable object from a collection of 2D images which are labeled with landmark correspondences Bregler et al. (2000). Assuming s_i(x_j) \in R^3 to be the 3D location of point x_j on the deformable object in the i-th image, its orthographic projection w_i(x_j) \in R^2 can be written as w_i(x_j) = R_i s_i(x_j), where R_i \in R^{2 \times 3} is an orthographic projection matrix Bregler et al. (2000). Notice that as the object deforms, even with given camera poses, reconstructing the sequence by least-squares reprojection error minimization is an ill-posed problem. In their seminal work, Bregler et al. (2000) proposed to solve this problem with the additional assumption that the reconstructed shapes lie on a low-dimensional linear subspace and can be parameterized as linear combinations of a relatively low number of basis shapes. NRSfM was then cast as the low-rank factorization problem of estimating these basis shapes and corresponding coefficients.

Recent work like Dai et al. (2014); Garg et al. (2013a) has shown that the trace norm regularizer can be used as a convex envelope of the low-rank prior to robustly address the ill-posed nature of the problem. A good solution to NRSfM can be achieved by optimizing:

\min_{S, R} \; \tau ||S||_* + \sum_{i=1}^{F} \sum_{j=1}^{N} Z_i(x_j) ||w_i(x_j) - R_i s_i(x_j)||_2^2 \qquad (12)

where S is the shape matrix whose columns are 3N-dimensional vectors storing the 3D coordinates s_i(x_j) of the shapes, and Z_i(x_j) is a binary variable indicating whether the projection of point x_j is available in image i.

Assuming the projection matrices to be fixed, this problem is convex and can be solved exactly with standard convex optimization methods. Additionally, if the 2D projections w_i(x_j) are noise-free, optimizing (12) with a very small \tau corresponds to selecting - out of the many solutions - the one with (almost) zero projection error which has minimum trace norm Dai et al. (2014). Henceforth, optimization of (12) is referred to as the trace norm heuristics (TNH). We solve this problem with a first-order primal-dual variant of the algorithm given in Garg et al. (2013a), which can handle missing data. The algorithm is detailed and compared favorably with state-of-the-art NRSfM approaches (based on linear dimensionality regularization) in Appendix C.

A simple kernel extension of the above optimization problem is:

\min_{S, R} \; \tau ||\Phi(S)||_* + \underbrace{\sum_{i=1}^{F} \sum_{j=1}^{N} Z_i(x_j) ||w_i(x_j) - R_i s_i(x_j)||_2^2}_{f(W, Z, R, S)} \qquad (13)

where \Phi(S) is the non-linear mapping of S to the feature space using an RBF kernel.

With fixed projection matrices R, (13) is of the general form (2), for which the local optima can be found using Algorithm 1.

Figure 2: Non-linear dimensionality regularization improves NRSfM performance compared to its linear counterpart. The figure shows the ground truth 3D structures in red wire-frame, overlaid with the structures estimated using: (a) the proposed non-linear dimensionality regularizer, shown as blue dots, and (b) the corresponding linear dimensionality regularizer (TNH), shown as black crosses, for sample frames of CMU mocap sequences. Red circles represent the 3D points for which the projections were known, whereas squares denote missing 2D observations. See the text and Table 2 for details.
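For concreteness, a sketch of the masked reprojection error f(W, Z, R, S) used in (12)-(13) follows. The array layout (frames x coordinates x points) is an illustrative assumption, not the paper's storage format.

```python
# Sketch of the NRSfM data term: sum_{i,j} Z_i(x_j) ||w_i(x_j) - R_i s_i(x_j)||^2.
# Assumed shapes: R is (F, 2, 3), S is (F, 3, N), W is (F, 2, N), Z is (F, N).
import numpy as np

def reprojection_error(W, Z, R, S):
    proj = np.einsum('fij,fjn->fin', R, S)       # R_i s_i(x_j) for every frame and point
    resid = W - proj                             # 2D residuals
    return np.sum(Z[:, None, :] * resid ** 2)    # mask unobserved projections
```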
"}, {"section_index": "6", "section_name": "4.2.1 RESULTS ON THE CMU DATASET", "section_text": "In our experiments we use ground truth camera projection matrices to compare our algorithm against TNH. The advantage of this setup is that with ground-truth rotation and no noise, we can avoid model selection (finding the optimal regularization strength \tau) by setting it low enough. We run the TNH with \tau = 10^{-7} and use this reconstruction as initialization for Algorithm 1. For the proposed method, we set \tau = 10^{-4} and use the following RBF kernel width selection approach:

- Maximum distance criterion (d_max): we set the maximum distance in the feature space to be 3\sigma. Thus, the kernel matrix entry corresponding to the shape pair obtained by TNH with maximum Euclidean distance becomes e^{-9/2}.
- Median distance criterion (d_med): the kernel matrix entry corresponding to the median Euclidean distance is set to 0.5.

Following the standard protocol in Dai et al. (2014); Akhter et al. (2009), we quantify the reconstruction results with normalized mean 3D errors

e_{3D} = \frac{1}{\sigma F N} \sum_i \sum_j e_{ij}

where e_{ij} is the Euclidean distance of reconstructed point j in frame i from the ground truth, \sigma is the mean of the standard deviations of the 3 coordinates of the ground truth 3D structures, and F, N are the number of input images and the number of points reconstructed.

Table 2: 3D reconstruction errors for linear and non-linear dimensionality regularization with ground truth camera poses. Columns 1 and 4 give the error for TNH, while columns 2-3 and 5-6 give the corresponding errors for the proposed method with different widths of the RBF kernel. Row 5 reports the mean error over the 4 sequences.

                     No Missing Data                50% Missing Data
Dataset     Linear   Non-Linear        Linear   Non-Linear
                     d_max    d_med             d_max    d_med
Drink       0.0227   0.0114   0.0083   0.0313   0.0248   0.0229
Pickup      0.0487   0.0312   0.0279   0.0936   0.0709   0.0658
Yoga        0.0344   0.0257   0.0276   0.0828   0.0611   0.0612
Stretch     0.0418   0.0286   0.0271   0.0911   0.0694   0.0705
Mean        0.0369   0.0242   0.0227   0.0747   0.0565   0.0551

Table 2 shows the results of the TNH and non-linear dimensionality regularization based methods using the experimental setup explained above, both without missing data and after randomly removing 50% of the image measurements. Our method consistently beats the TNH baseline and improves the mean reconstruction error by ~40% with full data and by ~25% when used with 50% missing data. Figure 2 shows a qualitative comparison of the obtained 3D reconstructions using TNH and the proposed non-linear dimensionality regularization technique for sample frames from various sequences. We refer readers to Appendix B for results with simultaneous reconstruction and pose optimization.^4

^4 Since our main goal is to validate the usefulness of the proposed non-linear dimensionality regularizer, we opt for a reduced size dataset for more rapid and flexible evaluation."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "In this paper we have introduced a novel non-linear dimensionality regularizer which can be incorporated into an energy minimization framework while solving an inverse problem. The proposed algorithm for penalizing the rank of the data in the feature space has been shown to be robust to noise and missing observations. We have picked NRSfM as an application to substantiate our arguments and have shown that despite missing data and model noise (such as erroneous camera poses) our algorithm significantly outperforms state-of-the-art linear counterparts.

Although our algorithm currently uses slow solvers such as the penalty method and is not directly scalable to very large problems like dense non-rigid reconstruction, we are actively considering alternatives to overcome these limitations. An extension to estimate pre-images with a problem-specific cost is another promising direction.

Given the success of non-linear dimensionality reduction in modeling real data and the overwhelming use of linear dimensionality regularizers in solving real-world problems, we expect that the proposed non-linear dimensionality regularizer will be applicable to a wide variety of unsupervised inference problems: recommender systems, 3D reconstruction, denoising, shape prior based object segmentation, and tracking are all possible applications.
"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Ijaz Akhter, Yaser Sheikh, Sohaib Khan, and Takeo Kanade. Nonrigid structure from motion in trajectory space. In Advances in Neural Information Processing Systems, pp. 41-48, 2009.

Christoph Bregler, Aaron Hertzmann, and Henning Biermann. Recovering non-rigid 3d shape from image streams. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 690-696, 2000.

R. Cabral, F. De la Torre, J. P. Costeira, and A. Bernardino. Unifying nuclear norm and bilinear factorization approaches for low-rank matrix decomposition. In International Conference on Computer Vision (ICCV), 2013.

Emmanuel J Candes and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.

Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120-145, 2011.

Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):681-685, 2001.

Yuchao Dai, Hongdong Li, and Mingyi He. A simple prior-free method for non-rigid structure-from-motion factorization. International Journal of Computer Vision, 107(2):101-122, 2014.

Amaury Dame, Victor Adrian Prisacariu, Carl Yuheng Ren, and Ian Reid. Dense reconstruction using 3d object shape priors. In Computer Vision and Pattern Recognition, pp. 1288-1295. IEEE, 2013.

Maryam Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.

Ravi Garg, Anastasios Roussos, and Lourdes Agapito. Dense variational reconstruction of non-rigid surfaces from monocular video. In Computer Vision and Pattern Recognition, pp. 1272-1279, 2013a.

Andreas Geiger, Raquel Urtasun, and Trevor Darrell. Rank priors for continuous non-linear dimensionality reduction. In Computer Vision and Pattern Recognition, pp. 880-887. IEEE, 2009.

Paulo FU Gotardo and Aleix M Martinez. Kernel non-rigid structure from motion. In IEEE International Conference on Computer Vision, pp. 802-809, 2011a.

Ian Jolliffe. Principal Component Analysis. Wiley Online Library, 2002.

Neil D Lawrence. Probabilistic non-linear principal component analysis with gaussian process latent variable models. The Journal of Machine Learning Research, 6:1783-1816, 2005.

Sebastian Mika, Bernhard Scholkopf, Alex J Smola, Klaus-Robert Muller, Matthias Scholz, and Gunnar Ratsch. Kernel pca and de-noising in feature spaces. In NIPS, volume 4, pp. 7, 1998.

Minh Hoai Nguyen and Fernando De la Torre. Robust kernel principal component analysis. In Advances in Neural Information Processing Systems, 2009.

Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, New York, 2006.

Ralph Tyrrell Rockafellar. Conjugate Duality and Optimization, volume 14. SIAM, 1974.

Guido Sanguinetti and Neil D Lawrence. Missing data in kernel pca. In Machine Learning: ECML 2006, pp. 751-758. Springer, 2006.

Bernhard Scholkopf, Alexander Smola, and Klaus-Robert Muller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299-1319, 1998.

Michael E Tipping and Christopher M Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B, 61(3):611-622, 1999.

John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Advances in Neural Information Processing Systems, pp. 2080-2088, 2009.

Xiaowei Zhou, Can Yang, Hongyu Zhao, and Weichuan Yu. Low-rank modeling and its applications in image analysis. ACM Computing Surveys (CSUR), 47(2):36, 2014.

Xu Zongben, Chang Xiangyu, Xu Fengmin, and Zhang Hai. L1/2 regularization: a thresholding representation theory and a fast solver. IEEE Transactions on Neural Networks and Learning Systems, 23(7):1013-1027, 2012.
"}, {"section_index": "9", "section_name": "PROOF OF THEOREM 3.1", "section_text": "Proof. We will prove Theorem 1 by first establishing a lower bound for (8) and subsequently showing that this lower bound is attained at L^* given by (10). The rotational invariance of the entering norms allows us to write (8) as:

\min_{\Gamma \in D_n, \, W^T W = I} \; \frac{\rho}{2} ||U \Sigma U^T - W \Gamma^2 W^T||_F^2 + \tau ||\Gamma||_* \qquad (14)

= \min_{\Gamma, W} \; \frac{\rho}{2} \left[ \mathrm{tr}(\Sigma^2) - 2 \, \mathrm{tr}(W \Gamma^2 W^T U \Sigma U^T) + \mathrm{tr}(\Gamma^4) \right] + \tau \sum_{i=1}^{n} \gamma_i \qquad (15)

= \min_{\Gamma, W} \; \frac{\rho}{2} \left[ \sum_{i=1}^{n} \sigma_i^2 - 2 \sum_{i=1}^{n} \sum_{j=1}^{n} \tilde{w}_{ij}^2 \gamma_j^2 \sigma_i + \sum_{i=1}^{n} \gamma_i^4 \right] + \tau \sum_{i=1}^{n} \gamma_i \qquad (16)

\ge \min_{\gamma_i \ge 0} \; \frac{\rho}{2} \sum_{i=1}^{n} \left( \sigma_i^2 - 2 \gamma_i^2 \sigma_i + \gamma_i^4 \right) + \tau \sum_{i=1}^{n} \gamma_i \qquad (17)

= \sum_{i=1}^{n} \min_{\gamma_i \ge 0} \; \frac{\rho}{2} (\sigma_i - \gamma_i^2)^2 + \tau \gamma_i \qquad (18)

Here D_n denotes the set of n x n non-negative diagonal matrices, \Gamma = diag(\gamma_1, ..., \gamma_n) and \tilde{W} = U^T W. The inequality in (17) follows directly by applying Holder's inequality to (16) and using the property that the column vectors w_i are unitary.

Finally, since the subproblems in (18) are separable in \gamma_i, their minimizers must be KKT points of the individual subproblems. As the constraints are simple non-negativity constraints, these KKT points are either (positive) stationary points of the objective functions or 0. It is simple to verify that the stationary points are given by the roots of the cubic function P_{\sigma_i, \tau/2\rho}(\gamma) = \gamma^3 - \sigma_i \gamma + \frac{\tau}{2\rho}. Hence it follows that there exists a \Gamma^* such that

\frac{\rho}{2} ||A - L^{*T} L^*||_F^2 + \tau ||L^*||_* = \frac{\rho}{2} ||\Sigma - \Gamma^{*2}||_F^2 + \tau ||\Gamma^*||_* = \sum_{i=1}^{n} \left[ \frac{\rho}{2} (\sigma_i - \gamma_i^{*2})^2 + \tau \gamma_i^* \right]

with L^* = \Gamma^* U^T attaining the lower bound (18), which completes the proof.

Given the relaxations proposed in Section 2, our assertion that the novel trace regularization based non-linear dimensionality reduction is robust needs to be substantiated. To that end, we evaluate our closed-form solution of Algorithm 2 on the standard oil flow dataset introduced in Bishop & James (1993).

This dataset comprises 1000 training and 1000 testing data samples, each of which is 12-dimensional and categorized into one of three different classes. We add zero mean Gaussian noise with variance \sigma to the training data^5 and recover the low-dimensional manifold for this noisy training data S with KPCA, and contrast this with the results from Algorithm 2. An inverse width of the Gaussian kernel \gamma = 0.075 is used for all the experiments on the oil flow dataset.

It is important to note that in this experiment we only estimate the principal components (and their variances) that explain the estimated non-linear manifold, i.e. the matrix C by Algorithm 2, without reconstructing the denoised version of the corrupted data samples.

Both KPCA and our solution require model selection (choice of rank and \tau, respectively), which is beyond the scope of this paper. Here we resort to evaluating the performance of both methods under different parameter settings. To quantify the accuracy of the recovered manifold (C) we use the following criteria:

- Manifold Error: A good manifold should preserve the maximum variance of the data - i.e. it should be able to generate a denoised version K(S_est) = C^T C of the noisy kernel matrix K(S). We define the manifold estimation error as ||K(S_est) - K(S_GT)||_F^2, where K(S_GT) is the kernel matrix derived using noise-free data. Figure 3 shows the manifold estimation error for KPCA and our method for different rank and parameter \tau, respectively.^6
- Classification error: The accuracy of a non-linear manifold is often also tested by nearest neighbor classification accuracy. We select the estimated manifold which gives minimum Manifold Error for both methods and report the 1NN classification error (percentage of misclassified examples) of the 1000 test points by projecting them onto the estimated manifolds.

^5 Note that our formulation assumes Gaussian noise in K(S), whereas for this evaluation we add noise to S directly.
^6 Errors from the non-noisy kernel matrix can be replaced by cross-validating the entries of the kernel matrix for model selection in a more realistic experiment.
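A sketch of this evaluation protocol, reusing the rbf_kernel and robust_dimensionality_reduction sketches from earlier: corrupt the data, denoise the kernel matrix in closed form, and measure the Manifold Error defined above. The default parameter values below are illustrative, except \gamma = 0.075, which follows the text.

```python
# Sketch of the oil-flow robustness protocol: noise injection + closed-form denoising.
# Assumes rbf_kernel and robust_dimensionality_reduction from the earlier sketches.
import numpy as np

def manifold_error(S_clean, noise_std, gamma=0.075, rho=1.0, tau=0.1, seed=0):
    rng = np.random.default_rng(seed)
    S_noisy = S_clean + noise_std * rng.standard_normal(S_clean.shape)
    K_noisy = rbf_kernel(S_noisy, gamma)
    C = robust_dimensionality_reduction(K_noisy, rho, tau)
    K_est = C.T @ C                          # denoised kernel matrix K(S_est)
    K_gt = rbf_kernel(S_clean, gamma)
    return np.sum((K_est - K_gt) ** 2)       # ||K(S_est) - K(S_GT)||_F^2
```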
Table 3: Robust dimensionality reduction accuracy of KPCA versus our closed-form solution on the full oil flow dataset. Columns from left to right represent: (1) the standard deviation of the noise in the training samples; (2-3) the error in the estimated low-dimensional kernel matrix by (2) KPCA and (3) our closed-form solution; (4-5) the nearest neighbor classification error of test data using (4) KPCA and (5) our closed-form solution, respectively.

         Manifold Error         Classification Error
STD      KPCA      Our CFS      KPCA      Our CFS
0.2      0.1099    0.1068       9.60%     9.60%
0.3      0.2298    0.2184       19.90%    15.70%
0.4      0.3522    0.3339       40.10%    22.20%

Figure 3: Performance comparison between KPCA and our robust closed-form solution with dimensionality regularization on the oil flow dataset with additive Gaussian noise of standard deviation \sigma. Plots show the normalized kernel matrix errors for different ranks of the model. Kernel PCA results are shown as dotted lines with diamonds, while ours are solid lines with stars. Bar plots show the worst and the best errors obtained by our method for a single rank of the recovered kernel matrix."}, {"section_index": "10", "section_name": "B KERNEL NRSFM WITH CAMERA POSE ESTIMATION", "section_text": "Table 4 shows the reconstruction performance in a more realistic experimental setup, with the modification that the camera projection matrices are initialized with rigid factorization and refined together with the shapes by optimizing (2). To solve the NRSfM problem with unknown projection matrices, we parameterize each R_i with quaternions and alternate between refining the 3D shapes S and the projection matrices R using LM. The regularization strength \tau was selected for the TNH method by golden section search and parabolic interpolation for every test case independently. This ensures the best possible performance for the baseline. For our proposed approach, \tau was kept at 10^{-4} for all sequences, for both missing data and full data NRSfM. This experimental protocol somewhat disadvantages the non-linear method, since its performance could be further improved by a judicious choice of the regularization strength.
Table 4: 3D reconstruction errors for linear and non-linear dimensionality regularization with noisy camera pose initialization from rigid factorization, refined in alternation with the shape. The format is the same as Table 2.

                        No Missing Data                          50% Missing Data
             Linear       Non-Linear (\tau = 10^{-4})   Linear       Non-Linear (\tau = 10^{-4})
Dataset      \tau = \tau*   d_max    d_med              \tau = \tau*   d_max    d_med
Drink        0.0947       0.0926   0.0906             0.0957       0.0942   0.0937
Pickup       0.1282       0.1071   0.1059             0.1598       0.1354   0.1339
Yoga         0.2912       0.2683   0.2639             0.2821       0.2455   0.2457
Stretch      0.1094       0.1043   0.1031             0.1398       0.1459   0.1484
Mean         0.1559       0.1430   0.1409             0.1694       0.1552   0.1554

As suggested by Dai et al. (2014), robust camera pose initialization is beneficial for the structure estimation. We have used rigid factorization for initializing the camera poses here, but this can be trivially changed. We hope that further improvements can be made by choosing better kernel functions, with cross-validation based model selection (the value of \tau), and with a more appropriate tuning of the kernel width. Selecting a suitable kernel and its parameters is crucial for the success of kernelized algorithms, and it becomes more challenging when no training data are available. We hope to explore other kernel functions and parameter selection criteria in future work.

However, our purpose is primarily to show that the non-linear method adds value even without time-consuming per-sequence tuning. To that end, note that despite large errors in the camera pose estimates from TNH and 50% missing measurements, the proposed method shows significant (~10%) improvements in terms of reconstruction errors, supporting our broader claims that non-linear representations are better suited for modeling real data, and that our robust dimensionality regularizer can improve inference for ill-posed problems.

We would also like to contrast our work with Gotardo & Martinez (2011a), which is the only work we are aware of where non-linear dimensionality reduction is attempted for NRSfM. While estimating shapes lying on a two-dimensional non-linear manifold, Gotardo & Martinez (2011a) additionally assumes smooth 3D trajectories (parametrized with a low frequency DCT basis) and a pre-defined hard linear rank constraint on the 3D shapes. The method relies on a sparse approximation of the kernel matrix as a proxy for dimensionality reduction. The reported results were hard to replicate under our experimental setup for a fair comparison due to non-smooth deformations. However, in contrast to Gotardo & Martinez (2011a), our algorithm is applicable in a more general setup, can be modified to incorporate smoothness priors and robust data terms and, more importantly, is flexible enough to integrate with a wide range of energy minimization formulations, leading to a larger applicability beyond NRSfM.

In Section 4.2 we have compared the proposed non-linear dimensionality reduction prior against a variant of Garg et al. (2013a) which handles missing data by optimizing:

\min_{S, R} \; \tau ||S||_* + \sum_{i=1}^{F} \sum_{j=1}^{N} Z_i(x_j) ||w_i(x_j) - R_i s_i(x_j)||^2 \qquad (21)

which can be written in saddle-point form as

\min_{S, R} \max_{Q} \; \tau \langle S, Q \rangle + \sum_{i=1}^{F} \sum_{j=1}^{N} Z_i(x_j) ||w_i(x_j) - R_i s_i(x_j)||^2 \quad \text{s.t.} \quad ||Q||_s \le 1 \qquad (22)

where Q, of the same dimensions as S, stores the dual variables to S, and ||.||_s represents the spectral norm (largest singular value) of a matrix.

For more details on the primal-dual formulation and the dual norm of the trace norm, see Rockafellar (1974); Recht et al. (2010); Chambolle & Pock (2011).

Algorithm 3: Trace Norm Heuristics
// Set the iteration count, step size and duals:
- n = 0; \sigma = 1/\tau; Q^0 = 0;
- repeat:
-   Primal step (per point x_j in frame i):
      S^{n+1} = (I + \sigma (Z_{ij} R_i^T R_i))^{-1} (S^n - \sigma \tau Q^n + \sigma R_i^T (Z_{ij} \odot w_{ij}))
-   Dual step: take a gradient ascent step in Q and project Q onto the spectral-norm unit ball ||Q||_s <= 1;
-   n = n + 1;
- until convergence
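Trace-norm minimization schemes of this kind are typically built around the proximal operator of \tau||.||_* - soft-thresholding of singular values. The sketch below shows only that generic building block; it is not a transcription of the exact primal-dual updates of Algorithm 3.

```python
# Generic proximal operator of tau*||.||_*: singular value soft-thresholding.
import numpy as np

def prox_trace_norm(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```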
Table 5: 3D reconstruction errors for different NRSfM approaches and our TNH algorithm given ground truth camera projection matrices. Results for all the methods (except TNH) are taken from Dai et al. (2014).

Dataset    PTA (Akhter et al. 2009)   CSF2 (Gotardo & Martinez 2011b)   BMM (Dai et al. 2014)   TNH
Drink      0.0229                     0.0215                            0.0238                  0.0237
Pick-up    0.0992                     0.0814                            0.0497                  0.0482
Yoga       0.0580                     0.0371                            0.0334                  0.0333
Stretch    0.0822                     0.0442                            0.0456                  0.0431

As the main manuscript uses NRSfM only as a practical application of our non-linear dimensionality reduction prior, we have restricted our NRSfM experiments to comparing the proposed method against its linear counterpart. For timely evaluation, the reported experiments were conducted on a sub-sampled CMU mocap dataset. Here, we supplement the arguments presented in the main manuscript by favorably comparing the linear dimensionality reduction based NRSfM algorithm (TNH) to other NRSfM methods on full length CMU mocap sequences.

We choose quaternions to parametrize the 2 x 3 camera matrices R_i to satisfy the orthonormality constraints, as done in Garg et al. (2013a), and optimize the saddle point problem (22) using alternation. In particular, for a single iteration: (i) we optimize the camera poses R using LM, (ii) take a steepest descent step for updating S, and (iii) take a steepest ascent step for updating Q, which is followed by projecting its spectral norm onto the unit ball. Given ground truth camera matrices (i.e. without step (i)), the alternation (ii-iii) can be shown to reach the global minimum of (22). Algorithm 3 outlines the TNH algorithm."}]
HyET6tYex
[{"section_index": "0", "section_name": "UNIVERSALITY IN HALTING TIME", "section_text": "Levent Sagun
Mathematics Department
New York University
sagun@cims.nyu.edu

Thomas Trogdon
Mathematics Department
University of California, Irvine
ttrogdon@math.uci.edu

Yann LeCun
Computer Science Department
New York University

The authors present empirical distributions for the halting time (measured by the number of iterations to reach a given accuracy) of optimization algorithms applied to two random systems: spin glasses and deep learning. Given an algorithm, which we take to be both the optimization routine and the form of the random landscape, the fluctuations of the halting time follow a distribution that, after centering and scaling, remains unchanged even when the distribution on the landscape is changed. We observe two main classes: a Gumbel-like distribution that appears in Google searches, human decision times, QR factorization and spin glasses, and a Gaussian-like distribution that appears in the conjugate gradient method, deep networks with MNIST input data and deep networks with random input data. This empirical evidence suggests the presence of a class of distributions for which the halting time is independent of the underlying distribution under some conditions."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In this paper we discuss both the presence and application of universality in optimization algorithms. More precisely, in order to optimize an energy functional when the functional itself and the initial guess are random, we consider the following iterative algorithms: conjugate gradient for solving a linear system, gradient descent for spin glasses, and stochastic gradient descent for deep learning.

A bounded, piecewise differentiable random field (see Adler & Taylor (2009) for an account of the connection of random fields and geometry), where the randomness is non-degenerate, yields a landscape with many saddle points and local minima. Given such a landscape and a moving particle that takes steps to reach a low-energy level, an essential quantity is the time the particle takes until it stops, which we call the halting time. Many useful bounds on the halting time are known for convex cases, where the stopping condition produces a halting time that is, essentially, the time to find the minimum. In non-convex cases, however, the particle knows only the information that can be calculated locally. And a locally measurable stopping condition, such as the norm of the gradient at the present point, or the difference in altitude with respect to the previous step, can lead the algorithm to locate a local minimum. This feature allows the halting time to be calculated in a broad range of non-convex, high-dimensional problems. A prototypical example of such a random field is the class of polynomials with random coefficients. Spin glasses and deep learning cost functions are then special cases of such fields that yield different landscapes.
Polynomials with random coefficients are not only a broad class of functions, but they are also hard to study mathematically in any generality. Therefore, in order to capture essential features of such problems, we focus on subclasses that are well studied (spin glasses) and practically relevant (deep learning cost functions).

The halting time in such landscapes, when normalized to mean zero and variance one (subtracting the mean and dividing by the standard deviation), appears to follow a distribution that is independent of the input data; in other words, it follows a universal distribution: the fluctuations are universal. In statistical mechanics, the term \"universality\" is used to refer to a class of systems which, on a certain macroscopic scale, behave statistically the same while having different statistics on a microscopic scale. An example of such a law is the central limit theorem, which states that the sums of observations tend to follow the same distribution independent of the distribution of the individual observations, as long as the contribution from individual observations is reasonably small. It may fail to hold if the microscopic behavior is not independent, does not have a finite second moment, or if we consider something other than the sum. This work's focus is an attempt to put forward the cases where we see universality. But in this spirit, we also show a degenerate case in which the halting time fails to follow a universal law.

A rather surprising example of halting time universality appears in observed human decision times and Google query times. In Bakhtin & Correll (2012) the time it takes a person to make a decision in the presence of visual stimulus is shown to have universal fluctuations. The theoretically predicted curve in this experiment follows a Gumbel-like distribution. In addition, we randomly sampled words from two different dictionaries and submitted search queries. The time it takes Google to present the results was recorded. The normalized search times closely follow the same Gumbel-like curve.

In the cases we observe, we find two main universality classes: (1) a Gumbel-like distribution that appears in Google searches, human decision times, QR factorization and spin glasses, and (2) a Gaussian-like distribution that appears in the conjugate gradient algorithm and deep learning. To the best of our knowledge, our work, along with the accompanying references in this introduction, is the first to address the question of observing and classifying the distribution of the halting time."}, {"section_index": "4", "section_name": "1.1 DEFINITION OF UNIVERSALITY", "section_text": "Definition 1.1. An algorithm A consists of both a random cost function F(x, w), where x is a given random input, and an optimization routine that seeks to minimize F with respect to w.

To each algorithm we attach a precise \epsilon-dependent halting criterion. The halting time, which is a random variable, is the time it takes to meet this criterion. Within each algorithm there must be an intrinsic notion of dimension, which we denote by N. The halting time T_{\epsilon, N, A, E} depends on \epsilon, N, the choice of algorithm A, and the ensemble E (or probability distribution).
We use the empirical distribution of T_{\epsilon, N, A, E} to provide heuristics for understanding the qualitative performance of the algorithms.

The presence of universality in an algorithm is the observation that for sufficiently large N and \epsilon = \epsilon(N), the halting time random variable satisfies

\tau_{\epsilon, N, A, E} := \frac{T_{\epsilon, N, A, E} - E[T_{\epsilon, N, A, E}]}{\sqrt{\mathrm{Var}(T_{\epsilon, N, A, E})}} \approx \tau^* \qquad (1)

where \tau^* is a continuous random variable that depends only on the algorithm. The random variable \tau_{\epsilon, N, A, E} is referred to as the fluctuations, and when such an approximation appears to be valid we say that N and \epsilon (and any other external parameters) are in the scaling region. For example, in Section 1.2, A is the QR eigenvalue algorithm, N is the size of the matrix, \epsilon is a small tolerance, and E is given by a distribution on complex Hermitian (or real symmetric) matrices.

Figure 1: Search times of randomly selected words from two ensembles (English and Turkish dictionaries) are compared with the curve f_BC from Bakhtin & Correll (2012) that is estimated from the decision times in an experiment conducted on humans. It is evident that more observations have yet to be made in identifying the underlying principles of the algorithms that are increasingly part of our life.

Some remarks must be made.

- A statement like (1) is known to hold rigorously for a few algorithms (see Deift & Trogdon (2016; 2017)), but in practice it is verified experimentally. This was first done in Pfrang et al. (2014) and expanded in Deift et al. (2014) for a total of 8 different algorithms.
- The random variable \tau^* depends fundamentally on the functional form of F, and we only expect (1) to hold for a restricted class of ensembles E.
- T_{\epsilon, N, A, E} is an integer-valued random variable. For it to become a continuous distribution, a limit must be taken. This is the only reason N must be large - in practice, the approximation in (1) is seen even for small to moderate N.

To give some context, we discuss universality in the solution of the eigenvalue problem with the classical QR algorithm. Historically, this was first noticed in Pfrang et al. (2014). In this example, the fundamental object is the QR factorization (Q, R) = QR(A), where A = QR, Q is orthogonal (or unitary) and R is upper-triangular with positive diagonal entries. The QR algorithm applied to an N x N Hermitian matrix A is given by the iteration

A_0 := A, \quad (Q_j, R_j) := QR(A_j), \quad A_{j+1} := R_j Q_j

Generically, A_j \to D as j \to \infty, where D is a diagonal matrix whose diagonal entries are the eigenvalues of A. The halting time in Pfrang et al. (2014) was set to be the time of first deflation, T_{\epsilon, N, A, E}(A), defined as:

\min \{ j : \sqrt{N (N - k)} \, ||A_j(k + 1 : N, 1 : k)||_\infty < \epsilon \;\; \text{for some} \; 1 \le k \le N - 1 \}

Here ||A||_\infty refers to the maximum entry of a matrix A in absolute value, and the notation A(i : j, k : l) refers to the submatrix of A consisting of entries only in rows i, i + 1, ..., j and in columns k, k + 1, ..., l. Thus the halting time for the QR algorithm is the time at which at least one off-diagonal block is appropriately small.
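A sketch of this deflation-time computation, using the unshifted QR iteration as described above (a minimal illustration; production eigensolvers use shifts), follows:

```python
# Sketch of the QR deflation halting time described in the text.
import numpy as np

def qr_halting_time(A, eps=1e-10, max_iter=10**6):
    N = A.shape[0]
    for j in range(1, max_iter + 1):
        Q, R = np.linalg.qr(A)
        A = R @ Q
        for k in range(1, N):                      # off-diagonal block A(k+1:N, 1:k)
            block = A[k:, :k]
            if np.sqrt(N * (N - k)) * np.max(np.abs(block)) < eps:
                return j
    return max_iter
```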
Next, we have to discuss choices for the randomness, or ensemble E, by choosing different distributions on the entries of A. Four such choices of ensembles are the Bernoulli ensemble (BE), the Gaussian orthogonal ensemble (GOE), the Gaussian unitary ensemble (GUE) and the quartic unitary ensemble (QUE):

- BE: A is real-symmetric with iid Bernoulli \pm 1 entries on and below the diagonal.
- GOE: A is real-symmetric with iid standard normal entries below the diagonal. The entries on the diagonal are iid normal with mean zero and variance two.
- GUE: A is complex-Hermitian with iid standard complex normal entries below the diagonal. The entries on the diagonal are iid complex normal with mean zero and variance two.
- QUE: A is complex-Hermitian with probability density proportional to e^{-tr A^4} dA. See Deift (2000) for details on such an ensemble and Olver et al. (2015) for a method to sample such a matrix. Importantly, the entries of the matrix below the diagonal are correlated.

Here we have continuous and discrete, real and complex, and independent and dependent ensembles, but nevertheless we see universality in Figure 2, where we take N = 150 and \epsilon = 10^{-10}.

Figure 2: Empirical histograms for the halting time fluctuations \tau_{\epsilon, N, QR, E} when N = 150, \epsilon = 10^{-10}, for various choices of ensembles E. This figure shows four normalized histograms, one each for E = BE, GOE, GUE and QUE. It is clear that the fluctuations follow a universal law.

Universality in this sense is a measure of stability in an algorithm. For example, it is known from the work of Kostlan (1988) that the halting time for the power method to compute the largest eigenvalue (in modulus) of symmetric Gaussian matrices has infinite expectation, and hence this type of universality is not believed to be present. One could use this to conclude that the power method is a naive method for these matrices. Yet, it is known that the power method is much more efficient on positive-definite matrices, where universality can be shown Deift & Trogdon (2017). Therefore, we have evidence that the presence of universality is a desirable feature of a numerical method.

Remark 1.1. The ensembles discussed above (GOE, GUE, BE and QUE) exhibit eigenvalue repulsion. That is, the probability that two eigenvalues are close^1 is much smaller than if the locations of the eigenvalues were just given by iid points on the line. It turns out that choosing a random matrix with iid eigenvalues breaks the universality that is observed in Figure 2. See Pfrang et al. (2014) for a more in-depth discussion of this.

Remark 1.2. To put the QR algorithm in the framework, let B = U A U^* and define F(A, U) by

\min \{ j : \sqrt{N (N - k)} \, ||B(k + 1 : N, 1 : k)||_\infty < \epsilon \;\; \text{for some} \; 1 \le k \le N - 1 \}

We then use the QR algorithm to minimize F with respect to unitary matrices U, using the initial condition U = I. If A is random, then F(A, U) represents a random field on the unitary group.

^1 By close, we mean that their distance is much less than O(1/N), where N is the size of the matrix.
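For reference, here are samplers for three of the four ensembles under one common normalization; QUE has correlated entries and needs a dedicated sampler, such as the method of Olver et al. (2015), so it is omitted.

```python
# Sketch samplers for the BE, GOE and GUE ensembles described above.
import numpy as np

def sample_goe(N, rng):
    X = rng.standard_normal((N, N))
    return (X + X.T) / np.sqrt(2)        # off-diagonal variance 1, diagonal variance 2

def sample_gue(N, rng):
    X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (X + X.conj().T) / np.sqrt(2)

def sample_be(N, rng):
    X = rng.choice([-1.0, 1.0], size=(N, N))
    return np.triu(X) + np.triu(X, 1).T  # symmetric Bernoulli +-1
```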
A natural class of random fields is the class of Gaussian random functions on a high-dimensional sphere, known as p-spin spherical spin glass models in the physics literature (in the Gaussian process literature they are known as isotropic models). From the point of view of optimization, minimizing the spin glass model's Hamiltonian is fruitful because a lot is known about its critical points. This allows us to experiment with questions regarding whether the local minima and saddle points, due to the non-convex nature of landscapes, present an obstacle in the training of a system. Such observations on the Hamiltonian do not imply that it is a cost function or a simplified version of a cost function. Rather, the features that both systems have in common hint at a deeper underlying structure that needs to be discovered.

In recent years, Dauphin et al. (2014) attacked the saddle point problem of non-convex optimization within deep learning. In contrast, Sagun et al. (2014) and the experimental second section of Choromanska et al. (2014) jointly argue that if the system is large enough, the presence of saddle points is not an obstacle, and add that the local minimum practically gives a good enough solution within the limits of the model. However, Sagun et al. (2014) and Choromanska et al. (2014) hold different perspectives on what the qualitative similarities between optimization in spin glasses and deep learning might imply. The latter asserts a direct connection between the two systems based on these similarities. On the contrary, the former argues that these similarities hint at universal behaviors that are generically observed in vastly different systems, rather than emphasizing a direct connection.

In line with the asymptotic proof in Auffinger et al. (2013), the local minima are observed to lie roughly at the same energy level in spherical spin glasses. Auffinger et al. (2013) also gives asymptotic bounds on the value of the ground state and the exponential behavior of the average of the number of critical points below a given energy level. It turns out that, when the dimension is large, the bulk of the local minima tend to have the same energy, which is slightly above the global minimum.
This level is called the floor level of the function. Simulations of the floor in spin glasses can be found in Sagun et al. (2014). Sagun et al. (2014) also exhibits a floor in a specially designed MNIST experiment: a student network is trained on the outputs of a pre-trained teacher network. Zero cost is achievable by the student, but stochastic gradient descent cannot find zeros. It also does not have to, because the floor level already gives a decent performance.^2

^2 The 2-spin spherical spin glass - a sum of x_{ij} w_i w_j terms - has exactly 2N critical points. When p >= 3, the p-spin model has exponentially many critical points with respect to N. For the latter case, complexity is a measure of the number of critical points on an exponential scale. Deep learning problems are suspected to be complex in this sense.

Given data (i.e., from MNIST) and a measure L(x^l, w) for determining the cost that is parametrized by w, the training procedure aims to find a point w^* that minimizes the empirical training cost while keeping the test cost low. Here x^l \in Z for l \in \{1, ..., S\}, where Z is a random (ordered) sample of size S from the training examples. The total training cost is given by

F(Z, w) = L_{Train}(w) = \frac{1}{S} \sum_{l=1}^{S} L(x^l, w) \qquad (2)

By comparison, the p-spin spherical spin glass Hamiltonian (here with p = 3) is given by

F(x, w) = H_N(w) = \frac{1}{N} \sum_{i,j,k}^{N} x_{ijk} w_i w_j w_k \qquad (3)

The two functions are indeed different in two major ways. First, the domain of the Hamiltonian is a compact space and the couplings are independent Gaussian random variables, whereas the inputs for (2) are not independent and the cost function has a non-compact domain. Second, at a fixed point w, the variance of the function L_{Train}(w) is inversely proportional to the number of samples, but the variance of H_N(w) is N. As a result, a randomly initialized Hamiltonian can take vastly different values, but a randomly initialized cost tends to have very similar values. The Hamiltonian has macroscopic extensive quantities: its minimum scales with a negative constant multiple of N. In contrast, the minimum of the cost function is bounded from below by zero. All of this indicates that landscapes with different geometries (glass-like, funnel-like, or another geometry) might still lead to similar phenomena, such as the existence of the floor level and the universal behavior of the halting time."}, {"section_index": "5", "section_name": "1.4 SUMMARY OF RESULTS", "section_text": "We discuss the presence of universality in algorithms that are of a very different character. The conjugate gradient algorithm, discussed in Section 2.1, effectively solves a convex optimization problem. Gradient descent applied in the spin glass setting (discussed in Section 2.2) and stochastic gradient descent in the context of deep learning (MNIST, discussed in Section 2.3) are much more complicated non-convex optimization processes. Despite the fact that these algorithms share very little geometry in common, we demonstrate three things they share:

- A scaling region in which universality appears and performance is good.
- Regions where the computation is either ineffective or inefficient.
- A moment-based indicator for finding the universality class.

The conjugate gradient algorithm for solving the N x N linear system Ax = b, when A = A^* is positive definite, is an iterative method associated with minimizing the quadratic form

F(A, y) = \frac{1}{2} y^* A y - y^* b

where ^* denotes the conjugate-transpose operation. Given an initial guess x_0 (we use x_0 = b), compute r_0 = b - A x_0 and set p_0 = r_0. For k = 1, ..., N:

1. Compute r_k = r_{k-1} - a_{k-1} A p_{k-1}, where a_{k-1} = (r_{k-1}, r_{k-1}) / (p_{k-1}, A p_{k-1}).
2. Compute p_k = r_k + b_{k-1} p_{k-1}, where b_{k-1} = (r_k, r_k) / (r_{k-1}, r_{k-1}).
3. Compute x_k = x_{k-1} + a_{k-1} p_{k-1}.

If A is strictly positive definite, x_k \to x = A^{-1} b as k \to \infty. Geometrically, the iterates x_k are the best approximations of x over larger and larger affine Krylov subspaces K_k:

||x_k - x||_A = \min_{y \in K_k} ||y - x||_A, \quad K_k = x_0 + \mathrm{span}\{r_0, A r_0, ..., A^{k-1} r_0\}

as k <= N. The quantity one monitors over the course of the conjugate gradient algorithm is the norm^3 ||r_k||:

T_{\epsilon, N, CG, E}(A, b) := \min \{ k : ||r_k|| < \epsilon \}

In exact arithmetic, the method takes at most N steps. In calculations with finite-precision arithmetic the number of steps can be much larger than N, and the behavior of the algorithm in finite-precision arithmetic has been the focus of much research (Greenbaum, 1989; Greenbaum & Strakos, 1992). What is important for us here is that it may happen that ||r_k|| < \epsilon but the true residual \hat{r}_k := b - A x_k (which typically differs from r_k in finite-precision computations) satisfies ||\hat{r}_k|| > \epsilon.

^3 We use the notation ||y||^2 = (y, y) for y \in C^N.

Figure 3: Empirical histograms for the halting time fluctuations \tau_{\epsilon, N, CG, E} when N = 500, \epsilon = 10^{-10}, for various choices of ensembles E. (a) The scaling M = N + 2 \lfloor \sqrt{N} \rfloor, demonstrating the presence of universality. This plot shows three histograms, one each for E = LUE, LOE and PBE. (b) The scaling M = N, showing two histograms for E = LUE and LOE and demonstrating the non-existence of universality.

Now, we discuss our choices for ensembles E of random data. In all computations, we take b = (b_j)_{1 <= j <= N}, where each b_j is iid uniform on (-1, 1). We construct positive definite matrices A by A = \frac{1}{M} X X^*, where X = (X_{ij})_{1 <= i <= N, 1 <= j <= M} and each X_{ij} ~ D is iid for some distribution D. We make the following three choices for D, giving the positive definite Bernoulli ensemble (PBE), the Laguerre orthogonal ensemble (LOE) and the Laguerre unitary ensemble (LUE):

- PBE: D is a Bernoulli \pm 1 random variable (with equal probability).
- LOE: D is a standard normal random variable.
- LUE: D is a standard complex normal random variable.
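A sketch of the halting-time experiment for conjugate gradient in the universal scaling regime follows; the choice M = N + floor(2*sqrt(N)) below is one instance of the M = N + c*sqrt(N) scaling discussed next, and the LOE choice of D is used.

```python
# Sketch of one trial of the CG halting-time experiment (LOE data, scaled M).
import numpy as np

def cg_halting_time(N, eps=1e-10, rng=None):
    rng = rng or np.random.default_rng()
    M = N + int(2 * np.sqrt(N))
    X = rng.standard_normal((N, M))          # LOE; swap the distribution for PBE/LUE
    A = (X @ X.T) / M                        # sample covariance matrix, positive definite
    b = rng.uniform(-1.0, 1.0, size=N)
    x = b.copy()                             # initial guess x0 = b
    r = b - A @ x
    p = r.copy()
    for k in range(1, 10 * N):               # generous cap for finite-precision runs
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < eps:
            return k
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return 10 * N
```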
In Deift et al.[(2014) and Deift et al.[(2015) it is demon- strated that universality is present when M = N + c/N and the e-accuracy is small, but fixed Universality is not present when M = N and this can be explained by examining the distribution of the condition number of the matrix A in the LUE setting (Deift et al.]2015). We demonstrate this again in Figure3(a) 1We also demonstrate that universality does indeed fail for M = N in Figure 3(b)\nThe gradient descent algorithm for the Hamiltonian of the p-spin spherical glass will find a local minimum of the non-convex function (3). Since variance of Hy(w) is typically of order N, a. local minimum has size N. More precisely, by Auffinger et al.(2013), the energy of the floor. level where most of local minima are located is asymptotically at -2/2/3N ~ -1.633N and the ground state is around -1.657N. The algorithm starts by picking a random element w of the sphere. with radius N, SN-1(N), as a starting point for each trial. We vary the environment for each. trial and introduce ensembles by setting x() ~ D for a number of choices of distributions. For. a fixed dimension N, accuracy e that bounds the norm of the gradient, and an ensemble E: (1) Calculate the gradient steps: wt+1 = wt - ntwH(wt), (2) Normalize the resulting vector to the sphere: N wt+1 Te,N,GD,E. This procedure is repeated 10,000 times for different ensembles (i.e. different choices. for D). Figure4exhibit the universal halting time presenting evidence that Te,N,GD,E is independent. of the ensemble.\nUniversal scaling - N = 400 0.5 Gaussian Bernoulli 0.4 Uniform Frenneeey 0.3 0.2 0.1 0.0 -2 0 2 4 6 Halting time fluctuations\nFigure 4: Universality across different distributions: We choose D ~ Gaussian(0, 1), D ~ uniform on (-(3/2)1/3, (3/2)1/3) and D ~ Bernoulli 1/2 with equal probability.\nA deep learning cost function is trained on two drastically different ensembles. The first is the. MNIST dataset, which consists of 60,000 samples of training examples and 10,000 samples of. test examples. The model is a fully connected network with two hidden layers, that have 500 and 300 units respectively. Each hidden unit has rectified linear activation, and a cross entropy cos. is attached at the end. To randomize the input data we sample 30K samples from the training se each time we set up the model and initialize the weights randomly. Then we train the model by the stochastic gradient descent method with a minibatch size of 100. This model gets us about 97%\nUniversality in fully connected network with SGD 0.5 Fully connected MNIST Fully connected random 0.4 MNIST on convnet MNIST on norm condition rouanbey 0.3 0.2 0.1 0.0 0 2 4 Halting time fluctuations\nFigure 5: Universality in the halting time for deep learning cost functions. MNIST digit inputs and independent Gaussian noise inputs give rise to the same halting time fluctuations, as well as a convnet with a different stopping condition..\naccuracy without any further tuning. The second ensemble uses the same model and outputs, bu. the input data is changed from characters to independent Gaussian noise. This model, as expected. gets us only about 10% accuracy: it randomly picks a number! The stopping condition is reachec. when the average of successive differences in cost values goes below a prescribed value. As a. comparison we have also added a deep convolutional network (convnet), and we used the full connected model with a different stopping condition: one that is tied to the norm of the gradient. 
Figure|5|demonstrates universal fluctuations in the halting time in all of the four cases.."}, {"section_index": "6", "section_name": "3 CONCLUSIONS", "section_text": "What are the conditions on the ensembles and the model that lead to such universality? What con stitutes a good set of hyperparameters for a given algorithm? How can we go beyond inspectior when tuning a system? How can we infer if an algorithm is a good match to the system at hand What is the connection between the universal regime and the structure of the landscape? This re search attempts to exhibit cases where one can extract answers to these questions in a robust anc quantitative way. The examples we have presented clearly exhibit universality. The normalized mo ment analysis, presented in the Appendix, gives a quantitative way to test for universality. And we further believe that an algorithm that exhibits universality is running in a scaling region of \"higl performance': universality is a measure of insensitivity to initial data which is a beneficial property of a numerical method. Establishing this claim is a difficult task, beyond the scope of this primarily empirical work.\nTe,N,A,E ~ + OTA,\nTe,N,A,E ~ + OTA\nP(|Te,N,A,E - | ol) ~ P(I*| l)\nIf t* has (or is just conjectured to have) exponential tails, for example, this can be quite useful\nThis work also validates the broad claims made in Deift et al.(2015) that universality is present ir. all or nearly all (sensible) computation. Future work will be along the lines of using these heuristic.. to identify when we have universality, to identify the different kinds of landscapes, and to guide both. algorithm development and algorithm tuning. Furthermore, one would like theoretical estimates fo the mean e.N.A.E and the standard deviation Oe.N.A.E..\nMore specifically, the current work gives empirical evidence that within an appropriate scaling re gion, the halting time can often be approximated as.\nwhere t* is a mean-zero, variance one universal distribution. If this holds, a simple estimate of the mean = e,N,A,E and the standard deviation o = e,N,A,E using a few samples will give a good. a priori estimate of algorithm run time"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Robert J Adler and Jonathan E Taylor. Random fields and geometry. Springer Science & Busines. Media, 2009.\nAntonio Auffinger, Gerard Ben Arous, and Jiri Cerny. Random matrices and complexity of spi glasses. Communications on Pure and Applied Mathematics, 66(2):165-201, 2013\nAnna Choromanska. Mikael Henaff. Michael Mathieu, Gerard Ben Arous. and Yann LeCun. The loss surface of multilayer networks. arXiv preprint arXiv:1412.0233, 2014.\nYann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex op- timization. In Advances in Neural Information Processing Svstems. pp. 2933-2941. 2014\nPercy Deift. Orthogonal polynomials and random matrices: a Riemann-Hilbert approach, volume 3 American Mathematical Soc., 2000.\nPercy Deift and Thomas Trogdon. Universality for eigenvalue algorithms on sample covariance matrices. arXiv Preprint arXiv:1701.01896, pp. 1-31, 2017.\nPercy Deift, Govind Menon, Sheehan Olver, and Thomas Trogdon. Universality in numerical com putations with random data. Proceedings of the National Academy of Sciences, 111(42):14973- 14978, 2014.\nPercy Deift, Govind Menon, and Thomas Trogdon. 
On the condition number of the critically-scalec laguerre unitary ensemble. arXiv preprint arXiv:1507.00750, 2015.\nMagnus Rudolph Hestenes and Eduard Stiefel. Method of Conjugate Gradients for solving Linea. Systems. J. Res. Nat. Bur. Stand., 20:409-436, 1952\nJason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent converge to minimizers. University of California, Berkeley, 1050:16, 2016\nWe thank Percy Deift for valuable discussions and Gerard Ben Arous for his mentorship throughout. the process of this research. The first author thanks very much to Ugur Guney for his availability for support and valuable contributions in countless implementation issues. This work was partially supported by the National Science Foundation under grant number DMS-1303018 (TT)..\nPercy Deift and Thomas Trogdon. Universality for the Toda algorithm to compute the eigenvalues of a random matrix. arXiv Prepr. arXiv1604.07384, apr 2016. URLhttp://arxiv.org/ abs/1604.07384\nAnne Greenbaum. Behavior of slightly perturbed lanczos and conjugate-gradient recurrences. Lin ear Algebra and its Applications. 113:7-63. 1989\nMoritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.\nEric Kostlan. Complexity theory of numerical linear algebra. Journal of Computational and Appliea Mathematics, 22(2):219-230, 1988.\nLevent Sagun, V Ugur Guney, Gerard Ben Arous, and Yann LeCun. Explorations on high dime. sional landscapes. arXiv preprint arXiv:1412.6615, 2014"}, {"section_index": "8", "section_name": "APPENDIX", "section_text": "Asymptotic error bounds for loss functions have been useful in the study of convergence properties of various models under various algorithms, for instance, at the heart of[Hardt et al.[(2015) and Lee et al.[(2016) lies a bound that depends largely on the number of steps. Such bounds, even when they are tight, hold asymptotically. The finite time behaviour may be less pessimistic and it may prove to be useful in many practical concerns. For example, assuming the assumptions for a possible bound of an asymptotic nature could give results that are a lot more pessimistic."}, {"section_index": "9", "section_name": "EFFECTS OF VARYING ACCURACY IN OPTIMIZATION", "section_text": "In Figure 6] we plot ensemble averages of efficiency versus accuracy for different e's. A sharp. plateau in the accuracy is seen, indicating that the extra computation for small values of e is unnec essary. In MNIST and the spin glass example, the extra computation does not come with a gain in. performance.\nIn the spin glass setting, the floor value gives a natural bound on the value that the Hamiltonian car. practically reach. That value is above the ground state at an energy level where most local minima lie. This level presents a natural barrier for an algorithm like the gradient descent. Therefore a natural measure of performance at the point w* is H(w*)/(floor value). In MNIST, performance is the percentage of correct guesses in the test."}, {"section_index": "10", "section_name": "NORMALIZED-MOMENT ANALYSIS", "section_text": "We use the normalized third and fourth moments of the data, also referred to as the skewness and kurtosis, to identify which class the distributions belong to. Note that the first and second moments are zero and one since the date is normalized.\nIntuitively, in gradient based methods, the halting time is effected by the curvature of the surface. 
And the curvature of the surface describes the landscape along the path of decay. The Gaussian-like behavior of halting time in MNIST might allow us to speculate that it has a funnel like non-convex landscape rather than a glassy landscape. This observation is consistent with Sagun et al.(2014) in. its landscape exploration for spin glasses and deep learning..\nPerformance of GD on the spin glass model. 100 E400 0.01 E400 400 95 0.1 75 = 0.01 E75 90 000 00 85 0 e75 = 2 80 75 N=75 70 N=400 65 500 0 1000 1500 2000 2500 3000 Average number of steps it takes to reach e (a) Test performance vs. number of steps. 10000 = 0.05 9800 9600 9400 9200 9000 fully connected 8800 convnet 0 500 1000 1500 2000 2500 3000 Average number of steps it takes to reach e accuracy (b)\n100 400 0.1 E400 E400 0.0 95 75 = 0.1 75 = 0.01 90 8 85 0 E75 = 2 80 75 N=75 70 N=400 65 0 500 1000 1500 2000 2500 3000\nFigure 6: (a) Norm of the gradient varies from 5 to O.01 for the spin glass. (b) Averages of consecutive costs on MNIST that varies from 0.6 to 0.005.\ne = 0.05 O O O e = 0.05 00 O fully connected convnet 500 1000 1500 2000 2500 3000\nMODEL ENSEMBLE MEAN ST.DEV. 3RD 4TH CG:M = N LOE 970 164 5.1 35.2 CG:M = N LUE 921 46 15.7 288.5 CG:M = N +2VN LOE 366 13 0.08 3.1 CG:M = N+2|VN LUE 367 9 0.07 3.0 CG:M = N +2|VN PBE 365 13 0.08 3.0 SPIN GLASS GAUSSIAN 192 79.7 1.10 4.58 SPIN GLASS BERNOULLI 192 80.2 1.10 4.56 SPIn GLASS UNIFORM 193 79.6 1.10 4.54 QR BE 26 15 1.18 4.77 QR GOE 24 14 1.17 4.78 QR GUE 22 12 1.04 4.32 QR QUE 22 12 1.02 4.16 FULLY CONNECTED MNIST 2929 106 -0.32 3.24 FULLY CONNECTED RANDOM 4223 53 -0.08 2.98 CONVNET MNIST 2096 166 -0.11 3.18 COND. ON GRADIENT MNIST 3371 118 -0.34 3.31\nTable 1: Skewness (3rd moment) and kurtosis (4th moment) for the experiments: (1) In the M = N + 2| N | it is clear that these normalized moments nearly coincide and they are quite distinct for. M = N. (2) Gumbel like distribution in spin glasses and QR. (3) Gaussian-like distribution, with a. flat left tail for deep learning."}]
HkvS3Mqxe
[{"section_index": "0", "section_name": "COARSE PRUNING OF CONVOLUTIONAL NEURAI NETWORKS WITH RANDOM MASKS", "section_text": "Sajid Anwar, Wonyong Sung\nDepartment of Electrical Engineering and Computer Science Seoul National University\nsaiid@dsp.snu.ac.kr, wysung@snu.ac.kr\nThe learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel pruning for reducing the computational complexity of a deep con volutional neural network. Due to coarse nature, these pruning granularities can be exploited by GPUs and VLSI based implementations. Further, we propose a simple strategy to choose the least adversarial pruning masks. The proposed ap- proach is generic and can select good pruning masks for feature map, kernel and intra-kernel pruning. The pruning masks are generated randomly, and the best performing one is selected using the validation set. The sufficient number of ran- dom pruning masks to try depends on the pruning ratio, and is less than 100 when 40% complexity reduction is needed. Once the least adversarial pruning mask is selected, we prune and retrain the network in one-shot. The proposed approach therefore consumes less time compared to iterative pruning. We have extensively evaluated the proposed approach with the CIFAR-100, CIFAR-10, SVHN, and MNIST datasets. Experiments show that 60-70% sparsity can be induced in the convolution layers with less than 1% increase in the misclassification rate of the baseline network."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep and wider neural networks have the capacity to learn a complex unknown function from the training data. The network reported inDean et al.(2012) has 1.7 billion parameters and is trained on tens of thousands of CPU cores. Similarly (Simonyan & Zisserman2014) has employed 11-19 layers and achieved excellent classification results on the ImageNet dataset. However, the increasing depth and width demands higher computational power. This high computational complexity is a major obstacle in porting the benefits of deep learning to resource limited devices. Further, the hot- spot for optimization are the convolution layers, as most of the computations are conducted there. Therefore, many researchers have proposed ideas to accelerate deep networks for real-time inference Yu et al.2012 :Han et al.2015b a :Mathieu et al (2013):Anwar et al.(2015b).\nNetwork pruning is one promising technique that first learns a function with a sufficiently large. sized network followed by removing less important connections[Yu et al.(2012); Han et al.(2015b) Anwar et al.[(2015b). This enables smaller networks to inherit knowledge from the large sized pre. decessor networks and exhibit a comparable level of performance. The works of|Han et al.[(2015b a] introduce fine grained sparsity in a network by pruning scalar weights. Due to unstructured sparsity. the authors employ compressed sparse row/column (CSR/CSC) for sparse representation. Thus the fine grained irregular sparsity cannot be easily translated into computational speedups.."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Sparsity in a deep convolutional neural network (CNN) can be induced at various levels. Figure[1 shows four pruning granularities. At the coarsest level, a full hidden layer can be pruned. 
This is shown with a red colored rectangle in Fig.[1(a). Layer wise pruning affects the depth of the network and a deep network can be converted into a shallow network. Increasing the depth improves the net- work performance and layer-wise pruning therefore demand intelligent techniques to mitigate the\nPruning granularity (coarse (left) to fine grained (right). 0 Width reduction Conv] Conv2 (a) Layer-wise pruning (c) k k Kernel-pruning (d) Intra-kernel-pruning (b) Feature map pruning Depth reduction Sparse representation (increasing complexity from left to right)\nFigure 1: (a-d) shows four possible pruning granularities. The proposed work is focussed on the (b) feature map and (c) kernel pruning for simple sparse represenation. It can be observed that for the depicted architecture in Fig. (b), four convolution kernels are pruned..\nperformance degradation. The next pruning granularity is removing feature maps Polyak & Wolf (2015);Anwar et al.](2015b). Feature map pruning removes a large number of kernels and may degrade the network performance much. We therefore may not achieve higher pruning ratios with this granularity. For the depicted architecture in Fig. 1[(b)., pruning a single feature map, removes four kernels. Feature map pruning affects the layer width and we directly obtain a thinner network and no sparse representation is needed. Kernel pruning is the next pruning granularity and it prunes k k kernels. It is neither too fine nor too coarse and is shown in Fig. 1(c). Kernel pruning is therefore a balanced choice and it can change the dense kernel connectivity pattern to a sparse one. Each convolution connection involves W H k k multiply and accumulate (MAC) oper ations where W, H and k represents the feature map width, height and the kernel size, respectively Further the sparse representation for kernel pruning is also very simple.A single flag is Pre-Train the Network enough to represent one convolution connec- to the Baseline\nperformance degradation. The next pruning granu 2015); Anwar et al.(2015b).Feature map prunir degrade the network performance much. We theref this granularity. For the depicted architecture in Fig four kernels. Feature map pruning affects the layer and no sparse representation is needed. Kernel prur k k kernels. It is neither too fine nor too coarse therefore a balanced choice and it can change the d Each convolution connection involves W H ations where W, H and k represents the feature map Further the sparse representation for kernel pruning is also very simple. A single flag is enough to represent one convolution connec- tion. Generally, the pruning techniques in- duce sparsity at the finest granularity by remov- ing scalar weights. This sparsity can be in- duced in much higher rates but high pruning ratios do not directly translate into computa- tional speedups in VLSI or parallel computer based implementationsHan et al.(2015b). Fig- ure 1(d) shows this with red colored zeroes in the kernel. Further Fig. 1 summarizes the re- lationship between three related factors: the pruning granularities, the pruning ratios and the sparse representations. Coarse pruning granu- larities demand very simple sparse representa- tion but higher pruning ratios are comparatively difficult to achieve. Similarly fine grained prun- ing granularities can achieve higher pruning ra- tios but the sparse representation is more com- Fi plicated. The proposed work therefore prunes or feature maps and kernels in a network. 
Experi- ge mental evaluations show that better pruning re- ite sults can be achieved when a network is pruned ac with both granularities successively Si2\nAn important contribution of this work is. proposing a simple and generic strategy for the selection of pruning masks. Finding pruning. candidates is an important and difficult prob-\nInducible pruning ratios Inside allowable budget (increasing from left to right), (e.g., budget = Accuracy)\nPre-Train the Network to the Baseline One- Shot vs. Iterative Iterative One-Shot = tpr/M = tpr For current pruning ratio For current pruning ratio cpr = j,generate mmask,where cpr = tpr, generate mmask,where i i =1, 2,...,N = 1, 2,..., N Evaluate the MCR on each Evaluate the MCR on each W .* m, network W.* m, network Choose the best pruning mask, Choose the best pruning mask m= argminmi(MCRm) m=argminmi(MCRm) Re-initialize from the baseline network Re-initialize from the baseline network, prune with m,retrain and increment j prune with m, and retrain\nFigure 2: This figure compares the iterative and. one-shot pruning. tpr and cpr represents the tar-. get and current pruning ratio respectively. The. iterative pruning Han et al.. (2015b) gradually achieves the target pruning ratio in M steps of . size each, while the = tpr for one-shot prun- ing. This work adopts the one-shot pruning ap-. proach.\nem. Generally, in the literature granularity specific pruning strategies are reported Han et al. 2015b);[Li et al.(2016). (Anwar et al.]2015b) have developed a particle filtering approach, where the sequential importance resampling is employed. The proposed strategy randomly generates pruning masks, evaluates the importance of each mask with the validation set, selects the best masl having the argminm, (M C Rm,), prunes and retrains the networkYu et al.(2012). It is important tc. mention here that the pruning can be conducted in one-shot or iteratively. This difference is shown ir. Fig.2 For a target pruning ratio (tpr), the iterative process gradually increases sparsity and repeat. the process M times. On the other hand, the one-shot pruning induces the target pruning ratio ir. one step. We employ one-shot pruning as the retraining after pruning consumes much time. Thus. the one shot pruning is much more efficient in terms of the optimization time. We show experimen. tally that the proposed algorithm can select better pruning candidates compared to other methods. Further, our approach is not computationally expensive as it involves N random evaluations on the. small sized validation set.\nPruning reduces the number of network parameters and inevitably degrades the classification per formance. The pruning candidate selection is therefore of prime importance. For a specific pruning ratio, we search for the best pruning masks which afflicts the least adversary on the pruned net work. Indeed retraining can partially or fully recover the pruning losses, but the lesser the losses. the more plausible is the recoveryMishkin & Matas(2015). Further small performance degradatior also means that the successor network has lost little or no knowledge of the predecessor network. I there are M potential pruning candidates, the total number of pruning masks is (2M) and an exhaus tive search is therefore infeasible even for a small sized network. We therefore propose a simple anc greedy strategy for selecting pruning candidates.\nWe initialize a network with pre-trained parameters. These parameters may be learnt on the same or related problem. 
We randomly generate N pruning masks and compute the misclassificatior rate (MCR) for each one. We then choose the best pruning mask with maximum accuracy on the validation set. Referring to the depicted architecture in Fig|4a] suppose we need to select feature map pruning candidates in layer L2 and L3 with 1/3 pruning ratio. If N = 4, the following N ordered pairs of feature maps may be randomly selected for (L2, L3) : (1, 2), (2, 3), (3, 1), (1, 1). These combinations generate random paths in the network and we evaluate the validation set MCR through these routes in the network.\nHowever, this further raises the question of how to approximate N. We analyze the relationship. between pruning ratio and N on three datasets and the results are reported in Fig. 3] This analysis. is conducted for feature map pruning but is also applicable to other pruning granularities. It can be. observed from Fig. 3a|and [3c] that for higher pruning ratios, bigger value of N is beneficial as i1. results in better pruning candidate selection. Moreover, for the pruning ratio of no more than 40%. N = 50 random evaluations generate good selections. For lower pruning ratios, retraining is alsc. more likely to compensate the losses as the non-pruned parameters may still be in good numbers. The computational cost of this technique is not much as the evaluation is conducted on the smal sized validation set. By observing Fig. 3a|and 3c], we propose that the value of N can be estimated initially and later used in several pruning passes. The plots in Fig.3b|and|3d|show the pre-retraining. distribution of N random masks. Further, the plots in Fig. 3b|and 3d] shows that the distributions. are narrow for small pruning ratios..\nWe further analyze the effect of retraining on the pruning mask selection. We prune a network with several masks and retrain each pruned network. As several networks needs to be pruned and retrained many times, we experiment with a small network where the architecture is reported like this: 32(C5) - MP2 - 64(C5) - MP2 - 64(C5) - 64FC - 10Softmax. The network is trained with the CIFAR-10 dataset (40,000 training samples) without any data augmentation and batch normalization. The network achieves the baseline performance of 26.7% on the test set. The results are reported in Fig.4d where the pre and post-retraining network performance is shown on\nThe rest of the paper is organized as follows. Section2lprovides detailed explanations on the pruning candidate selection. Section 3 discusses the two pruning granularities while Section4|presents the experimental results. In Section 5] recent related works are revisited. We finally conclude the discussion in Section|6|and add the future research dimensions for this work.\nPrune Ratio 88.8672 Prune Ratio 81.4616 Prune Ratio 81.4616 Prune Ratio 61.8076 120 Prune Ratio 34.7005 Prune Ratio 34.7005 100 20 100 200 300 400 500 600 700 800 900 1000 Random Pruning Masks N 30 MCRyalidationSet (a) Best of N masks for CIFAR10 CN Nsmall (b) Distribution of N masks for CIFAR10 C N Nsm 160 Prune Ratio 90.5132 Prune Ratio 83.3969 Prune Ratio 83.3969 140 Prune Ratio 63.6679 70 Prune Ratio 63.6679 Prune Ratio 35.9769 Prune Ratio 35.9769 120 100 60 100 200 300 400 500 600 700 800 900 Random Pruning Masks N 1000 10 20 MCRValidationSet c) Best of N.. masks for C N Na.\nFigure 3: The network architectures are reported in Table[1] The networks are feature map pruned to generate the pre-retraining plots. Figure (a, c) compares the best candidate selected out of N random. 
combinations for various pruning ratios. The distribution of N random evaluations is shown in Fig (b, d). We can observe that it resembles a Gamma distribution. Further, for higher pruning ratios. the distribution resembles a bell-shaped curve. Analyzing Fig. (a,c) with Fig. (b,d), we infer that. bigger N may be beneficial for higher pruning ratios..\nthe x and y axis, respectively. Further, we superimpose a least-squares (LS) line fit to each of th scatter plot. It can be observed that the slope of the LS line decreases for higher pruning ratios. W infer that for high pruning ratios, the final network performance is dictated by the surviving numbe of effective parameters. It can be observed that the overall distribution is noisy. However, in general. the pre-retraining least adversarial pruning masks perform better after retraining. In the rest of this work, we therefore use the pre-retraining best mask for pruning the network..\nWe further compare this method with the weight sum criterion proposed in Li et al.(2016) and shown in Fig. 4a The set of filters or kernels from the previous layer constitute a group. This is shown with the similar color in Fig. 4a According toLi et al.(2016), the absolute sum of weights determine the importance of a feature map. Suppose that in Fig|4a] the Layer L2 undergoes feature map pruning. The weight sum criterion computes the absolute weight sum at S1, S2 and S3. If we further suppose that the pruning ratio is 1/3, then the min(S1, S2, S3) is pruned. All the incoming and outgoing kernels from the pruned feature map are also removed. We argue that the sign of a weight in kernel plays important role in well-known feature extractors and therefore this is not a good criterion.\nWe compare the performance of the two algorithms and Fig.4b|and4c shows the experimental results. These results present the network status before any retraining is conducted. We report the performance degradation in the network classification against the pruning ratio. From Fig.4b and4c, we can observe that our proposed method outperforms the weight sum method particularly for higher pruning ratios. The best of N Pruning masks strategy evaluates pruning candidates in combinations and provides a holistic view. The criterion in Li et al.(2016) evaluates the importance of a pruning unit in the context of a single layer while our proposed approach evaluates several paths through the network and selects the best one. The combinations work together and matter more instead of individual units. Further, our proposed technique is generic and can be used for any pruning granularity: feature map. kernel and intra-kernel pruning\n140 120 100 80 60 40 20 0 0\nFigure 4: (a) This figure explains the idea presented in|Li et al. (2016) and shows three layers, L1 L2 and L3. All the filters/kernels from previous layer to a feature map constitute one group which is shown with similar color. The S1,S2 and S3 is computed by summing the absolute value of all the weights in this group. (b) The comparison of the proposed method with the absolute weight sum method is shown here for the CN Nsv H n. It can be observed that our proposed method inflicts lesser adversary on the network for different pruning ratios. (d) In this plot, we prune a CNN network with various masks and compare their pre and post retraining performance. It can be observed that on the average, pre-retraining masks perform better after retraining.\nIn this section we discuss feature map and kernel pruning granularities. 
For a similar sized network we analyze the achievable pruning ratios with feature map and kernel pruning. In terms of granu larity, feature map pruning is coarser than kernel pruning. Feature map pruning does not need any sparse representation and the pruned network can be implemented in a conventional way, convolu tion lowering[Chellapilla et al. (2006) or convolution with FFTs[Mathieu et al.(2013). The propose work analyzes the unconstrained kernel and feature map pruning. Pruning a feature map eliminate all the incoming and outgoing kernels because the outgoing kernels are no more meaningful.\nKernel pruning is comparatively finer. The dimension and connectivity pattern of 2D kernels deter-. mine the computing cost of a convolutional layer. The meshed fully connected convolution layers. increases this cost and can hinder the real-time inference. In LeNet|LeCun et al.[(1998), the second. convolution layer has 6 16 feature maps and the kernel connectivity has a fixed sparse pattern.. With kernel pruning, we learn this pattern and convert the dense connectivity to sparse one. Kernel. pruning zeroes k k kernels and is neither too fine nor too coarse. Kernel level pruning provides a. balance between fine-grained and coarse-grained pruning. It is coarser than the intra-kernel sparsity. and finer than the feature map pruning. Depending on the network architecture, kernel pruning may. achieve good pruning ratios at very small sparse representation and computational cost. Each con-. volution connection represents one convolution operation which involves width x heiaht x k x k\nS1 S1 Baseline MCR = 3.93% FM FM1 FeatureMap Pruning with Weight Sum Voting FeatureMap Pruning with N = 10 Rand Evaluattions FeatureMap Pruning with N = 20 Rand Evaluattions - FeatureMap Pruning with N = 50 Rand Evaluations FeatureMap Pruning with N = 100 Rand Evaluations FeatureMap Pruning with N = 200 Rand Evaluations 70 FM, FM2 r FM3 FM3 S3 S3 L2 L3 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 J 1 Pruning Ratio (a) Absolute weight sum votingLi et al.(2016 (b) Weight sum vs. best of N random masks. 50 33 45 Baseline MCR 0.62% Pruning with weight sum voting 40 32 Pruning with the best of 10 random masks Pruning Ratio 31.12 % Pruning with the best of 20 random masks X+ Pruning Ratio 56.73% 35 Pruning with the best of 50 random masks Pruning Ratio 66.7% Pruning with the best of 100 random masks Pruning Ratio 77.13% Pruning with the best of 200 random masks 30 )set XX MCR 20 X 15 10 5 26 25 30 40 50 60 70 80 90 5 MCR with Pre-Retraining Pruning Masks 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Feature Map Pruning Ratio (d) Pre and Post Retraining Pruning masks cBest of random masks ys (a.\n20 3 19.5 (1). MCRBaseline = 16.260% (2). MCRBaseline + To(1.0) = 17.26% 2.5 19 - (3). FeatureMap Pruning (5). Kernel Pruning (1). MCRBaseline = 0.79% x (2). Kernel Pruning 2 18 O (3). FeatureMap Pruning 17.5 1.5 MCR 17 16.5 16 0.5 15.5 15 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 7 Pruning Ratio 0 0 0.2 0.4 0.6 0.8 1 Pruning Ratio (a) Feature map and kernel pruning of CIFAR-10 CN Nsmall (b) MNIST feature map and kernel pruning\nFigure 5: Figure (a) and (b) shows feature map and kernel pruning of two networks: CNNc1FAR-10.small and CNNMN1sT2. The corresponding network architectures are reported in Table[1 The network can be pruned by more than 50% with very small degradation in perfor- mance. 
Further, due to finer nature, the kernel pruning may inflict lesser adversary on the network performance.\n140 120 Conv1 3 x 128 Conv2 128 128 Conv3 128 x 128 100 Conv4 128 128 Conv5 128 x 256 80 60 40 20 F; Fo dim3 dimGrid (F F pr, BatchSize, 1); 0 dim3 dimThread (H, W, 1); 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 Kernel Prune Ratio (b) Custom GPU kernel f (a) Profiling kernel pruning. convolutions\n120 100 80 60 40 20 0\nFigure 6: (a) This figure shows the profiling results for kernel pruning with a customized GPU im plementation. It can be observed that the kernel pruning reduces the execution time. The experiment is conducted with the CIFAR-1O CNN. In (b), F, and Fo shows the input and output feature maps. while pr represents the pruning ratio. The GPU function scheduler shows that the call is only for. non-masked kernels.\nMAC operations. We first select pruning candidates with the criterion outlined in Section 2] The pruned network is then retrained to compensate for the losses incurred due to pruning. Figure 5a and|5b|show that depending on the network architecture, kernel pruning may achieve higher pruning. ratio than feature map pruning due to finer granularity. As the sparse granularities are coarse, a. generic set of computing platform can benefit from it. One disadvantage of the unconstrained kernel pruning is that the convolution unrolling technique cannot benefit from it Chellapilla et al.[(2006) However, customized VLSI implementations and FFT based convolutions do not employ convolu-. tion unrolling.Mathieu et al.(2013), have proposed FFT based convolutions for faster CNN training. and evaluation and the GPU based parallel implementation showed very good speedups. As com-. monly known that the IFFT(FFT(kernel) FFT(featuremap)) = kernel * featuremap,. the kernel level pruning can relieve this task. Although the kernel size is small, massive reusability. of the kernels across the mini-batch enables the use of FFT. The FFT of each kernel is computed. only once and reused for multiple input vectors in a mini-batch. In a feed-forward and backward path, the summations can be carried in the FFT domain and once the sum is available, the IFFT can\nTable 1: Specifications of the three CIFAR-10 networks\nNetwork Architecture Baseline MCR(%) Data Augmentation CNNMNIST1 16(C532C5)64C5)120-10 0.62 NO CNNMNIST2 6(C5) - 16(C5) 120(C5) - 84 - 10 0.79 NO CNNCI FAR10.small 2 x 128C3- MP2 -2 128C3- MP2-2 256C3-256FC -10Softmax 16.6 NO CN NCI FAR10.large 2 128C3- MP2 -2 256C3- MP2-2 256C3-1 512C3-1024FC - 1024FC -10Softmax 9.41 YES CNNSVHN (2 64C3)- MP2- (2 128C3)- MP2- (2 128C3)-512FC-512FC -10Softmax 3.5 NO CNNCIFAR100 (2 128C3) - MP2 - (2 128C3) - MP2 - (2 256C3) - 256C3 - 512FC -10Softmax 33.65 YES 13 20 x- FeatureMap Pruning 1.MCR = 16.260% > Kernel Pruning 19.5 (2). MCRBaseline +Tol(1.0) Baseline 12 = 17.26% -O-- FeatureMap followed by Kernel Pruning 19 (3). FeatureMap Pruning (4). Feature Map Followed by Kernel Pruning - Baseline MCR = 9.39% (5). Kernel Pruning Baseline + Tolerance (1.0%) (6). Kernel Prune Followed by Feature Map Pruning 11 18.5 Rernnn 18 AAier 10 17.5 les set MoR 9 16.5 16 8 15.5 7 15 0 0.2 0.4 0.6 0.3 0.7 0.8 0.2 0.5 0.8 1 0 0.1 0.4 0.6 0.9 Prune RatioConv2-Conv7 Pruning Ratio (a) CIFAR-10 C N Nsmall (b) CIFAR-10 C N N1arge\nFigure 7: The combinations of feature map and kernel pruning is reported here. Figure (a) and (b provides pruning results for the CNNc1FAR10.small and CNNc1FAR1o.large networks. 
It can be observed from both figure, that more sparsity can be induced in the network by indcuing sparsit with two granularities.\nbe performed Mathieu et al.(2013). Similarly, a customized VLSI based implementation can alsc benefit from the kernel level pruning. If the VLSI implementation imposes a constraint on the prun ing criterion, such as the fixed number of convolution kernels from the previous to the next layer, the pruning criterion can be adapted accordingly. In the next Section, we report and discuss the exper imental results in detail. As the commonly available libraries do not support masked convolutions we therefore profile kernel pruning with customized GPU functions. We call the GPU function only for the non-pruned convolution kernels and pass the appropriate indices. It can be observed thai fewer number of convolutions will reduce the required number of GFLOPs. Howevr, we conjecture that the true benefit of kernel pruning can be obtained with FFT based masked convolution.\nIn this section, we present detailed experimental results with the CIFAR-1O and SVHN datasets. Krizhevsky & Hinton(2009). We experiment on three image classification problems and induce sparsity feature map and kernel wise. We also prune one network with more than one pruning. granularity in combinations. During training and pruning, we use the stochastic gradient descent (SGD) and batch normalization Ioffe & Szegedy(2015). As elaborated in Section[1] we do not prune the network in small steps, and instead one-shot prune the network for a given pruning ratio. followed by retraining. The experimental results are reported in the corresponding subsections.."}, {"section_index": "3", "section_name": "4.1 CIFAR-10", "section_text": "The CIFAR-10 dataset includes samples from ten classes: airplane, automobile, bird, cat, deer dog, frog, horse, ship and truck. The training set consists of 50,o0o RGB samples and we allo cate 20% of these samples as validation set. Test set contains 10.000 samples and each sample has 32 32 RGB resolution. We evaluate the proposed pruning granularities with two networks. CN Nc1FAR10.small and CNNc1FAR10.large. CNNc1FAR10.small has six convolution and two overlapped max pooling layers. We report the network architecture with an alphanumeric string as\nTable 2: Feature map and kernel level pruning (75%) in CN Nc1\nreported in|Courbariaux et al. (2015) and outlined in Table[1] The (2 128C3) represents two con- volution layers with each having 128 feature maps and 3 3 convolution kernels. M P2 represents 3 3 overlapped max-pooling layer with a stride size of 2. We pre-process the original CIFAR-10 dataset with global contrast normalization followed by zero component analysis (ZCA) whitening.\nThe CN Nc1FAR1o.large has seven convolution and two max-pooling layers. Further, online data. augmentations are employed to improve the classification accuracy. We randomly crop 28 28 : patches from the 32 32 3 input vectors. These cropped vectors are then geometrically transforme randomly. A vector may be flipped horizontally or vertically, rotated, translated and scaled. A evaluation time, we crop patches from the four corners and the center of a 32 32 3 patch anc. flip it horizontally. We average the evaluation on these ten 28 28 3 patches to decide the fina label. Due to larger width and depth, the CN Nc1FAR1o.large achieves more than 90% accuracy or. the CIFAR-10 dataset. The CN Nc1FAR10.small is smaller than CNNc1FAR10.large and trainec without any data augmentation. 
The CN Nc1FAR10.smal therefore achieves 84% accuracy.."}, {"section_index": "4", "section_name": "4.1.1 FEATURE MAP AND KERNEL LEVEL PRUNING", "section_text": "For the same network, we can see that kernel level pruning performs better. We can achieve 70% sparsity with kernel level pruning. This is attributed to the fact that kernel pruning is finer anc hence it achieves higher ratios. Further kernel pruning may ultimately prune a feature map if all th incoming kernels are pruned. However at inference time, we need to define the kernel connectivity pattern which can simply be done with a binary flag. So although the sparse representation is needed it is quite simple and straightforward. Experimental results confirm that fine grained sparsity can b induced in higher rates. We achieved 70% kernel wise sparsity for Conv2 - Conv6 and the networl is compressed with very simple sparse representation."}, {"section_index": "5", "section_name": "4.1.2 COMBINATIONS OF KERNEL AND FEATURE MAP PRUNING", "section_text": "In this section we discuss the various pruning granularities applied in different combinations. We first apply the feature map and kernel pruning to the CN Nc1FAR1o.small network in different or- ders. With feature map pruning, we can achieve 60% sparsity under the budget of 1% increase in MCR. But at this pruning stage, the network learning capability is affected much. So we take a 50% feature map pruned network, where the CN Nc1FAR10.small is reduced to (128C3 - 89C3)- MP3-(89C3 - 89C3)-MP3-(179C3 - 179C3)-256FC-10Softmax. As pruning is only applied to Conv2 - Conv6, therefore in Fig.5al, pruning ratios are computed only for these layers. This network then undergoes kernel level pruning. The blue rectangle line in Figure 7a shows the pruning\nFeature Maps Pruned Feature Maps Feature Maps Prune Ratio Pruned Kernels (%) Conv Connections Kernel Prune Ratio (%) Conv2(128 128) 128 x 89 30.5 27306/9 = 3034 11392 3034/11392 = 26.6 Conv3(128 128) 89 x 89 51.5 18702/9 = 2078 7921 2078/7921 = 26.2 Conv4(128 128) 89 x 89 51.5 18702/9 = 2078 7921 2078/7921 = 26.2 Conv5(128 x 256) 89 x 179 51.4 37881/9 = 4209 15931 4209/15931 = 26.4 Conv6(256 256) 179 x 179 51.1 76851/9 = 8539 32041 8539/32041 = 26.6\nAfter layer pruning, feature map pruning is the 2nd coarsest pruning granularity. Feature map pruning reduces the width of a convolutional layer and generates a thinner network. Pruning a single feature map, zeroes all the incoming and outgoing weights and therefore, higher pruning ratios degrade the network classification performance significantly. Feature map pruning for the CN Nc1FAR1o.small is shown in Fig. 5a|with a circle marked red colored line. The sparsity re- ported here is for Conv2 to Conv6. We do not pruned the first convolution layer as it has only 3 128 (3 3) = 3456 weights. The horizontal solid line shows the baseline MCR of 16.26% whereas the dashed line shows the 1% tolerance bound. Training the network with batch normaliza- tionIoffe & Szegedy(2015) enables us to directly prune a network for a target ratio, instead of taking small sized steps. With a baseline performance of 16.26%, the network performance is very bad at 80% feature map pruning. We can observe that 62% pruning ratio is possible with less than 1% increase in MCR. The CN Nc1FAR10.small is reduced to (128C3 - 83C3)-MP3-(83C3 83C3)- MP3-(166C3- 166C3)-256FC-10Softmax. 
As pruning is only applied in Conv2 to Conv6, therefore the Figure5a pruning ratios are computed only for these layers.\nFigure 8: The pruning plots for 100 class classification problem is reported in (a). It can be observec that this network can be pruned by more than 60% with very small degradation in the network performance. Figure (b) shows the pruning results for the CNNsvhn. It can be observed that more than 70% sparsity can be induced in the network while the network accuracy still remains above 96%.\nresults. We achieve the best pruning results in this case and the final pruned network is reported in. detail in Table[2] Overall we achieve more than 75% pruning ratio in the final pruned network.\nWe further conducted experiments on the CNNc1FAR1o.large and the corresponding plots are. shown in Fig. 7b]The CNNc1FAR10.large is much wider and deeper than the CNNsmall as reported in Table 1. Therefore there are more chances of redundancy and hence more room for. pruning. Further we observe similar trends as C N Nc1FAR1o.small where the kernel pruning can be. induced in higher ratios compared to the feature map pruning. When the kernel pruning is applied. to the feature map pruned network, we can achieve more than 88% sparsity in the Conv2 - Conv7. of the CN Nc1FAR1o.large network. This way we show that our proposed technique has good scal-. ability. These results are in conformity to the resiliency analysis of fixed point deep neural networks Sung et al."}, {"section_index": "6", "section_name": "4.2 CIFAR-100", "section_text": "The CIFAR-100 dataset has 50,000 images classified into 100 fine and 20 coarse labels. The datase has 50,000 training and 10,000 test set images. The hundred class classification problem of CIFAR 100 has 500 images for each class. We construct a validation set for learning rate scheduling during training. The validation set is constructed with 100 samples for each class from the training set. This way we are left with 400 samples per class for training. We train the network with 40,000 image. with data augmentation and batch normalization Ioffe & Szegedy(2015). We obtain a baseline accuracy of 33.65% on the CIFAR-100 test set with a VGG styled network. The network architecture is reported in Table1as CN Nc1 F AR100.\nThe pruning plots for this dataset are provided in Fig.8a It can be observed that around 60% of the. network parameters can be pruned with less than 1% (absolute) increase in the network performance Moreover, pruning in combinations further improve the pruning ratios. Thus the lessons learnt. generalize well to other datasets.\nThe SVHN dataset consists of 32 32 3 cropped images of house numbers [Netzer et al. 2011] and bears similarity with the MNIST handwritten digit recognition dataset [LeCun et al. 1998]. The classification is challenging as more than one digit may appear in sample and the goal is to identify a digit in the center of a patch. The dataset consists of 73,257 digits for training, 26,032 for testing and 53,1131 extra for training. The extra set consists of easy samples and may augment the\n41 5 Baseline MCR 33.65% 40 Baseline + 1.0% 4.8 X Feature map pruning Feature Map Prunninge >- Kernel Pruning 39 Kernel Prunning 4.6 Baseline MCR is 3.5% Feature Map(42%) Followed by Kernel Pruned - -. 
Tolerance MCR is 4.00% Feature Map(50%) Followed by Kernel Pruned 4.4 37 MMCR 4 ay uoe 36 3.8 3.6 34 3.4 33 3.2 3 32 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Prune Ratio Prune Ratio (b) SVHN CNN (a) CIFAR-100 CN N\nFigure 9: This figure shows the per class MCR for the original, feature map, and kernel prunec networks. It can be observed that the per class error does not vary much in the pruned networks This shows that the pruning method is not biased towards a specific class. The feature map pruned network has 63.67% sparsity with MCRTest = 3.84%, MC Rval = 4.16%. The kernel pruned net work has 65.01% sparsity with MC RTest = 3.77%, M CRval = 4.45%. The sparsity are computed for Conv2-Conv6.\ntraining set. We generate a validation set of 6000 samples which consists of 4000 samples from the training set and 2000 samples from the extra [Sermanet et al. 2012]. The network architecture is reported like this: (2 64C3)-MP2- (2 128C3)-MP2-(2 128C3)-512FC-512FC-10Softmax This network is trained with batch normalization and we achieve the baseline MCR of 3.5% on the test set. The corresponding pruning plots are reported in Fig. 8b] We can observe a similar trend where kernels can be pruned by a bigger ratio compared to feature maps. More than 70% pruning ratio can be implemented in the reported network. Thus we show that the lessons learnt generalize well on various datasets.\nIn the literature, network pruning has been studied by several researches Han et al.(2015b a);Yu et al.(2012); Castellano et al.](1997); Collins & Kohli(2014);Stepniewski & Keane(1997); Reed (1993). Collins & Kohli(2014) have proposed a technique where irregular sparsity is used to re-. duce the computational complexity in convolutional and fully connected layers. However they have. not discussed how the sparse representation will affect the computational benefits. The works of. Han et al.(2015b a) introduce fine-grained sparsity in a network by pruning scalar weights. If the absolute magnitude of any weight is less than a scalar threshold, the weight is pruned. This work therefore favors learning with small valued weights and train the network with the L1/L2 norm augmented loss function. Due to pruning at very fine scales, they achieve excellent pruning ratios.. However this kind of pruning results in irregular connectivity patterns and demand complex sparse representation for computational benefits. Convolutions are unrolled to matrix-matrix multiplication in|Chellapilla et al.[(2006) for efficient implementation. The work of Lebedev & Lempitsky[(2015] also induce intra-kernel sparsity in a convolutional layer. Their target is efficient computation by un-. rolling convolutions as matrix-matrix multiplication. Their sparse representation is not also simple because each kernel has an equally sized pruning mask. A recently published work propose sparsity. at a higher granularity and induce channel level sparsity in a CNN network for deep face application Polyak & Wolf(2015). The work ofCastellano et al.(1997); Collins & Kohli (2014); Stepniewski & Keane(1997);Reed (1993) utilize unstructured fine grained sparsity in a neural network. 
Fixed\n0.1 0.1 Non-Pruned Non-Pruned 0.08 Feature Map Pruned 0.08 Feature Map Pruned Kernel Pruned Kernel Pruned 0.06 0.06 aasssesn 0.04 0.04 0.02 0.02 0 0 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 Class (0 to 9 Digits) Class (0 to 9 Digits)\nThere can be a concern that pruning may decrease the accuracy of the original network when it is deployed in the field for run time classification. For a specific problem domain, the test set is used as a proxy for the future unseen data. We argue that to some extent, this question can be answered by comparing the per class error for the original and pruned networks. This way we can see whether the pruned network is biased towards a specific class. To analayze this, we computed the per class error with the CNNsv Hv network as reported in Table[1] The results are reported in Fig.9 It can be observed that the per class error for both validation and test set do not vary significantly We therefore infer that the pruning and retraining process is a promising technique for complexity reduction.\npoint optimization for deep neural networks is employed byAnwar et al.(2015a); Hwang & Sung (2014); Sung et al.|for VLSI based implementations. The reference work of|Anwar et al.(2015b analyzed feature map pruning with intra-kernel strided sparsity. To reduce the size of feature map and kernel matrices, they further imposed a constraint that all the outgoing kernels from a feature map must have the same pruning mask. In this work, we do not impose any such constraint and the pruning granularities are coarser. We argue that this kind of sparsity is useful for VLSI and FFT based implementations. Moreover we show that the best pruning results are obtained when we combine feature map and kernel level pruning."}, {"section_index": "7", "section_name": "6 CONCLUDING REMARKS", "section_text": "In this work, we proposed feature map and kernel pruning for reducing the computational complexity of deep CNN. We have discussed that the cost of sparse representation can be avoided with coarse pruning granularities. We demonstrated a simple and generic algorithm for selecting the best pruning mask from a random pool. We showed that the proposed approach adopts a holistic approach and performs better than the other methods. Further, we adopted the efficient one-shot pruing approach as the iterative retraining consumes much time. We conducted experiments with several benchmarks and networks and showed that the proposed technique has good scalability."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional. neural networks for object recognition. In Acoustics, Speech and Signal Processing (ICASsP) 2015 1EEE International Conference on, pp. 1131-1135. IEEE, 2015a.\nMaxwell D Collins and Pushmeet Kohli. Memory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442, 2014.\nMatthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing. Systems, pp. 3105-3113, 2015.\nJeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior. Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pp. 1223-1231, 2012.\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training b reducing internal covariate shift. 
arXiv preprint arXiv:1502.03167, 2015\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009\nSong Han, Huizi Mao, and William J Dally. A deep neural network compression pipeline: Pruning quantization, huffman encoding. arXiv preprint arXiv:1510.00149, 2015a.\nHao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.\nMichael Mathieu. Mikael Henaff, and Yann LeCun. Fast training of convolutional networks througl ffts. arXiv preprint arXiv:1312.5851, 2013.\nDmytro Mishkin and Jiri Matas. All you need is a good init. arXiv preprint arXiv:1511.06422 2015.\nAdam Polyak and Lior Wolf. Channel-level acceleration of deep face representations. Access, IEEE 3:2163-2175, 2015.\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.\nSlawomir W Stepniewski and Andy J Keane. Pruning backpropagation neural networks using mod-. ern stochastic optimisation techniques. Neural Computing & Applications. 5(2):76-98. 1997.\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied tc document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998\nWonyong Sung, Sungho Shin, and Kyuyeon Hwang. Resiliency of deep neural networks unde quantization.\nDong Yu, Frank Seide, Gang Li, and Li Deng. Exploiting sparseness in deep neural networks for large vocabulary speech recognition. In 2012 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pp. 4409-4412. IEEE. 2012"}]
rJe-Pr9le
[{"section_index": "0", "section_name": "MULTI-TASK LEARNING WITH DEEP MODEL BASED REINFORCEMENT LEARNING", "section_text": "Asier Mujika\nZurich. Switzerlanc"}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "Recently, there has been a lot of success in applying neural networks to reinforcement learning achieving super-human performance in many ATARI games (Mnih et al. (2015); Mnih et al. (2016) Most of these algorithms are based on Q-learning, which is a model free approach to reinforcemen learning. This approaches learn which actions to perform in each situation, but do not learn al explicit model of the environment. Apart from that, learning to play multiple games simultaneousl remains an open problem as these approaches heavily degrade when increasing the number of task to learn.\nIn contrast, we present a model based approach that can learn multiple tasks simultaneously. The idea of learning predictive models has been previously proposed (Schmidhuber (2015); Santana & Hotz (2016)), but all of them focus on learning the predictive models in an unsupervised way We propose using the reward as a means to learn a representation that captures only that which is important for the game. This also allows us to do the training in a fully supervised way. In the experiments, we show that our approach can surpass human performance simultaneously on three different games. In fact, we show that transfer learning occurs and it benefits from learning multiple tasks simultaneously.\nIn recent years, approaches that use Deep Q-learning have achieved great success, making an important breakthrough when Mnih et al. (2015) presented a neural network architecture that was able to achieve human performance on many different ATARI games, using just the pixels in the screen as input."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In recent years, model-free methods that use deep learning have achieved greai. success in many different reinforcement learning environments. Most successful. approaches focus on solving a single task, while multi-task reinforcement learn-. ing remains an open problem. In this paper, we present a model based approach. to deep reinforcement learning which we use to solve different tasks simultane. ously. We show that our approach not only does not degrade but actually benefits. from learning multiple tasks. For our model, we also present a new kind of recur-. rent neural network inspired by residual networks that decouples memory from computation allowing to model complex environments that do not require lots of. memory. The code will be released before ICLR 2017..\nIn this paper, we first discuss why Q-learning fails to learn multiple tasks and what are its draw-. backs. Then, we present our approach, Predictive Reinforcement Learning, as an alternative to. overcome those weaknesses. In order to implement our model, we present a recurrent neural net- work architecture based on residual nets that is specially well suited for our task. Finally, we discuss our experimental results on several ATARI games..\nQ(s,a) = Es + y maxQ(s', a)|s, a a\nFor the rest of this subsection, we assume the reader is already familiar with Deep Q-learning and. we discuss its main problems. 
As the true value of the Q-function is not known, the idea of Deep Q-learning is to iteratively approximate this function using a neural network (we do not explain the process here, but Mnih et al. (2015) give a good explanation of how this is done), which introduces several problems.

First, the Q-values depend on the strategy the network is playing. Thus, the target output for the network given a state-action pair is not constant, since it changes as the network learns. This means that apart from learning a strategy, the network also needs to remember which strategy it is playing. This is one of the main problems when learning multiple tasks, as the network needs to remember how it is acting on each of the different tasks. Rusu et al. (2015) and Parisotto et al. (2015) have managed to successfully learn multiple tasks using Q-learning. Both approaches follow a similar idea: an expert network learns to play a single game, while a multi-tasking network learns to copy the behavior of an expert for each different game. This means that the multi-tasking network does not iteratively approximate the Q-function; it just learns to copy the function that the single-task expert has approximated. That is why their approach works: they manage to avoid the problem of simultaneously approximating all the Q-functions, as this is done by each single-task expert.

Apart from that, the network has to change the strategy very slightly at each update, as drastically changing the strategy would change the Q-values a lot and cause the approximation process to diverge or slow down. This forces the model to interact many times with the environment in order to find good strategies. This is not problematic in simulated environments like ATARI games, where the simulation can easily be sped up using more computing power. Still, in real-world environments, like for example robotics, this is not the case and data efficiency can be an important issue.

In order to avoid the drawbacks of Deep Q-learning, we present Predictive Reinforcement Learning (PRL). In our approach, we separate the understanding of the environment from the strategy. This has the advantage of being able to learn from different strategies simultaneously while also being able to play strategies that are completely different from the ones that it learns from. We will also argue that this approach makes generalization easier. But before we present it, we need to define what we want to solve."}, {"section_index": "3", "section_name": "3.1 PREDICTION PROBLEM", "section_text": "The problem we want to solve is the following: given the current state of the environment and the actions we will make in the future, how is our score going to change through time?

To formalize this problem we introduce the following notation:

a_i: The observation of the environment at time i. In the case of ATARI games, this corresponds to the pixels of the screen.
r_i: The total accumulated reward at time i. In the case of ATARI games, this corresponds to the in-game score.
c_i: The control that was performed at time i. In the case of ATARI games, this corresponds to the inputs of the ATARI controller: up, right, shoot, etc.
Figure 1: We chose i = 0 and k = 1. We assume a_0 to be the pixels in the current image (the left one) and c_1 to be the jump action. Then, given that input, we want to predict r_1 - r_0, which is 1 because we earn a reward from time 0 to time 1.

Then, we want to solve the following problem: for a given time i and a positive integer k, let the input to our model be an observation a_i and a set of future controls c_{i+1}, ..., c_{i+k}. Then, we want to predict the change in score for the next k time steps, i.e. (r_{i+1} - r_i), ..., (r_{i+k} - r_i). Figure 1 illustrates this with an example.

Observe that, unlike in Q-learning, our predictions do not depend on the strategy being played. The outputs only depend on the environment we are trying to predict. So, the output for a given state-action pair is always the same or, in the case of non-deterministic environments, it comes from the same distribution.

We have defined what we want to solve, but we still need to specify how to implement a model that will do it. We will use neural networks for this, and we will divide the model into three different networks, as follows:

Perception: This network reads a state a_i and converts it to a lower-dimensional vector h_0 that is used by the Prediction.
Prediction: For each j in {1, ..., k}, this network reads the vector h_{j-1} and the corresponding control c_{i+j} and generates a vector h_j that will be used in the next steps of the Prediction and Valuation. Observe that this is actually a recurrent neural network.
Valuation: For each j in {1, ..., k}, this network reads the current vector h_j of the Prediction and predicts the difference in score between the initial time and the current one, i.e. r_{i+j} - r_i.

Figure 2 illustrates the model. Observe that what we actually want to solve is a supervised learning problem. Thus, the whole model can be jointly trained with simple backpropagation. We will now proceed to explain each of the components in more detail.

Figure 2: Diagram of our predictive model. (a) The recurrent model. (b) The same model unfolded in time."}, {"section_index": "4", "section_name": "3.2.1 PERCEPTION", "section_text": "The Perception has to be tailored to the kind of observations the environment returns. For now, we will focus only on vision-based Perception. As we said before, the idea of this network is to convert the high-dimensional input to a low-dimensional vector that contains only the necessary information for predicting the score. In the case of video games, it is easy to see that such a vector exists. The input consists of thousands of pixels, but all we care about is the position of a few key objects, like, for example, the main character or the enemies. This information can easily be encoded using very few neurons. In our experiments, we convert an input consisting of 28K pixels into a vector of just 100 real values.

In order to do this, we use deep convolutional networks. These networks have recently achieved super-human performance in very complex image recognition tasks (He et al., 2015). In fact, it has been observed that the upper layers in these models learn lower-dimensional abstract representations of the input (Yosinski et al. (2015), Karpathy & Li (2015)).
Given this, it seems reasonable to believe that if we use any of the successful architectures for vision, our model will be able to learn a useful representation that can be used by the Prediction."}, {"section_index": "5", "section_name": "3.2.2 PREDICTION", "section_text": "For the Prediction network, we present a new kind of recurrent network based on residual neural networks (He et al., 2015), which is especially well suited for our task and achieved better results than an LSTM (Hochreiter & Schmidhuber, 1997) with a similar number of parameters in our initial tests.

Residual Recurrent Neural Network (RRNN). We define the RRNN in Figure 3 using the following notation: LN is the layer normalization function (Ba et al., 2016), which normalizes the activations to have a mean of 0 and a standard deviation of 1; "·" is the concatenation of two vectors; f can be any parameterizable and differentiable function, e.g., a multilayer perceptron.

r_j = f(LN(h_{j-1}) · x_j)    (2)
h_j = h_{j-1} + r_j    (3)

Figure 3: The equations of the RRNN and a diagram of the network.

As in residual networks, instead of calculating what the new state of the network should be, we calculate how it should change (r_j). As shown by He et al. (2015), this prevents vanishing gradients or optimization difficulties. LN outputs a vector with mean 0 and standard deviation 1. As we
Because of this, the games we have tried have been carefully selected such that they do not need very sophisticated and long-term strategies.\nThe bound is not tight but it is sufficient for our p poses and straightforward to prove.\nn 1 > = n j=1 n 1 x3 n j=1 n x 2 j=1 Vn xi\nn 1 (xj - n j=1 n 1 1 j n j=1 n j=1 Vn xi\nThe idea behind this network is mimicking how a video game's logic works. A game has some. variables (like positions or speeds of different objects) that are slightly modified at each step. Our intuition is that the network can learn a representation of these variables (h), while f learns how they are transformed at each frame. Apart from that, this model decouples memory from computation allowing to increase the complexity of f without having to increase the number of neurons in h. This is specially useful as the number of real valued neurons needed to represent the state of a game. is quite small. Still, the function to move from one frame to the next can be quite complex, as it has to model all the interactions between the objects such as collisions, movements, etc..\nEven if this method looks like it may be just tailored for video games, it should work equally well for real world environments. After all, physics simulations that model the real world work in the same way, with some variables that represent the current state of the system and some equations that define how that system evolves over time.\nTable 1: f function of the Prediction network. We apply the non-linearity be- fore the linear layer, this way we avoid always adding positive values. The ReLU is not applied to the control in- puts.\nStill, our approach learns a predictive model that is independent of any strategy and this car be beneficial in two ways. First, the model can play a strategy that is completely different to the ones it learns from. Apart from that, learning a predictive model is a very hard task to over-fit Consider a game with 10 possible control inputs and a training set where we consider the next 25 time steps. Then, there are 1025 possible control sequences. This means that every sequence we train on is unique and this forces the model to generalize. Unfortunately, there is also a downside. Our approach is not able to learn from good strategies because we test our model with many different ones in order to pick the best. Some of these strategies will be quite bad and thus, the model needs to learn what makes the difference between a good and a bad set of moves."}, {"section_index": "8", "section_name": "4.1 ENVIRONMENT", "section_text": "Our experiments have been performed on a computer with a GeForce GTX 980 GPU and an Inte Xeon E5-2630 CPU. For the neural network, we have used the Torch7 framework and for the ATAR simulations, we have used Alewrap, which is a Lua wrapper for the Arcade Learning Environmer (Bellemare et al., 2015)."}, {"section_index": "9", "section_name": "4.2 MODEL", "section_text": "For the Perception, we used a network inspired in deep. residual networks (He et al., 2015). Figure 4 shows the architecture. The reason for this, is that even if the Per- ception is relatively shallow, when unfolding the Predic-. tion network over time, the depth of the resulting model. is over 50 layers deep\nFor the Prediction. we use a Residual Recurrent Neu. ral Network. Table 1 describes the network used for the f function. Finally. Table 2 illustrates the Valuation net-. 
work."}, {"section_index": "10", "section_name": "4.3 SETUP", "section_text": "We preprocess the images following the same tech-. nique of Mnih et al. (2015). We take the maximum from. the last 2 frames to get a single 84 84 black and white. image for the current observation. The input to the Per-. ception is a 4 84 84 tensor containing the last 4 obser- vations. This is necessary to be able to use a feed-forward network for the Perception. If we observed a single frame.\nTable 2: Valuation network. We apply. Layer Normalization to bound the incoming values to the network.\nStride: 2 7x7 conv, 16 Output size: 40x40 3x3 conv, 16 3x3 conv, 32 Output size: 20x20 max pooling /2 3x3 conv, 32 3x3 conv, 64 Output size: 10x10 max pooling /2 3x3 conv, 64 3x3 conv, 128 Output size: 5x5 max pooling /2 Output size: 512 fc 3200 Output size: 100 fc 512\nFigure 4: Each layer is followed by a Batch Normalization (Ioffe & Szegedy 2015) and a Rectifier Linear Unit..\nit would not be possible to infer the speed and direction of a moving object. Not doing this woul force us to use a recurrent network on the Perception, making the training of the whole model mucl slower.\nIn order to train the Prediction, we unfold the network over time (25 time steps) and treat the model as a feed-forward network with shared weights. This corresponds to approximately 1.7 seconds"}, {"section_index": "11", "section_name": "4.4 GENERATING DATA", "section_text": "We start with k = 25 and increase it every few iterations up to k = 200. For the full details check Appendix A. In order to accelerate training, we run several games in parallel. This allows to run the Perception, Prediction and Valuation networks together with the ATARI simulation in parallel which heavily speeds up the generation of data without any drawback.."}, {"section_index": "12", "section_name": "4.5 TRAINING", "section_text": "In the beginning, we generate 400K training cases for each of the games by playing randomly which gives us a total of 1.2M training cases. Then, for the subsequent iterations, we generate 200K additional training cases per game (600K in total) and train again on the whole dataset. That is, at first we have 1.2M training cases, afterwards 1.8M, then 2.4M and so on.\nFor our Valuation, network we output two values. First, the probability that our score is highe. than in the initial time step. Second, we output the probability of dying. This is trained using cross entropy loss.\nTo train the model, we use an off-line learning approach for simplicity. During training we al. ternate between two steps. First, generate and store data and then, train the model off-line on that data.\na;: A 4 84 84 tensor, containing 4 consecutive black and white frames of size 84 84 each. C: For j E {i + 1, ..., i + 25}, each c; is a 3 dimensional vector that encodes the control action performed at time j. The first dimension corresponds to the shoot action, the second to horizontal actions and the third to vertical actions. For example, [1, 1, 0] represent pressing shoot and left. R: For j E {i + 1, ..., i +25}, we store a 2 dimensional binary vector rj. Tj1 is 1 if we die between time i and j. rg2 is 1 if we have not lost a life and we also earn a point between time i and j.\nInitially, we have an untrained model, so at each time step, we pick an action uniformly at random and perform it. For the next iterations, we pick a k and do the following to play the game:.\n1. Run the Perception network on the last 4 frames to obtain the initial vector.. 2. 
"}, {"section_index": "12", "section_name": "4.5 TRAINING", "section_text": "In the beginning, we generate 400K training cases for each of the games by playing randomly, which gives us a total of 1.2M training cases. Then, for the subsequent iterations, we generate 200K additional training cases per game (600K in total) and train again on the whole dataset. That is, at first we have 1.2M training cases, afterwards 1.8M, then 2.4M, and so on.

For our Valuation network, we output two values: first, the probability that our score is higher than in the initial time step, and second, the probability of dying. This is trained using a cross-entropy loss.

The training is done in a supervised way, as depicted in Figure 2b. a_i and C are given as input to the network and R as target. We minimize the cross-entropy loss using mini-batch gradient descent. For the full details on the learning schedule, check Appendix A.

In order to accelerate the process, instead of training a new network in each iteration, we keep training the model from the previous iteration. This has the effect that we would train much more on the initial training cases, while the most recent ones would have an ever smaller effect as the training set grows. To avoid this, we assign a weight to each iteration and sample according to these weights during training. Every three iterations, we multiply by three the weights we assign to them. By doing this, we manage to focus on recent training cases while still preserving the whole training set.

Observe that we never tell our network which game it is playing; it learns to infer this from the observation a_i. Also, at each iteration, we add cases that are generated using a different neural network, so our training set contains instances generated using many different strategies."}, {"section_index": "13", "section_name": "4.6 RESULTS", "section_text": "We have trained a model on the three games for a total of 19 iterations, which corresponds to 4M time steps per game (74 hours of play at 60 Hz). Each iteration takes around two hours on our hardware. We have also trained an individual model for each game for 4M time steps. In the individual models, we reduced the length of the training such that the number of parameter updates per game is the same as in the multi-task case. Unless some kind of transfer learning occurs, one would expect some degradation in performance in the multi-task model. Figure 5 shows that not only is there no degradation in Pong and Demon Attack, but that there is also a considerable improvement in Breakout. This confirms our initial belief that our approach is especially well suited for multi-task learning.

Table 3: After one iteration, Predictive Reinforcement Learning (PRL) has only observed random play, but it can play much better. This means that it is able to generalize well to many situations it has not observed during training.

Table 4: The best iteration of PRL is able to surpass human performance in all three tasks. Still, state-of-the-art model-free approaches work better.

                          Pong    Breakout    Demon Attack
Human score                9.3        31.8            3401
PRL Best (Multi-task)     14.6         316            6872
PRL Best (Single-task)    18.2         186            6100
A3C (Mnih et al., 2016)   18.9       766.8          115202

We have also argued that our model can potentially play a very different strategy from the one it has observed. Table 3 shows that this is actually the case: a model that has learned only from random play is able to play at least 7 times better.

Demon Attack's plot in Figure 5c shows a potential problem we mentioned earlier, which also happens in the other two games to a lesser extent. Once the strategy is good enough, the agent dies very rarely. This causes the model to "forget" which actions lead to a death and makes the score oscillate.
Figure 5: Comparison between an agent that learns the three games simultaneously (continuous blue), one that learns each game individually (dashed red) and the score of human testers (horizontal green) as reported by Mnih et al. (2015). (a) Breakout. (b) Pong. (c) Demon Attack. (Plots omitted; each panel shows score versus iteration.)"}, {"section_index": "14", "section_name": "5 DISCUSSION", "section_text": "We have presented a novel model-based approach to deep reinforcement learning. Despite not achieving state-of-the-art results, this paper opens new lines of research by showing that a model-based approach can work in environments as complex as ATARI. We have also shown that it can beat human performance in three different tasks simultaneously and that it can benefit from learning multiple tasks.

Still, the model has two areas that can be addressed in future work: long-term dependencies and the instability during training. The first can potentially be solved by combining our approach with Q-learning based techniques. For the instability, balancing the training set or oversampling hard training cases could alleviate the problem.

Finally, we have also presented a new kind of recurrent network which can be very useful for problems where little memory and a lot of computation are needed."}, {"section_index": "15", "section_name": "ACKNOWLEDGMENTS", "section_text": "I thank Angelika Steger and Florian Meier for their hardware support in the final experiments and comments on previous versions of the paper."}, {"section_index": "16", "section_name": "REFERENCES", "section_text": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv, 2016. URL http://arxiv.org/abs/1607.06450.

Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. In IJCAI International Joint Conference on Artificial Intelligence, volume 2015-January, pp. 4148-4152, 2015. ISBN 9781577357384. doi: 10.1613/jair.3912.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv, 2015. URL http://arxiv.org/abs/1512.03385.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv, 2015. URL http://arxiv.org/abs/1502.03167.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. ISSN 0028-0836. doi: 10.1038/nature14236. URL http://dx.doi.org/10.1038/nature14236.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv, 2016.
URL http://arxiv.org/abs/1602.01783.

Andrei A. Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv, 2015. URL http://arxiv.org/abs/1511.06295.

Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 07-12-June-2015, pp. 3128-3137, 2015. ISBN 9781467369640. doi: 10.1109/CVPR.2015.7298932."}, {"section_index": "17", "section_name": "Appendices", "section_text": "Due to the huge cost involved in training the agents, we have not exhaustively searched over all the possible hyperparameters. Still, we present them here for reproducibility of the results.

Number of strategies: As explained in Section 4.4, we need to pick a number k of strategies we consider at each step. Initially, we pick k = 25, raise it to k = 100 at iteration 4 and finally, at iteration 7, we set it to k = 200 for the remainder of the experiment.

Confidence interval: We also need to pick how safely we want to play, i.e., where we set the threshold for the set of actions we consider. For simplicity, in Breakout and Pong, we set it to 0 and only pick the safest option. In Demon Attack, initially we only consider actions with a survival probability higher than 0.2 for three iterations. After that, we reduce it to 0.1 for another three iterations. Then, we set it to 0.005 until iteration 15 and finally reduce it to 0.001 for the rest of the iterations.

Learning schedule: For training we use the Adam (Kingma & Ba, 2014) optimizer with a batch size of 100. We use a learning rate of 10^-4 for the first 3 iterations, then reduce it to 5 × 10^-5 for the next 3 iterations and finally set it to 10^-5 for the rest of the experiment. We make a total of 4.8 × 10^4 parameter updates per iteration (1.6 × 10^4 in the case of single-task networks) and halve the learning rate after 2.4 × 10^4 updates for the remainder of the iteration. We add a weight decay of 0.0001 and clamp the gradients element-wise to the [-1, 1] range.

Apart from that, at the beginning of each episode, we pick an n ∈ [0, 30] uniformly at random and do not perform any action for the initial n time steps of that episode. This idea was also used by Mnih et al. (2015) to avoid any possible over-fitting. In addition, we also press shoot to start a new episode every time we die in Breakout, since in the first iterations the model learns that the safest option is not to start a new episode. This causes the agent to waste a lot of time without starting a new episode."}]
ryrGawqex
[{"section_index": "0", "section_name": "DEEP LEARNING WITH DYNAMIC COMPUTATION GRAPHS", "section_text": "Moshe Looks, Marcello Herreshoff, DeLesley Hutchins & Peter Norvig Google Inc\n{madscience, marcelloh, delesley onorvig}@google.com\nNeural networks that compute over graph structures are a natural fit for problems. in a variety of domains, including natural language (parse trees) and cheminfor-. matics (molecular graphs). However, since the computation graph has a different shape and size for every input, such networks do not directly support batched training or inference. They are also difficult to implement in popular deep learn-. ing libraries, which are based on static data-flow graphs. We introduce a technique called dynamic batching, which not only batches together operations between dif-. ferent input graphs of dissimilar shape, but also between different nodes within a. single input graph. The technique allows us to create static graphs, using popu-. lar libraries, that emulate dynamic computation graphs of arbitrary shape and size. We further present a high-level library' lof compositional blocks that simplifies the. creation of dynamic graph models. Using the library, we demonstrate concise and batch-wise parallel implementations for a variety of models from the literature."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "However, there is also a long history of neural networks that compute over structures such as parse trees (Pollack||1990), logical terms (Goller & Kuchler!|1996), and molecular graphs (Bianucci et al. 2000). In these models, each distinct input has a different computation graph structure; we say thai. they use dynamic computation graphs (DCGs). Such models continue to be developed and have recently yielded superior results on problems such as sentiment classification and semantic related ness (Tai et al.]2015] Li et al.]2015), question-answering (Andreas et al.]2016), and screening o1 chemical compounds (Kearnes et al.|2016). Despite these successes, most practitioners avoid DCGs for implementation reasons. For example, Bowman et al.(2016) assert that \"because TreeRNNs use. a different model structure for each sentence ... efficient batching is impossible in standard imple. mentations\"'. Moreover, even if efficient batching were possible in principle, current libraries such. as TensorFlow (Abadi et al.||2016) assume that the data-flow graph is static (i.e. is the same for each input) and impose a significant cost to graph construction, which makes it infeasible to build a new. graph for each input."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Training deep neural networks directly on minimally pre-processed corpora has led to many recent performance breakthroughs, mainly on problems in domains such as vision (Krizhevsky et al.]2012) and natural language (Bahdanau et al.]2015) where the inputs can be cast as dense n-dimensional arrays (henceforth tensors), or sequences of tensors. These successes exploit the effectiveness of training via gradient descent on mini-batches of tens to hundreds of inputs, implemented using the parallel SIMD capabilities of modern GPUs (Oh & Jung2004) and multi-core CPUs (Vanhoucke et al.2011). 
This, in turn, has led to a proliferation of libraries making it easier to train and deploy such models, by expressing them in terms of differentiable data-flow graphs over tensors (Abadi et al., 2016; Theano Development Team, 2016; Collobert et al., 2011).

Section 2 introduces dynamic batching, which enables efficient batching for training and inference with DCGs. Dynamic batching runs DCGs efficiently with existing libraries that only support static data-flow graphs; e.g. the same static graph can run a TreeRNN over any parse tree. We present empirical results for our implementation in TensorFlow. Section 3 presents a combinator library for concisely implementing models with DCGs using dynamic batching. Section 4 concludes.

In deep learning libraries like TensorFlow, computations are manually batched. The computation is expressed as a static graph of mathematical operations, such as y = σ(x · w + c), which are polymorphic in batch size; an input x of dimensions (b, n) will yield an output of dimensions (b, m), where b is the batch size. With DCGs, the graph of operations is not static, but is assumed to be different for every input, so multiple inputs no longer naturally batch together in the same way. The dynamic batching algorithm overcomes this difficulty. Given a set of computation graphs as input, each of which has a different size and topology, it will rewrite the graphs by batching together all instances of the same operation that occur at the same depth in the graph. The rewriting process inserts additional concat and gather operations to move data between the batched operations; the indices to gather encode the topology of the original input graphs.

We distinguish between individual operations appearing as nodes in the underlying data-flow graph, such as addition or matrix-multiply, and small sub-graphs that conceptually act as functions over tensors, such as a feed-forward layer or LSTM cell. We refer to the former as "ops", and to the latter as "operations." Operations (i.e. sub-graphs) form the building-blocks from which neural networks with DCGs are composed; dynamic batching schedules operations, not ops. Our algorithm requires that all operations which might be used be specified in advance, and it enumerates them for scheduling purposes. For example, a binary TreeRNN for NLP parse trees has two operations: embedding table lookups for words at the leaves of the tree, and RNN cells for the non-terminals.

The inputs and outputs of operations have tensor types. Each input or output may have a different type, but all types must be fixed and fully specified in advance. A tensor type consists of a shape, x_1, ..., x_n, together with a scalar data type (e.g. float32). The inputs to an operation shall be tensors of dimension (b, x_1, ..., x_n), where b is the batch size and x_1, ..., x_n is the shape of the corresponding input tensor type. The outputs must all be tensors of dimension (b, y_1, ..., y_m), where y_1, ..., y_m is the shape of the corresponding output tensor type. Operations must be polymorphic with respect to the batch size, because the batch size will change each time the operation is invoked, depending on the topologies of the input graphs. However, their tensor types are fixed, so that it is possible to assign a known tensor type to each edge in the input computation graph.
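To illustrate batch-size polymorphism concretely, here is a minimal NumPy sketch (our own illustration, not library code): the same operation serves any batch size b unchanged.

import numpy as np

def feed_forward_op(x, w, c):
    # y = sigma(x . w + c); x has shape (b, n), w has shape (n, m),
    # c has shape (m,), and the output has shape (b, m) for any b.
    return 1.0 / (1.0 + np.exp(-(x @ w + c)))

w, c = np.random.randn(4, 3), np.zeros(3)
y_small = feed_forward_op(np.random.randn(1, 4), w, c)    # shape (1, 3)
y_large = feed_forward_op(np.random.randn(256, 4), w, c)  # shape (256, 3)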
In our TensorFlow implementation, each dynamic operation is instantiated once in the static data-flow graph. The inputs to each operation are tf.gather ops, and the outputs are fed into tf.concat ops, as described above. These TensorFlow ops are then placed within a tf.while_loop. Each iteration of the loop will evaluate all of the operations at a particular depth. The loop maintains state variables for each tensor type t, and feeds the output of concat for tensor type t and iteration d into the input of the gathers at tensor type t and iteration d + 1. The indices for gather at iteration d are drawn from the edge labels i for depth d in the schedule. The initial values for the state variables at iteration/depth 0 are the constants in the input graph.

The dynamic batching algorithm takes a directed acyclic computation graph as input. A batch of multiple input graphs can be treated as a single disconnected graph. Source nodes are constant tensors, and non-source nodes are operations. Edges connect one of the outputs of a node to one of the inputs of another node. Scheduling is performed using a greedy algorithm:

- Assign a depth to each node in the graph. Nodes with no dependencies (constants) are assigned depth zero. Nodes with only dependencies of depth zero are assigned depth one, nodes whose dependencies have a maximum depth of one get assigned depth two, etc.
- Insert pass-through (identity) operations so that an operation at depth d + 1 only refers to results at depth d.
- Batch together all nodes invoking the same operation at the same depth into a single node.
- Concatenate all outputs which have the same depth and tensor type. The order of concatenation corresponds to the order in which the dynamic batching operations were enumerated.
- Assign a label (d, t, i) to each edge in the original graph, where d is the depth, t is the tensor type, and i is the integer index for that edge into the (concatenated) outputs for d, t. The schedule for the graph consists of the indices i for all edges, which are grouped together by depth and operation.

Figure 1: The static data-flow graph created by dynamic batching for a binary TreeRNN over parse trees (left), and input graph corresponding to the parse tree ((word1, word3), word5) (right).

Dynamic batching allows us to construct a static TensorFlow graph that contains a single instance of each operation, yet can emulate input graphs of arbitrary size and topology where operations may appear an arbitrary number of times. The TensorFlow concat, gather, and while_loop ops are all differentiable, so gradient calculations and back-propagation do not require any additional code.

For example, a binary TreeRNN as described above yields a TensorFlow data-flow graph with a tf.while_loop whose body is shown on the left of Figure 1. Here each gather has an additional input (the indices for the given op at the given depth) which picks out which elements the operations are to be called with. The long downward arrows are the pass-throughs. The algorithm consumes a tree such as the one shown on the right of Figure 1 and turns it into inputs for the gather operations at each depth (here depth is the loop counter for the tf.while_loop).
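As a reading aid, here is a minimal Python sketch of the depth-assignment and batching steps of the greedy scheduling pass. It is our own illustration under assumed node objects with .inputs and .op attributes, and it omits pass-through insertion and gather-index assignment:

from collections import defaultdict

def greedy_schedule(nodes):
    depth = {}

    def node_depth(node):
        # Constants (no dependencies) get depth 0; otherwise one more
        # than the deepest input.
        if node not in depth:
            ins = node.inputs
            depth[node] = 0 if not ins else 1 + max(node_depth(i) for i in ins)
        return depth[node]

    # Batch together all nodes invoking the same operation at the same depth.
    batches = defaultdict(list)
    for node in nodes:
        batches[(node_depth(node), node.op)].append(node)
    return batches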
"}, {"section_index": "3", "section_name": "2.1 EXPERIMENTAL RESULTS", "section_text": "We have implemented dynamic batching as part of a new library, TensorFlow Fold, and designed a synthetic speed benchmark to compare it with manual batching in native TensorFlow. The benchmark uses the same underlying kernels and execution engine in both cases. Native TensorFlow cannot batch together trees of different shapes so, for testing purposes, we use a batch of random binary trees, all of which have the same shape. These test results thus represent a best-case scenario, in which all operations can be batched together perfectly. For the manual batching tests, we construct a static data-flow graph of operations corresponding to the shape of the tree. For the dynamic batching tests, we traverse each tree to construct a schedule, as described above.

The leaves of the tree are lookups into an embedding table, while the non-terminals implement a variant of the Tree-LSTM (Tai et al., 2015) equations. The tree size is 128, with a state size of 1024 for the LSTM. The CPU tests were run on a Dell z620 workstation with dual 8-core Intel Xeon processors (32 hardware threads), and the GPU tests were done using a consumer Nvidia GeForce GTX-1080 card. We compare manual batching, dynamic batching where all trees have the same shape, and dynamic batching where each tree has a different shape (the column marked "full dynamic"). There is no measurable penalty for dealing with trees of different shapes.

The test results shown in Table 1 emphasize the importance of batching, especially on GPUs. TensorFlow will launch a GPU kernel for every node in the tree, so there is a fixed overhead, proportional to the size of the tree, that dominates execution for small batch sizes. TensorFlow does not begin to saturate the GPU until relatively large batch sizes (1024 or higher). The difference in speed between fully-batched and unbatched is over 160x.

Dynamic batching has less kernel invocation overhead because the data-flow graph is smaller. Dynamic batching instantiates each operation only once, and invokes it once for each depth, so the number of kernel invocations is log(n), rather than n, where n is tree size. Dynamic batching thus achieves substantial speedups even at batch size 1, because it batches operations at the same depth within a single tree.

Table 1: Inference timing benchmark; times are wall-clock averages in seconds.

batch-size   manual (batch / tree)   dynamic (batch / tree)   full dynamic (batch / tree)   cost ratio   speedup ratio
(CPU)
1024         14.62 / 0.014           18.68 / 0.018            18.37 / 0.017                 1.27         28.86
512           7.54 / 0.014            9.84 / 0.019             9.57 / 0.018                 1.30         27.68
256           4.14 / 0.016            5.22 / 0.020             5.25 / 0.020                 1.26         25.23
128           2.48 / 0.019            2.95 / 0.023             3.08 / 0.024                 1.18         21.47
64            1.64 / 0.025            1.76 / 0.027             1.78 / 0.027                 1.06         18.55
32            1.27 / 0.039            1.05 / 0.032             1.10 / 0.034                 0.82         14.94
1             0.52 / 0.517            0.26 / 0.258             0.26 / 0.262                 0.49          1.97
(GPU)
1024         0.978 / 0.0009          1.590 / 0.0015           1.617 / 0.0015                1.62        101.79
512          0.530 / 0.0010          0.715 / 0.0013           0.721 / 0.0014                1.34        114.15
256          0.312 / 0.0012          0.323 / 0.0012           0.340 / 0.0013                1.03        120.86
128          0.236 / 0.0018          0.164 / 0.0012           0.178 / 0.0013                0.69        115.05
64           0.193 / 0.0030          0.093 / 0.0014           0.106 / 0.0016                0.48         96.40
32           0.153 / 0.0047          0.061 / 0.0019           0.074 / 0.0023                0.40         68.79
1            0.161 / 0.1608          0.038 / 0.0376           0.036 / 0.0359                0.23          4.47

However, the extra concat and gather ops that dynamic batching inserts do have a cost. The "cost ratio" column above shows the ratio between dynamic and manual batching, in the case where all trees in the batch have the same shape. The cost is only 20% for inference on GPUs with batch-size 1, but rises to 60% for training with backpropagation. The cost is mainly visible at large batch sizes, because it is balanced by the benefit of within-tree batching at smaller sizes.
Even with the cost, dynamic batching yields a 120x speedup over using a batch size of 1 on GPU, and 28x on CPU. The "speedup ratio" column above shows the ratio between the per-tree time for dynamic batching on random shapes ("full dynamic") versus manual batching with a batch size of 1. Note that using a batch size of 1 is not actually feasible for TensorFlow, because TensorFlow has a large graph construction overhead, which is not included in these measurements, but it may apply to other libraries that lack such overhead."}, {"section_index": "4", "section_name": "3 A COMBINATOR LIBRARY FOR NEURAL NETWORKS", "section_text": "In addition to dynamic batching, the TensorFlow Fold library provides a set of combinators that simplify the task of constructing neural networks for DCGs. Our goal here is to show how dynamic batching enables implementing deep learning models (which are growing ever more complex) at a higher level of abstraction than manual batching. This in turn facilitates a more rapid feedback loop for trying out novel model variants, and thus obtaining superior results.

The design of the library was inspired by functional programming techniques such as parser combinators (Hutton & Meijer, 1996) and arrows (Hughes, 2000). In a combinator library, computations are structured compositionally, by plugging together simpler computations in various ways. The basic unit of computation in TensorFlow Fold is a block, essentially a function from input to output. In a typical DCG model, the input is a graph or tree of some kind, and the output is a vector, which can be attached to a loss for training.

For example, consider a model where the inputs are sequences of words, of varying lengths, and the output is a sentence vector. Our library provides several different ways of handling sequences. Given a simpler block f that operates on elements of the sequence, or g on pairs of elements, we define the following combinators:

- Map(f): yields [f(x_1), f(x_2), ..., f(x_n)]. Applies f to each element of the sequence, e.g. embedding each of the words of a sentence into R^N.
- Fold(g, z): yields g(...g(g(z, x_1), x_2), ..., x_n). Applies g sequentially in a leftward chain, e.g. running an RNN over a sequence. By default z = 0.
- Reduce(g): yields g(Reduce([x_1, ..., x_{⌈n/2⌉}]), Reduce([x_{⌈n/2⌉+1}, ..., x_n])). Applies g in a balanced tree (which minimizes computation depth and provides more opportunities for batching than a chain), e.g. max or sum-pooling over the elements.

Note that it is not necessary to pad or truncate sequences to the same length; dynamic batching handles sequences of differing lengths.

Blocks are statically typed; each block has an input type and an output type. Types are inferred where possible, but must be explicitly specified in some cases. A type is one of the following:

- Input denotes objects in the host language (Python), such as trees and dictionaries.
- Tensor_{dtype,shape} denotes tensors of a particular dtype and shape.
- Tuple(t_1, ..., t_n) denotes a tuple of values of types t_1, ..., t_n.
- Sequence(t) denotes a sequence of elements of type t, of any length.
- Void is the unit type.

For example, Sequence(Sequence(Tuple(Tensor_{float32,[]}, Tensor_{int8,[3,4]}))) denotes jagged arrays whose elements are pairs (float32, int8^{3x4}).
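To make the sequence-combinator semantics concrete, here is a plain-Python sketch of what Map, Fold, and Reduce compute. This is our own illustration of the semantics, not the Fold API itself:

from functools import reduce as foldl

def map_block(f, xs):
    # Map(f): [f(x1), ..., f(xn)]
    return [f(x) for x in xs]

def fold_block(g, xs, z=0):
    # Fold(g, z): g(...g(g(z, x1), x2)..., xn), a leftward chain.
    return foldl(g, xs, z)

def reduce_block(g, xs):
    # Reduce(g): combine in a balanced tree to minimize depth.
    if len(xs) == 1:
        return xs[0]
    mid = (len(xs) + 1) // 2
    return g(reduce_block(g, xs[:mid]), reduce_block(g, xs[mid:]))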
Blocks are composed hierarchically; a block expression is always a tree. The non-terminals in the tree are combinators such as Map and Fold, which take simpler blocks as arguments. The leaves of the tree are atomic blocks, which include the following:

- Scalar: Input -> Tensor. Convert a Python scalar to a tensor.
- Tensor: Input -> Tensor. Convert a NumPy array to a tensor.
- Function(h): [Tensor or Tuple(Tensor, ...)] -> [Tensor or Tuple(Tensor, ...)]. Defines an operation h (see Section 2) over tensors. Operations with multiple inputs and outputs use tuples of tensors.
- InputTransform(h): Input -> Input. Applies a user-defined Python function h to pre-process the input.

In addition to the sequence combinators described above, important combinators in the library include the following:

- b1 >> b2: Function composition; the output of b1 is fed to the input of b2.
- Record({l_1: b_1, ..., l_n: b_n}): Input -> Tuple(t_1, ..., t_n). Takes a Python dictionary or tuple as input, and applies each block b_i to the field labeled l_i, to yield an object of type t_i. Returns a tuple of the results for all fields.
- OneOf(b_1, ..., b_n): Input -> t. Conditionally dispatches on its input to one of the blocks b_1, ..., b_n.
- Optional(b): Input -> t. Applies b if the input is not None, otherwise returns zeros. A special case of OneOf.
- AllOf(b_1, ..., b_n): t_0 -> Tuple(t_1, ..., t_n). Passes its input of type t_0 to each of the blocks b_1, ..., b_n, returning a tuple of results.

Figure 2: Block architectures for a pipeline (Section 3.3), feed-forward attention (Section 3.4), binary Tree-LSTMs (Section 3.5), and the weave module for molecule graphs (Section 3.6).

Assume we have a set of (text, label) pairs as input and wish to predict the label from the text. The text consists of words, and we want to use an array of pretrained word embeddings (word_matrix) and a corresponding dictionary mapping words to indices (word_idx). We call word_idx.get(word) to obtain the index of word in word_matrix, or None if word is unknown.

We start by creating a block which embeds each word into a continuous space:

word2vec = (InputTransform(word_idx.get) >>
            Optional(Scalar('int32')) >>
            Function(Embedding(initializer=word_matrix)))

With word2vec in hand, we can define text2vec, which embeds sentences:

split = InputTransform(str.split)
rnn_cell = Concat() >> Function(FC(d, activation=tf.nn.relu))
text2vec = split >> Map(word2vec) >> Fold(rnn_cell, Zeros(d))

We use an InputTransform to split the string into words. Then we map the words to vectors with word2vec, and combine the word vectors with a simple RNN, which uses a single fully connected layer FC with d hidden units. The Zeros block defines the initial state for the RNN.

Assume there are n labels; we use a linear layer with n outputs to get unscaled logits:

text2logits = text2vec >> Function(FC(n))

For training, we create a Record block to convert the label to a tensor as well, and calculate loss:

record = Record([('text', text2logits), ('label', Scalar('int32'))])
loss = record >> Function(tf.nn.sparse_softmax_cross_entropy)

Finally, we create a Compiler, which validates a block, performs type-checking, and sets up dynamic batching in TensorFlow. Outputs of a compiled block are available as TensorFlow tensors, so training now proceeds as it would for any other TensorFlow model:

compiler = Compiler.create(loss)
cross_entropy = compiler.output_tensors[0]
train_op = tf.train.AdamOptimizer().minimize(cross_entropy)
This block uses an InputTransform to get the index of a word, which is passed to an Optional block that converts the scalar index to a tensor (or 0 if None). This in turn gets passed to an Embedding operation, which performs a lookup into an embedding table.

Recently, Raffel & Ellis (2016) have introduced an attention model for feed-forward neural networks. The model generalizes average-pooling and is defined as:

e_t = a(h_t),    α_t = exp(e_t) / Σ_{t'=1}^{T} exp(e_{t'}),    c = Σ_{t=1}^{T} α_t h_t

where a is a learnable function. In this model, the block architecture is not a simple pipeline (i.e. a composition using >>) but instead forms a directed acyclic graph, as illustrated in Figure 2. A Composition block allows blocks to be composed into DAGs. The model code and details may be found in Appendix A."}, {"section_index": "5", "section_name": "3.5 RECURSIVE DEFINITIONS", "section_text": "N-ary Tree-LSTMs (Tai et al., 2015, sec. 3.2) generalize LSTMs from 1 to N previous states. In Tai et al. (2015, sec. 5.1) they are applied to classify sentences from the Stanford Sentiment Treebank. This corpus consists of binarized constituency parse trees of one-sentence movie reviews, where every node has a sentiment label. At the leaves of the tree, words are mapped to word-embedding vectors which serve as the input to a binary tree-LSTM with 0 for the previous states. At the internal nodes, the LSTM takes 0 as input, and previous states from its two children.
More formally,

h_word = TreeLSTM(Embedding(word), 0, 0)
h_{left,right} = TreeLSTM(0, h_left, h_right)

where TreeLSTM(x, h_left, h_right) is a learnable function corresponding to Tai et al. (2015), eqs. 9-14, with N = 2. Since a tree is a recursive data type, a model that processes trees must be recursively defined, as illustrated by the cycle in Figure 2. A ForwardDeclaration allows the creation of recursive models:

expr = ForwardDeclaration()
word = AllOf(Record([('word', word2vec)]),
             Zeros((state_size, state_size)))
pair = AllOf(Zeros(embedding_size),
             Record([('left', expr()), ('right', expr())]))
expr_def = (OneOf(key_fn=len, case_blocks=[(1, word), (2, pair)])
            >> TreeLSTM(state_size))
expr.resolve_to(expr_def)

A forward declaration like expr is not itself a block, but may be called (using the expr() syntax) to create references, i.e. blocks which refer to the declaration. The subsequent call to resolve_to then updates all the references to refer to expr_def. The word2vec block is as defined in Section 3.3.

Here we briefly report on some experiments with our implementation of N-ary Tree-LSTMs for sentiment analysis. While we set a new state-of-the-art, that is not really the point here. Our models are not particularly original, and could certainly be implemented without using TensorFlow Fold. What Fold does is to enable simpler and more concise definitions (see Table 3), along with faster execution, thus making it easier to rapidly explore novel model variants.

We used constituency Tree-LSTMs with tuned Glove vectors for word embedding, which achieved the best results of all sentiment models presented in Tai et al. (2015). In addition to this specific model, we have explored several novel variants. (Unsuccessful variants included standard LSTMs, i.e. having only a single forget gate, accepting pooled histories from their children, and models based on character rather than word-level embeddings.) In particular, Tai et al. (2015) employed non-recurrent dropout and L2 weight regularization. We eliminated weight regularization in favor of the recurrent dropout scheme introduced by Semeniuta et al. (2016) and increased the LSTM state size from 150 to 300, leaving all other hyperparameters unchanged.

Results are shown in Table 2, including the best previously reported results. Fine-grained accuracy is measured for all trees and calculated based on the five possible labels. Binary accuracy is measured only for trees with non-neutral sentiment, and is based on negative vs. positive classification. The numbers in parentheses are standard deviations. Tai et al. (2015) report five independent runs; our results are based on thirty independent runs (Munkhdalai & Yu (2016a;b) do not report standard deviations or number of runs). Noting the small size of this dataset (8544/1101/2210 trees for train/dev/test), we further evaluated an ensemble consisting of these thirty independently trained models; this variant sets a new state-of-the-art on both subtasks.

Table 2: Test set accuracies on the Stanford Sentiment Treebank.

model                      fine-grained    binary
Tai et al. (2015)          51.0 (0.5)      88.0 (0.3)
Munkhdalai & Yu (2016a)    52.8            89.7
Munkhdalai & Yu (2016b)    53.1            89.3
Ours (Single Model)        52.3 (0.7)      89.4 (0.4)
Ours (Ensemble)            53.6            90.2

Table 3: Lines of code comparison.

model                     ours    original    ratio
Feed-Forward Attention      26          71     0.37
Tree-LSTM                  119         219     0.54
Graph Convolutions          32          44     0.73

As a final example, we have used the Fold library to implement the graph convolution model introduced by Kearnes et al. (2016) for molecules, which are represented as undirected graphs of atoms. The code is more complex than our previous examples because it involves nested Composition blocks, and is given in Appendix B."}, {"section_index": "6", "section_name": "4 DISCUSSION", "section_text": "The experimental results presented in Section 2.1 quantify the impact of dynamic batching. The impact of the combinator library is harder to demonstrate quantitatively. One way to approach this (with a large grain of salt) is by comparing lines of code, which we do in Table 3, vs. the original authors' sources. See Appendix C for details on the comparison protocol. Of course, a very short implementation is suboptimal if it comes at the cost of flexibility. The results in Section 3.5.1 show that models from the literature can be reimplemented in Fold, then extended to achieve superior performance. We suspect that other models with DCGs will have quite a bit of "head room" as well, due to simply having less work done tuning them compared with more mainstream architectures.

Neural architectures with dynamic computation graphs suffer from inefficient batching and poor tooling. Dynamic batching solves the former problem in full generality, we believe for the first time. The SPINN architecture (Bowman et al., 2016) is an alternative stack-based approach that also enables efficient batching with DCGs, but it is limited to binary trees and requires padding/truncation to handle trees of different sizes. The Fold library addresses the tooling problem by providing a high-level combinator library which is intended to make it easy for practitioners to rapidly develop and iterate on architectures with DCGs."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In NAACL, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Anna Maria Bianucci, Alessio Micheli, Alessandro Sperduti, and Antonina Starita. Application of cascade correlation networks for structures to chemistry. Applied Intelligence, 2000.

Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. In NAACL, 2016.

John Hughes. Generalising monads to arrows. Science of Computer Programming, 2000.

Steven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, and Patrick Riley. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.

Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv, 1607.04315, 2016a.

Tsendsuren Munkhdalai and Hong Yu. Neural tree indexers for text understanding. arXiv, 1607.04492, 2016b.

Jordan B. Pollack. Recursive distributed representations. Artificial Intelligence, 1990.

Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv, 1603.05118, 2016.

Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.

Jiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eduard Hovy. When are tree structures necessary for deep learning of representations? arXiv, 1503.00185, 2015.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In NAACL, 2015."}, {"section_index": "8", "section_name": "A FEED-FORWARD ATTENTION", "section_text": "The feed-forward attention model from Section 3.4 may be implemented in Fold as follows:

attention = Composition()
with attention.scope():
    h = attention.input
    exp_e = Map(a >> Function(tf.exp)).reads(h)
    z = (Sum() >> Broadcast()).reads(exp_e)
    alpha = ZipWith(Function(tf.div)).reads(exp_e, z)
    c = (ZipWith(Function(tf.mul)) >> Sum()).reads(alpha, h)
    attention.output.reads(c)

Within a composition scope, blocks may be wired together with reads, provided no directed cycles are formed. The input and output properties are used to define the overall inputs and outputs of the composition block. This example introduces several additional block types:

- Sum is a specialization of Reduce that performs elementwise addition.
- ZipWith is a variant of Map that accepts n sequences as input and applies an n-ary function f elementwise (stopping when the end of the shortest input sequence is reached).
- Broadcast creates a Sequence(t) from a single t, repeating the same element endlessly."}, {"section_index": "9", "section_name": "B GRAPH CONVOLUTIONS", "section_text": "This section implements the graph convolution model introduced by Kearnes et al. (2016), for molecules represented as undirected graphs of atoms. There are real-valued feature vectors for each atom and for each distinct pair of atoms. For a molecule having N atoms, we index its atom feature vectors as a_i ∈ R^n for 1 ≤ i ≤ N. We index its pair feature vectors as p_{i,j} ∈ R^m for 1 ≤ i, j ≤ N, where p_{i,j} = p_{j,i} and p_{i,i} = 0.

The core of the graph convolution model is the weave module, which combines atom-level and pair-level features using six learnable functions (typically fully connected ReLU layers). The weave module can be stacked arbitrarily to create deep graph convolution models. Denoting inputs and outputs by x and y superscripts respectively, the weave module is:

a_i^y = f_A( f_{A→A}(a_i^x), Σ_{j=1}^{N} f_{P→A}(p_{i,j}^x) )
p_{i,j}^y = f_P( f_{A→P}(a_i^x, a_j^x) + f_{A→P}(a_j^x, a_i^x), f_{P→P}(p_{i,j}^x) )

where f_A, f_P, f_{A→A}, f_{A→P}, f_{P→A}, and f_{P→P} are the learnable functions.
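For reference, here is a minimal Python sketch of one weave module written directly from the equations above. This is our own illustration; the six f callables are assumed to be given (e.g. fully connected ReLU layers), and vectors are e.g. NumPy arrays:

def weave(a, p, f_a, f_p, f_aa, f_ap, f_pa, f_pp):
    # a: list of N atom vectors a_i^x; p: N x N nested list of pair
    # vectors p_ij^x. Returns (a^y, p^y) per the weave equations.
    N = len(a)
    a_out = [f_a(f_aa(a[i]), sum(f_pa(p[i][j]) for j in range(N)))
             for i in range(N)]
    p_out = [[f_p(f_ap(a[i], a[j]) + f_ap(a[j], a[i]), f_pp(p[i][j]))
              for j in range(N)]
             for i in range(N)]
    return a_out, p_out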
"}, {"section_index": "9", "section_name": "B GRAPH CONVOLUTIONS", "section_text": "This section implements the graph convolution model introduced by Kearnes et al. (2016) for molecules, which are represented as undirected graphs of atoms. There are real-valued feature vectors for each atom and for each distinct pair of atoms. For a molecule having N atoms, we index its atom feature vectors as a_i ∈ R^n for 1 ≤ i ≤ N. We index its pair feature vectors as p_{i,j} ∈ R^m for 1 ≤ i, j ≤ N, where p_{i,j} = p_{j,i} and p_{i,i} = 0.

The core of the graph convolution model is the weave module, which combines atom-level and pair-level features using six learnable functions (typically fully connected ReLU layers). The weave module can be stacked arbitrarily to create deep graph convolution models. Denoting inputs and outputs by x and y superscripts respectively, the weave module is:

a_i^y = f_A(f_{A→A}(a_i^x), Σ_{j=1}^N f_{P→A}(p_{i,j}^x))
p_{i,j}^y = f_P(f_{A→P}(a_i^x, a_j^x) + f_{A→P}(a_j^x, a_i^x), f_{P→P}(p_{i,j}^x))

where f_A, f_{A→A}, f_{A→P}, f_{P→A}, f_{P→P}, and f_P are learnable functions.

It is noteworthy that the a → p calculation involves a nested scan over the atoms; for each a_i^x we must calculate f_{A→P}(a_i^x, a_j^x) + f_{A→P}(a_j^x, a_i^x) for all 1 ≤ j ≤ N:

a_i_to_p = Composition()
with a_i_to_p.scope():
  a_x_i = Broadcast().reads(a_i_to_p.input[0])
  a_x = a_i_to_p.input[1]
  f_i_j = ZipWith(Concat() >> f_a_p).reads(a_x_i, a_x)
  f_j_i = ZipWith(Concat() >> f_a_p).reads(a_x, a_x_i)
  p = ZipWith(Sum()).reads(f_i_j, f_j_i)
  a_i_to_p.output.reads(p)

The input to a_i_to_p is a tuple (a_i^x, a^x) of type Tuple(Tensor(float32, [n]), Sequence(Tensor(float32, [n]))). We broadcast a_i^x over a^x twice in succession to compute f_{A→P}(a_i^x, a_j^x) and f_{A→P}(a_j^x, a_i^x) for all 1 ≤ j ≤ N, yielding f_i_j and f_j_i, which are length-N sequences of vectors. We join and sum each of these vectors elementwise to obtain the ultimate output of the block, which is also a length-N sequence of vectors. The overall weave module may now be implemented as follows:

weave = Composition()
with weave.scope():
  a_x = weave.input[0]
  p_x = weave.input[1]
  a_to_a = Map(f_a_a).reads(a_x)
  p_to_a = Map(Map(f_p_a) >> Sum()).reads(p_x)
  a_y = ZipWith(Concat() >> f_a).reads(a_to_a, p_to_a)
  a_to_p = ZipWith(a_i_to_p).reads(a_x, Broadcast().reads(a_x))
  p_to_p = Map(Map(f_p_p)).reads(p_x)
  p_y = ZipWith(ZipWith(Concat() >> f_p)).reads(a_to_p, p_to_p)
  weave.output.reads(a_y, p_y)

- a_to_a maps over a^x with f_{A→A}, going from Sequence(Tensor) to Sequence(Tensor).
- p_to_a maps over p^x with f_{P→A} and sums along the inner dimension, reducing from Sequence(Sequence(Tensor)) to Sequence(Tensor).
- a_y zips a_to_a and p_to_a with f_A, going from Tuple(Sequence(Tensor), Sequence(Tensor)) to Sequence(Tensor).
- a_to_p broadcasts a^x over itself with a_i_to_p, expanding from Sequence(Tensor) to Sequence(Sequence(Tensor)).
- p_to_p maps over p^x with f_{P→P}, going from Sequence(Sequence(Tensor)) to Sequence(Sequence(Tensor)).
- p_y zips a_to_p and p_to_p with f_P, going from Tuple(Sequence(Sequence(Tensor)), Sequence(Sequence(Tensor))) to Sequence(Sequence(Tensor)).
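As a cross-check on the wiring above, here is a minimal NumPy sketch of the two weave equations for a single module. The six learnable functions are passed in as plain callables, which are hypothetical stand-ins for the fully connected ReLU layers; this is an illustration, not the Fold implementation.

import numpy as np

def weave_step(a, p, f_a_a, f_p_a, f_a, f_a_p, f_p_p, f_p):
    """a: (N, n) atom features; p: (N, N, m) pair features with p[i, j] == p[j, i]."""
    N = a.shape[0]
    # Atom update: a_i^y = f_A(f_{A->A}(a_i), sum_j f_{P->A}(p_ij))
    a_y = np.stack([
        f_a(f_a_a(a[i]), sum(f_p_a(p[i, j]) for j in range(N)))
        for i in range(N)])
    # Pair update: p_ij^y = f_P(f_{A->P}(a_i, a_j) + f_{A->P}(a_j, a_i), f_{P->P}(p_ij))
    p_y = np.stack([
        np.stack([
            f_p(f_a_p(a[i], a[j]) + f_a_p(a[j], a[i]), f_p_p(p[i, j]))
            for j in range(N)])
        for i in range(N)])
    return a_y, p_y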
Lines of code in Table 3 are counted according to the following protocol:

- Define the functional unit of comparison as an input-output mapping.
- Prepare a single file that implements this functionality and nothing else.
- Remove import statements, abstract base classes, logging, file I/O, and validation logic.
- Count lines of code, ignoring blank lines and comments.

6 All of the implementations we examine are formatted with 80-column lines, excepting the Tree-LSTM implementation, which has a few lines that are slightly longer; we still count these as single lines."}, {"section_index": "10", "section_name": "FEED-FORWARD ATTENTION", "section_text": "The functional unit of comparison is creating the model for the variable-length experiment described in Raffel & Ellis (2016, sec. 2.3). This includes the loss and accuracy calculations, but does not include the training loop or the creation of training data. The original implementation is in Python and uses Theano and Lasagne. The TensorFlow Fold implementation is more concise, partly due to differences between TensorFlow and Lasagne. Fold itself reduces implementation complexity by eliminating the need for manual batching, e.g. x.sum(axis=1) where batching is explicit over axis 0, vs. x >> Sum(), which is implicitly batched."}, {"section_index": "11", "section_name": "TREE-LSTM", "section_text": "The functional unit of comparison is creating a (binary) constituency Tree-LSTM and running an epoch of training for the fine-grained sentiment classification task as described in Tai et al. (2015, sec. 5.1). This does not include loading the word embeddings or dataset, which are provided as inputs. The original implementation is in Lua and uses Torch. Lua terminates blocks with the end keyword; we do not count these lines. Here, the use of Python and TensorFlow leads to substantially more concise code than with Lua and Torch. Unlike the previous example, manual batching plays no role here, because the original implementation computes gradients and losses one tree at a time. Fold reduces complexity here by using a OneOf block to distinguish between leaves and internal nodes, rather than a recursive function that explicitly traverses the tree."}, {"section_index": "12", "section_name": "GRAPH CONVOLUTION", "section_text": "The functional unit of comparison is creating a single weave module as described in Kearnes et al. (2016, sec. 3.3). The original implementation is in Python and uses TensorFlow. Here, both implementations use the same language and deep learning library. Fold helps by eliminating the need for manual batching, as in the first example. This is particularly apparent in the atoms-to-pairs calculation, which requires making n "copies" of an n × d matrix x to get an n × n × d tensor. In native TensorFlow the first dimension is batch, and the copying is explicit, as reshape(tile(x, [1, n, 1]), [batch_size, n, n, d]). In Fold, x >> Broadcast() suffices, because the number of copies needed is determined lazily by subsequent computations."}]
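To make the manual-batching contrast concrete, here is a small NumPy sketch (with hypothetical shapes) of the explicit copy that the native TensorFlow code performs; the Fold version never materializes this expansion by hand.

import numpy as np

batch_size, n, d = 2, 4, 3
x = np.random.randn(batch_size, n, d)

# Explicit copying, mirroring reshape(tile(x, [1, n, 1]), [batch_size, n, n, d]):
tiled = np.tile(x, (1, n, 1)).reshape(batch_size, n, n, d)

# Each row i now holds a full copy of the n x d matrix, ready for pairwise use.
assert tiled.shape == (batch_size, n, n, d)
assert np.allclose(tiled[0, 0], x[0]) and np.allclose(tiled[0, n - 1], x[0])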
SkwSJ99ex
[{"section_index": "0", "section_name": "DEEPREBIRTH: A GENERAL APPROACH FOR ACCEL- ERATING DEEP NEURAL NETWORK EXECUTION ON MOBILE DEVICES", "section_text": "Computer Science and Engineering Department Lehigh University 11 1801511S A"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recent years have witnessed the breakthrough of deep learning techniques for image classification and object recognition. Mobile device becomes more and more popular due to its convenient mo- biles services provided for end users. More and more mobile applications require deep learning techniques to provide accurate, intelligent and effective services. However, the execution speed of the deep learning model on mobile devices becomes a bottleneck for many applications due to the large model size, deep network structure and complicated model parameters, which hinders the real-time deployment. However, if deep learning service is only provided at cloud side, transmis-\nXiaolong Wang\nSamsung Research America (SRA"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deploying deep neural networks on mobile devices is a challenging task due to computation complexity and memory intensity. Existing works solve this prob- lem by reducing model size using weight compression methods based on dimen- sion reduction (i.e., SVD, Tucker decomposition and Quantization). However, the execution speed of these compressed models are still far below the real-time processing requirement of mobile services. To address this limitation, we pro- pose a novel acceleration framework: DeepRebirth by exploring the deep learning model parameter sparsity through merging the parameter-free layers with their neighbor convolution layers to a single dense layer. The design of DeepRebirth is motivated by the key observation: some layers (i.e., normalization and pool- ing) in deep learning models actually consume a large portion of computational time even few learned parameters are involved, and acceleration of these layers has the potential to improve the processing speed significantly. Essentially, the functionality of several merged layers is replaced by the new dense layer - re- birth layer in DeepRebirth. In order to preserve the same functionality, the rebirth layer model parameters are re-trained to be functionality equivalent to the orig inal several merged layers. The extensive experiments performed on ImageNet using several popular mobile devices demonstrate that DeepRebirth is not only providing huge speed-up in model deployment and significant memory saving but also maintaining the model accuracy, i.e., 3x-5x speed-up and energy saving on GoogLeNet with only 0.4% accuracy drop on top-5 categorization in ImageNet Further, by combining with other model compression techniques, DeepRebirth of- fers an average of 65ms model forwarding time on single image using Samsung Galaxy S6 with only 2.4% accuracy drop. In addition, 2.5x run-time memory saving is achieved with rebirth layers\nRunning deep learning models efficiently on mobile CPUs is a highly intriguing feature due to. many reasons: (1) CPU is available for all mobile devices, even phones released many years ago;. (2) powerful CUDA-enabled GPUs are generally not available on (compact) mobile devices; (3) though a large majority of mobile devices are equipped with mobile GPUs, the speed-up achieved. on the mobile GPUs is quite limited when compared to CPU sh1r0 et al. 
(2015), not to mention the complexity caused by different mobile GPU architectures; (4) major deep learning frameworks such as Caffe Jia et al.(2014) and Tensorflow [Abadi et al.(2015) only support CPU implementation on mobile devices currently, and therefore an efficient CPU-friendly model is highly desirable..\nHowever, most of current mobile CPUs cannot meet the needs of deep learning model deploymen because it takes much longer time and higher energy cost to process an image using pre-trained deep learning models. For example, it takes more than 651ms to recognize an image using GoogleNet or Samsung S5 (Table 4) with 984mJ energy costs (Table 5). Therefore a question that naturally follows is: can we develop an efficient deep learning acceleration framework to facilitate deployment of deep learning service on mobile device?\nThis problem is challenging due to the fact that the practical solution is highly desirable to suppor different practical scenarios by addressing the following challenges (C1-C3)..\nC2: Leveraging existing trained deep framework. In order to provide the best deep learning service, the mechanism is designed to taking advantage of existing state-of-the-art deep learning architectures (e.g., GoogLeNet and ResNet) instead of training from scratch.\nC3: Supporting different deep learning architecture components. The proposed techniqu should provide generic framework that can be applied to these popular deep learning models tha may consist of different types of layers. In general, all neural network layers can be grouped into tw categories: tensor layer and non-tensor layer based on whether the layer contains tensor-type param. eters. For example, fully connected layer and the convolution layer are both tensor-layers since the contain 2-d and 4-d tensor-type weight parameters, respectively. Pooling layer and LRN layer ar. both non-tensor layers because they do not contain any high-order tensor-type weight parameters. Therefore, the framework is expected to support both tensor and non-tensor layers optimization..\nHowever, the current solutions for deep learning model acceleration are still quite limited in address. ing these challenges. The main goal of works (Han et al.(2016b),Li(2013); Kim et al.(2015) Jiaxiang Wu & Cheng(2016)) is to reduce the model size by approximating the tensor-type layer. using low rank approximation and vector quantization techniques. While they can provide som. acceleration for only fully-connected layers (used in AlexNet, VGGNet), the application scenario. of these methods are very limited and ineffective because modern deep learning architectures (e.g. Inception and ResNet) have removed large fully-connected layers. Moreover, for non-tensor layer. (e.g., normalization and pooling layers) that are generally used for speeding up the network training. and obtaining better generalization performance, none works, to the best of our knowledge, hav discussed how to accelerate the execution process..\nTo bridge these gaps, this paper proposes DeepRebirth, a new deep learning model acceleratior. framework by exploring the sparsity of deep neural network layers to accelerate both non-tensor lay ers and tensor layers from two types of rebirth: streaming merging and branch merging. In stream ing merging, the new tensor layers are generated by merging non-tensor layers with its neighboring. sparse tensor layers in the feed-forward structure as illustrated in Figure 2l while in branch merg. 
ing, the new tensor layers are created by fusing non-tensor banches with the sparse tensor branche. (at the same level) as shown in Figure [3] i.e., the inception module in GoogLeNet (Szegedy et al. (2014)). The design of DeepRebirth is guided by the key observation:.\nNon-tensor layers are the major obstacles for real-time mobile CPU execution (Section 2)\nThen reducing the execution time on non-tensor layers can greatly reduce the overall model for warding time. In order to reduce the execution time, both streaming merging and branch merging\nlOther examples of non-tensor layers include dropout layer, normalization layer, softmax layer, etc\nTable 1: Percentage of Forwarding Time on Non-tensor Layers\nNetwork Intel x86 Arm Titan X AlexNet 32.08% 25.08% 22.37% GoogLeNet 62.03% 37.81% 26.14% ResNet-50 55.66% 36.61% 47.87% ResNet-152 49.77% N/A 44.49% Average 49.89% 33.17% 35.22% 60% 70% 70% Arm Arm Arm 50% Intel_x86 60% Intel_x86 60% Intel_x86 50% 50% 40% hreetoon uoI 40% hrect 40% 30% 38313333 30% Tmne 30% **** 20% 20% 20% 10% 10% 10% 7 0% 0% Convolutionopout ReLU LRN PoolingSoftmax ConvolutjonpoutReLU LRN Scale InnerProduct Convolutiq/LU Pooling Softma InnerProdutwise BatchNorm (a) AlexNet (b) GoogLeNet (c) ResNet-50\nNetwork Intel x86 Arm Titan X AlexNet 32.08% 25.08% 22.37% GoogLeNet 62.03% 37.81% 26.14% ResNet-50 55.66% 36.61% 47.87% ResNet-152 49.77% N/A 44.49% Average 49.89% 33.17% 35.22%\n60% 70% 70% Arm Arm Arm 50% **** Intel_x86 60% **** Intel_x86 60% *Intel_x86 50% 50% 40% freton fn 133 40% 388888 40% 30% 188888888883 38888888881 30% 30% 20% 20% 888888 20% 388881 10% 10% 10% 0% 0% 0% LRN Pooling Softmax ConvolutjoopoutReLU LRN Poolingoftmax tInnerProdushcat Scale InnerProduct ConvolutiqnLU Pooling Softmax InnerProdycwise BatchNorm\nFigure 1: Time Decomposition for each layer. Non-tensor layers (e.g., dropout, ReLU, LRN, soft max, pooling, etc) shown in red color while tensor layers (e.g., convolution, inner-product) shown. in black color.\nare applied to merge non-tensor layers into tensor layers. Overall, reducing the execution time or non-tensor layers can greatly reduce the model forwarding time given the fact that tensor-layer has been optimized to the minimum as suggested by (Han et al.(2016b),Kim et al.(2015)). Ideally we can combine both non-tensor and tensor layer optimization together and further reduce latency as well as the model size. To summarize, this paper makes the following contributions.\n. Our approach is the first work that optimizes non-tensor layers and significantly accelerates a deep. learning model on CPUs while reducing the required runtime-memory since there are less layers ir the reconstructed deep learning mode|2\n. To address the challenges of (C1-C3), we perform both streaming merging and branch merging based on the original structure of old layers while the new tensor layers are generated by merging non-tensor layers with its neighboring sparse tensor layers vertically and horizontally..\n. As demonstrated in the experiment, our approach has obtained the state-of-the-art speeding up. on popular deep learning models with negligible accuracy loss. Our proposed method enables GoogLeNet to achieve 3x-5x speed-up for processing a single image with only 0.4% drop on Top-5 accuracy on ImageNet without any weights compression method. By further applying model com. pression techniques, we achieve around 65 ms for processing a single image with Top-5 accuracy of 86.5%. Furthermore, we show that our methods work for state-of-the-art non-tensor layers, e.g.,. 
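The per-layer percentages above come from framework benchmark tools. As a rough illustration only (this is not the authors' harness, and the torchvision GoogLeNet and input shape are assumptions), a minimal PyTorch sketch of per-layer CPU timing might look like this:

import time
import torch
import torchvision

model = torchvision.models.googlenet(weights=None).eval()  # assumed stand-in model
x = torch.randn(1, 3, 224, 224)

timings = {}
def make_hooks(name):
    def pre(module, inp):
        timings[name] = time.perf_counter()          # record start time
    def post(module, inp, out):
        timings[name] = time.perf_counter() - timings[name]  # replace with elapsed
    return pre, post

for name, m in model.named_modules():
    if len(list(m.children())) == 0:                 # instrument leaf layers only
        pre, post = make_hooks(name)
        m.register_forward_pre_hook(pre)
        m.register_forward_hook(post)

with torch.no_grad():
    model(x)

total = sum(timings.values())
for name, t in sorted(timings.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{name:40s} {100 * t / total:5.1f}%")     # time percentage per layer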
Observations and Insights: As demonstrated in the results, for classical deep models (e.g., AlexNet), among the non-tensor layers, "LRN" and "Pooling" layers are the major obstacles that slow down model execution. ResNet-50 has abandoned the "LRN" layers by introducing the batch normalization layer, but the findings remain valid, as it takes up more than 25% of the time on the ARM CPU and more than 40% on the Intel x86 CPU (in Caffe (Jia et al. (2014)), it is decomposed into a "BatchNorm" layer followed by a "Scale" layer, as shown in Figure 1c). The time fraction spent on such layers ranges from 22.37% to 62.03%. Among the different types of processors, non-tensor layers have the largest impact on Intel x86 CPUs, and more specifically 62.03% of the computing time. On the other hand, though non-tensor layers affect mainstream ARM CPUs less, on average they still cost about 1/3 of the computing time. All these numbers confirm our intuition: there is great potential to accelerate the model by optimizing those non-tensor layers."}, {"section_index": "4", "section_name": "3 DEEPREBIRTH", "section_text": "This section covers the design of DeepRebirth in three aspects: streamline merging, branch merging and adapting DeepRebirth to the whole model.

In general deep learning models, the probability distribution of the dataset can be represented by a large, very sparse deep neural network that is constructed layer after layer. By analyzing the correlations of the current layer and preceding layers (or parallel layers), we can merge highly correlated layers and substitute them with a new "rebirth" layer. This process is similar to viewing the Inception model as a logical culmination, as suggested by Arora et al. (2013)."}, {"section_index": "5", "section_name": "3.1 STREAMLINE MERGING", "section_text": "For deep network architectures with streamline layer connections, in order to accelerate execution we first identify the layers that have large latency and also have the potential to be merged. The merging design is motivated by the following two key observations.

- Non-tensor layers usually follow a tensor layer such as a convolution layer, as shown in Figure 2.
- Several consecutive layers can be viewed as a black box performing a non-linear transformation, and can therefore be replaced by a new tensor layer whose parameters are learned to approximate the functionality of the original layers. An example is shown in Figure 2.

Method: Streamline merging regenerates a new tensor layer (i.e., rebirth layer) by merging non-tensor layers with their bottom tensor units in the feed-forward structure. After layer-wise regeneration, we retrain the deep neural network model by fine-tuning the parameters of the newly generated layers. There are two streamline merging operations in the proposed scheme; the choice of operation depends on the type of non-tensor layer (see the sketch after this list).

- Merging Pooling Layer: The pooling layer down-samples feature maps learned from previous layers. Therefore, to merge a pooling layer into a convolution layer, we remove the pooling layer and set the stride of the "merged" convolution layer to the product of the stride values of the original pooling layer and the convolution layer. With a larger stride for the new "merged" convolution layer, this further reduces the computation required for executing the new model.
- Merging Non-Pooling Layer: For non-pooling layers such as LRN and batch normalization, we directly prune those layers from the original deep neural network.
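A minimal sketch of the streamline rule (not the authors' code; PyTorch is used purely for illustration): the rebirth convolution keeps the original kernel shape and absorbs the pooling stride, while the LRN layer is simply dropped before fine-tuning.

import torch.nn as nn

def rebirth_conv(conv: nn.Conv2d, pool: nn.MaxPool2d) -> nn.Conv2d:
    """Build the merged layer for a conv -> (LRN) -> pool streamline block.

    The LRN layer is pruned outright; the pooling layer is folded in by
    multiplying the strides. Weights are freshly initialized and must be
    fine-tuned to mimic the original block's input-output mapping.
    """
    ps = pool.stride if isinstance(pool.stride, int) else pool.stride[0]
    return nn.Conv2d(conv.in_channels, conv.out_channels,
                     kernel_size=conv.kernel_size,
                     stride=(conv.stride[0] * ps, conv.stride[1] * ps),
                     padding=conv.padding)

# e.g. GoogLeNet's conv2/3x3 (stride 1) followed by pool2/3x3_s2 (stride 2)
# becomes a single 3x3 convolution with stride 2.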
Example: Figure 2 illustrates how the optimization works using streamline merging. This is one representative part of GoogLeNet, where the convolution layer conv2/3x3 is followed by an LRN layer conv2/norm2 and a pooling layer pool2/3x3_s2 (the ReLU layer, which has negligible latency, is retained to keep accuracy). Before merging, the two non-tensor layers, without a single learned parameter, take even more time than running the convolution layer. After merging, we process them to generate a new rebirth convolution layer conv2/3x3_merge; the time spent on the rebirth layer is greatly reduced compared to the original layers.

Figure 2: Streamline Merging: the GoogLeNet example; running time is measured using the bvlc_googlenet model in Caffe on a Samsung Galaxy S5. Left panel: convolution (in green), LRN (in red), pooling (in red). Right panel: a single convolution layer. The three layers in the left panel are merged and regenerated as a convolution layer (i.e., rebirth layer) in the right panel."}, {"section_index": "6", "section_name": "3.2 BRANCH MERGING", "section_text": "Example: One representative unit is the inception module in GoogLeNet. For example, as illustrated in Figure 3, layer "inception_3a" of GoogLeNet has 4 branches: 3 convolution branches take feature maps from the bottom layer at various scales (1x1, 3x3 and 5x5) and 1 additional 3x3 pooling branch (Szegedy et al. (2014)). The output feature maps of each branch are concatenated as input for the following top layer.

Method: For deep network architectures with parallel branches, the output of each branch constitutes part of the feature maps used as input for the next layer. We identify non-tensor branches that have large latency (e.g., the pooling branch in Figure 3). Similar to streamline merging, if we can use a faster tensor branch to simulate the function of the non-tensor branch by relearning its parameters, we can achieve a clear speed-up.

The design of branch merging is motivated by the following key observation: given the fact that non-tensor layers require more computation time, if we can learn new tensor layers by fusing non-tensor layers with the tensor units at the same layer level, then the execution time will be decreased.

To merge a non-tensor branch into a tensor branch, we re-create a new tensor layer (i.e., rebirth layer) by fusing the non-tensor branch and a tensor unit with relatively small latency to output the feature maps that were originally generated by the non-tensor branch. If the non-tensor branch has a kernel size larger than 1x1 (e.g., the 3x3 pooling branch in Figure 3), the picked tensor branch's kernel size should be at least the size of the non-tensor branch. As shown in this figure, we re-learn a new tensor layer "inception_3a" by merging the 3x3 pooling branch with the 5x5 convolution branch at the same level, and the number of feature maps obtained by the 5x5 convolution is increased from 32 to 64.

- Reducing: Current deep neural networks usually include convolution branches with 1x1 convolution layers (e.g., inception_3a/3x3_reduce in Figure 3) aiming to reduce feature map channels. This unit is processed by a following convolution layer with a larger kernel size. For greater speed-up, we further reduce the number of feature maps generated by the 1x1 "reducer". For layer inception_3a/3x3_reduce, we reduce the number of output feature maps from 96 to 48.
- Merging: A convolution branch with a smaller kernel size can be merged into a convolution branch with a larger kernel size. The method is similar to the merging of non-tensor layers. To keep the structures of other layers in the network unchanged, we remove the small-kernel convolution branch and increase the number of feature maps generated by the large-kernel convolution layers. For example, for layer inception_3a, we remove the 1x1 convolution branch and increase the number of feature maps generated by the 3x3 convolution from 128 to 196.

Figure 3: Branch Merging: the GoogLeNet example; running time is measured using the bvlc_googlenet model in Caffe on a Samsung Galaxy S5. Left panel: four branches in parallel (convolution; convolution + convolution; convolution + convolution; convolution + pooling). Right panel: two branches in parallel (convolution + convolution; convolution + convolution). The four branches are merged into two branches."}, {"section_index": "7", "section_name": "3.3 ADAPTING DEEPREBIRTH TO OVERALL MODEL", "section_text": "The newly generated layer (i.e., rebirth layer) is required to learn new parameters using fine-tuning, as discussed in Yosinski et al. (2014); Razavian et al. (2014). We use standard initialization methods (e.g., Xavier initialization (Glorot & Bengio (2010))) to initialize the parameters in the new layer while keeping the weights of other layers unchanged. In our optimization procedure, we set the learning rate of the new layers to 10 times that of the other layers. The proposed optimization scheme is applied from the bottom layer to the top layer. It is also possible to learn multiple rebirth layers at the same time (we merge and fine-tune 3 sequential inception layers 4b-4d together for GoogLeNet) or to merge layers in orders other than bottom-to-top.
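The 10x learning-rate rule of Section 3.3 maps naturally onto per-parameter-group optimizer settings. A hypothetical PyTorch sketch follows (the paper uses Caffe, where this would correspond to lr_mult on the rebirth layers):

import torch

def make_finetune_optimizer(model, rebirth_params, base_lr=0.001):
    """Give freshly initialized rebirth layers a 10x learning rate,
    while the untouched layers are fine-tuned gently."""
    rebirth_ids = {id(p) for p in rebirth_params}
    others = [p for p in model.parameters() if id(p) not in rebirth_ids]
    return torch.optim.SGD(
        [{"params": rebirth_params, "lr": 10 * base_lr},
         {"params": others, "lr": base_lr}],
        momentum=0.9)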
"}, {"section_index": "8", "section_name": "4.1 GOOGLENET", "section_text": "To evaluate the performance of DeepRebirth, we performed a comprehensive evaluation using different optimization approaches on top of GoogLeNet. We use Caffe's GoogLeNet implementation (i.e., bvlc_googlenet) with its pre-trained model weights. We then apply the proposed DeepRebirth optimization scheme to accelerate the running speed of GoogLeNet, denoted as "GoogLeNet-Merge" (see structure in the appendix). After non-tensor layer optimization (streamline and branch merging), we further apply the Tucker decomposition approach (Kim et al. (2015)) to reduce the model size (i.e., the number of learned weights) by 50%, represented as "GoogLeNet-Merge-Tucker". In addition, we directly employ the Tucker decomposition method to compress the original GoogLeNet, indicated as "GoogLeNet-Tucker". Thus, we have 4 models to compare, namely GoogLeNet, GoogLeNet-Merge, GoogLeNet-Tucker and GoogLeNet-Merge-Tucker.

Table 2: GoogLeNet accuracy at each layer after merging

Step   Merged Layer(s)         Top-5 Accuracy
0      N/A                     88.89%
1      conv1                   88.73%
2      conv2                   88.82%
3      inception_3a            88.50%
4      inception_3b            88.27%
5      inception_4a            88.60%
6      inception_4b-4d         88.61%
7      inception_4e            88.43%
8      inception_5a            88.41%
9      inception_5b            88.43%
Tucker Decomposition   N/A     86.54%"}, {"section_index": "9", "section_name": "4.1.1 ACCURACY", "section_text": "Since one of our major goals is to propose a new acceleration approach which can speed up model running time with satisfactory accuracy (in contrast to the original model), we list the accuracy changes along the optimization steps conducted on the ImageNet ILSVRC-2012 validation dataset, as indicated in Table 2. During the whole optimization procedure of model training, we set the base learning rate for the re-generated layer to 0.01 (the remaining layers use 0.001). We apply the stochastic gradient descent training method (Bottou (2012)) to learn the parameters with a batch size of 32. During our training phase, we set 40,000 as the step size, together with 0.1 for the gamma value and 0.9 for the momentum parameter. At each step, the model generally converges at around 90,000 iterations (2 epochs).

The result indicates that the proposed method has almost negligible impact on model accuracy, and the accuracy even increases at certain steps (e.g., step 5). This indicates that the "newborn" layers perfectly simulate the functionalities of the previous non-tensor layers before optimization. By applying the Tucker decomposition method on the merged model to reduce the weights by half (GoogLeNet-Merge-Tucker), we observe a larger drop in accuracy (around 2%). However, directly applying the Tucker decomposition method (GoogLeNet-Tucker) to reduce the GoogLeNet weights by half drops the top-5 accuracy to 85.7%. These results imply that our method performs reasonably well even after streamline and branch layer mergings."}, {"section_index": "10", "section_name": "4.1.2 SPEED-UP", "section_text": "To evaluate and compare the latency of different optimization approaches, we evaluate the layer-wise running speed on a Samsung Galaxy S5 smartphone, which has an ARMv7 quad-core CPU @ 2.5 GHz and 2 GB RAM. We use Caffe's integrated benchmark module to test the model forwarding time. Each test run includes 50 subtests with a random input. We try 10 test runs on each compared model and report the best test run in terms of forwarding time. During the whole experiment, we turn the phone to airplane mode and close all other apps.

As demonstrated in Table 3, we observe that in the best-case scenario GoogLeNet-Merge is 3x faster than GoogLeNet, and in the worst-case scenario GoogLeNet takes around 950 ms for a single forward pass while GoogLeNet-Merge takes only around 250 ms, which is almost a 4x speed-up. This is because the original GoogLeNet model has too many small layers, which results in performance fluctuation. The same finding is also sharply observed in Kim et al. (2015). The Tucker decomposition method further reduces the computation by around 50% at the cost of around 2% accuracy loss. On the other hand, directly applying Tucker decomposition on tensor layers doesn't show any significant acceleration.

Table 3: Breakdown of GoogLeNet forwarding time cost using different methods at each layer

Layer          GoogLeNet   GoogLeNet-Tucker   GoogLeNet-Merge    GoogLeNet-Merge-Tucker
conv1          94.92 ms    87.85 ms           8.424 ms           6.038 ms
conv2          153.8 ms    179.4 ms           16.62 ms           9.259 ms
inception_3a   55.23 ms    85.62 ms           21.17 ms           9.459 ms
inception_3b   98.41 ms    66.51 ms           25.94 ms           11.74 ms
inception_4a   30.53 ms    36.91 ms           16.80 ms           8.966 ms
inception_4b   32.60 ms    41.82 ms           20.29 ms           11.65 ms
inception_4c   46.96 ms    30.46 ms           18.71 ms           9.102 ms
inception_4d   36.88 ms    21.05 ms           24.67 ms           10.05 ms
inception_4e   48.24 ms    32.19 ms           28.08 ms           14.08 ms
inception_5a   24.64 ms    14.43 ms           10.69 ms           5.36 ms
inception_5b   24.92 ms    15.87 ms           14.58 ms           6.65 ms
loss3          3.014 ms    2.81 ms            2.97 ms            2.902 ms
Total          651.4 ms    614.9 ms (1.06x)   210.6 ms (3.09x)   106.3 ms (6.13x)

Not limited to the Samsung Galaxy S5, we also apply the speed-up schemes on other popular processors. These devices include (1) Moto E: a low-end mobile ARM CPU, (2) Samsung Galaxy S5: a middle-end mobile ARM CPU, (3) Samsung Galaxy S6: a high-end mobile ARM CPU, (4) Macbook Pro: an Intel x86 CPU, and (5) Titan X: a powerful server GPU. We demonstrate the experimental results in Table 4. The promising results indicate that the proposed method achieves significant speed-up on various types of CPUs. Even on the low-end mobile CPU (i.e., Moto E), around 200 ms model forwarding time is achieved by further applying the tensor weight compression method. Finally, we compare the proposed approach with SqueezeNet (Iandola et al. (2016)), a state-of-the-art compressed CNN model. We are very excited to see that our optimization approach obtains faster speed with higher accuracy compared to SqueezeNet (80% Top-5) on all CPU platforms listed in Table 4.

Table 4: Execution time using different methods (including SqueezeNet) on different devices

Device              GoogLeNet   GoogLeNet-Tucker   GoogLeNet-Merge   GoogLeNet-Merge-Tucker   SqueezeNet
Moto E              1168.8 ms   897.9 ms           406.7 ms          213.3 ms                 291.4 ms
Samsung Galaxy S5   651.4 ms    614.9 ms           210.6 ms          106.3 ms                 136.3 ms
Samsung Galaxy S6   424.7 ms    342.5 ms           107.7 ms          65.34 ms                 75.34 ms
Macbook Pro (CPU)   91.77 ms    78.22 ms           23.69 ms          15.18 ms                 17.63 ms
Titan X             10.17 ms    10.74 ms           6.57 ms           7.68 ms                  3.29 ms

We measure the energy cost of each compared model using the PowerTutor Android app (Zhang et al. (2010)) on a Samsung Galaxy S5. The original GoogLeNet consumes almost 1 Joule per image, while GoogLeNet-Merge consumes only 447 mJ. Applying Tucker decomposition further reduces the energy cost to only one quarter, at 226 mJ.

When deploying to mobile devices, we remove the loss1 and loss2 branches from the trained models, so the storage cost of each model is reduced by 24.33 MB. GoogLeNet-Merge, which achieves significant speed-up, does not save much storage cost compared to the original GoogLeNet model. However, for modern mobile devices, storage is not a scarce resource (e.g., the Samsung Galaxy S5 has 16 GB or 32 GB of storage), so a 20 MB deep learning model is "affordable" on mobile devices. Meanwhile, we can always apply the tensor weight compression method to further reduce storage cost.

Table 5: GoogLeNet execution: storage vs. energy vs. runtime memory cost

Model                    Energy          Storage    Runtime Memory   Max Batch Size on Titan X
GoogLeNet                984 mJ          26.72 MB   33.2 MB          350
GoogLeNet-Tucker         902 mJ          14.38 MB   35.8 MB          323
GoogLeNet-Merge          447 mJ (2.2x)   23.77 MB   13.2 MB          882 (2.52x)
GoogLeNet-Merge-Tucker   226 mJ (4.4x)   11.99 MB   14.8 MB          785 (2.24x)
SqueezeNet               288 mJ          4.72 MB    36.5 MB          321

Another benefit of layer merging is run-time memory saving. The generated GoogLeNet-Merge model reduces the number of layers and consumes only 13.2 MB to process one image. This feature is also very useful for cloud-based deep learning services, which can process a much larger batch in one run. As shown in Table 5, one Titan X GPU can run a batch size of 882 with the GoogLeNet-Merge model, while the original GoogLeNet only allows a batch size of 350. On the other hand, SqueezeNet, though it has much fewer trained parameters, has a much larger run-time memory impact due to the increased number of layers."}, {"section_index": "11", "section_name": "4.2 ALEXNET AND RESNET", "section_text": "To further analyze the generality of the proposed DeepRebirth acceleration framework, besides GoogLeNet we also apply the proposed framework to other popular deep neural structures: AlexNet (Krizhevsky et al.) and ResNet (He et al. (2015)). Note that we did not apply tensor weight compression to those two models, which could further reduce the model forwarding latency.

First, we study the classical AlexNet model. We apply the streamline merging approach to re-generate new layers by merging the first two convolution layers followed by LRN layers. We illustrate the result in Table 6. It indicates that by applying merging to the first two layers, the model forwarding time of AlexNet is reduced from 445 ms to 274 ms on a Samsung Galaxy S5, while the Top-5 accuracy only slightly drops from 80.03% to 79.57%.

Table 6: AlexNet result (accuracy vs. speed vs. energy cost)

Step   Merged Layer(s)          Top-5 Accuracy   Speed-up          Energy Cost
0      N/A                      80.03%           445 ms            688 mJ
1      conv1+norm1 -> conv1     79.99%           343 ms (1.29x)    555 mJ (1.24x)
2      conv2+norm2 -> conv2     79.57%           274 ms (1.63x)    458 mJ (1.51x)

We also apply the acceleration scheme to the state-of-the-art ResNet model. In the experiment, we use the popular 50-layer ResNet-50 model as the baseline. We mainly apply the acceleration framework to the conv1 and res2a layers (res2a has 2 branches; one branch has 1 convolution layer and the other branch has 3 convolution layers). We present the result in Table 7. The time latency on a Samsung Galaxy S5 for the processed layers (i.e., conv1 and res2a) is reduced from 189 ms to 104 ms. Moreover, the run-time memory cost is reduced by 2.21x. The accuracy is only slightly reduced.

Table 7: ResNet (conv1-res2a) result (accuracy vs. speed-up)

Step   Merged Layer(s)      Top-5 Accuracy   Speed-up          Runtime Memory
0      N/A                  92.36%           189 ms            2505 MB
1      conv1                92.13%           162 ms (1.17x)    2113 MB (1.19x)
2      res2a_branch1        92.01%           140 ms (1.35x)    1721 MB (1.46x)
3      res2a_branch2a-2c    91.88%           104 ms (1.82x)    1133 MB (2.21x)"}, {"section_index": "12", "section_name": "5 RELATED WORK", "section_text": "Reducing the model size and accelerating the running speed are two general ways to facilitate the deployment of deep learning models on mobile devices. Many efforts have been spent on reducing the model size. In particular, most works focus on optimizing tensor layers to reduce the model size due to the high redundancy in the learned parameters of the tensor layers of a given deep model. Vanhoucke et al. (2011) proposed a fixed-point implementation with 8-bit integer activations to reduce the number of parameters used in the deep neural network, while Gong et al. (2014) applied vector quantization to compress deep convnets. These approaches, however, mainly focus on compressing the fully connected layer without considering the convolutional layers. To reduce the parameter size, Denton et al. (2014) applied the low-rank approximation approach to compress neural networks with linear structures. Afterwards, hashing functions were utilized by Chen et al. (2015) to reduce model sizes by randomly grouping connection weights. More recently, Han et al. (2016b) proposed to effectively reduce model size and achieve speed-up by a combination of pruning, Huffman coding and quantization. However, the benefits can only be achieved by running the compressed model on a specialized processor (Han et al. (2016a)). In general, reducing the model size can help deployment of deep learning models; this, however, does not necessarily bring significant speed-up for running them. Compared to these works, instead of reducing model size, DeepRebirth provides a generic framework to accelerate running speed that can be applied to different deep learning architectures on different low-level hardware (CPU, GPU, etc.).

In order to improve network running efficiency, some scalable networks have been proposed that balance running speed and accuracy. Rastegari et al. (2016) designed a binary deep learning network (called XNOR-Net) where both the network weights and the input can be binarized for memory and computational savings. However, this network design depresses the accuracy greatly: the top-5 accuracy obtained by this framework is reduced by more than 10% for the ResNet-18 model along with a 2x speed-up. Another popular newly designed small model, SqueezeNet (Iandola et al. (2016)), has become widely used for its much smaller memory cost and increased speed. However, its near-AlexNet accuracy is far below state-of-the-art performance. Compared with these two new networks, our approach has much better accuracy with more significant acceleration.

Springenberg et al. (2014) show that the conv-relu-pool substructure may not be necessary for a neural network architecture. The authors find that max-pooling can simply be replaced by another convolution layer with increased stride without loss in accuracy. Different from this work, DeepRebirth replaces a complete substructure (e.g., conv-relu-pool, conv-relu-LRN-pool) with a single convolution layer, and aims to speed up model execution on a mobile device. In addition, our work fine-tunes a trained network by relearning the merged "rebirth" layers and does not require training from scratch."}, {"section_index": "13", "section_name": "6 CONCLUSION", "section_text": "We have proposed the DeepRebirth acceleration framework, which can speed up neural networks with satisfactory accuracy. Our method operates by re-generating new tensor layers from the non-tensor layers and their neighboring units. Moreover, as a generic method, DeepRebirth is compatible with state-of-the-art deep models like GoogLeNet and ResNet, where most parameter weight compression methods fail. By applying DeepRebirth to different deep learning architectures, we obtain significant speed-ups on different processors, especially on mobile CPUs. This will greatly facilitate the deployment of deep learning models on mobile phones and make it possible to provide smarter and more intelligent services in the new AI tide."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. CoRR, abs/1310.6343, 2013. URL http://arxiv.org/abs/1310.6343.

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015.

Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pp. 1269-1277, 2014.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2010), 2010.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv:1602.07360, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016.

Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on CPUs. 2011.

Lide Zhang, Birjodh Tiwana, Zhiyun Qian, Zhaoguang Wang, Robert P. Dick, Zhuoqing Morley Mao, and Lei Yang. Accurate online power estimation and automatic battery behavior based power model generation for smartphones. In Proceedings of the Eighth IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis (CODES/ISSS '10), pp. 105-114, New York, NY, USA, 2010. ACM."}, {"section_index": "15", "section_name": "Appendices", "section_text": "Figure 4: An illustration of GoogLeNet-Merge's structure in detail."}]
r1VdcHcxx
[{"section_index": "0", "section_name": "RECURRENT BATCH NORMALIZATION", "section_text": "Tim Cooijmans, Nicolas Ballas, Cesar Laurent, Caglar Gulcehre & Aaron Courville\nfirstname.lastname@umontreal.ca\nVe propose a reparameterization of LSTM that brings the benefits of batch noi nalization to recurrent neural networks. Whereas previous works only apply batc ormalization to the input-to-hidden transformation of RNNs, we demonstrate tha is both possible and beneficial to batch-normalize the hidden-to-hidden trans. ion, thereby reducing internal covariate shift between time steps. Ve evaluate our proposal on various sequential problems such as sequence class cation, language modeling and question answering. Our empirical results sho nat our batch-normalized LSTM consistently leads to faster convergence and inr roved generalization."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recurrent neural network architectures such as LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014) have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition Amodei et al. (2015), machine transla tion (Bahdanau et al., 2015) and image and video captioning (Xu et al., 2015; Yao et al., 2015) Top-performing models, however, are based on very high-capacity networks that are computation ally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study (Pascanu et al., 2012; Martens & Sutskever, 2011; Ollivier, 2013).\nIt is well-known that for deep feed-forward neural networks, covariate shift (Shimodaira, 2ooo; Ioffe & Szegedy, 2015) degrades the efficiency of training. Covariate shift is a change in the distribution of the inputs to a model. This occurs continuously during training of feed-forward neural networks where changing the parameters of a layer affects the distribution of the inputs to all layers above it As a result, the upper layers are continually adapting to the shifting input distribution and unable. to learn effectively. This internal covariate shift (Ioffe & Szegedy, 2015) may play an especially. important role in recurrent neural networks, which resemble very deep feed-forward networks..\nBatch normalization (Ioffe & Szegedy, 2015) is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. This effectively decouples each layer's parameters from those of other layers, leading to a better-conditioned optimization problem Indeed, deep neural networks trained with batch normalization converge significantly faster and generalize better.\nAlthough batch normalization has demonstrated significant training speed-ups and generalization. benefits in feed-forward networks, it is proven to be difficult to apply in recurrent architectures (Lau. rent et al., 2016: Amodei et al., 2015). It has found limited use in stacked RNNs, where the nor malization is applied \"vertically', i.e. to the input of each RNN, but not \"horizontally' between. timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most. beneficial when applied horizontally. However, Laurent et al. (2016) hypothesized that applying. 
batch normalization in this way hurts training because of exploding gradients due to repeated rescal-. in g.\nOur findings run counter to this hypothesis. We show that it is both possible and highly beneficial to. apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we. describe a reparameterization of LSTM (Section 3) that involves batch normalization and demon strate that it is easier to optimize and generalizes better. In addition, we empirically analyze the\nLiao & Poggio (2016) simultaneously investigated batch normalization in recurrent neural networks. albeit only for very short sequences (10 steps). Ba et al. (2016) independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar im- provements as our method."}, {"section_index": "2", "section_name": "2.1 LSTM", "section_text": "ht = Q(Wnht-1+ WzXt+ b)\nIn what follows, we focus on the LSTM architecture (Hochreiter & Schmidhuber, 1997) with recur. rent transition given by\nWnht-1+ WxXt+b Ct o(ft) O ct-1 + (it) O tanh(g ht (0t) O tanh(ct),\nThe LSTM differs from simple RNNs in that it has an additional memory cell c whose update is nearly linear which allows the gradient to flow back through time more easily. In addition, unlike the RNN which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate f determines the extent to which information is carried over from the previous timestep, and the input gate it controls the flow of information from the current input xt The output gate ot allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time."}, {"section_index": "3", "section_name": "2.2 BATCH NORMALIZATION", "section_text": "Covariate shift (Shimodaira, 2ooo) is a phenomenon in machine learning where the features pre. sented to a model change in distribution. In order for learning to succeed in the presence of covari ate shift, the model's parameters must be adjusted not just to learn the concept at hand but also tc. adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as.\ngradient backpropagation and show that proper initialization of the batch normalization parameters. is crucial to avoiding vanishing gradient (Section 4). We evaluate our proposal on several sequen-. tial problems and show (Section 5) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance..\nLong Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review briefly in this paper. Given an input sequence X = x1, X2, . . . , XT), an RNN defines a sequence of hidden states ht according to.\nRNNs are popular in sequence modeling thanks to their natural ability to process variable-length. sequences. However, training RNNs using first-order stochastic gradient descent (SGD) is notori- ously difficult due to the well-known problem of exploding/vanishing gradients (Bengio et al., 1994;. Hochreiter, 1991; Pascanu et al., 2012). Gradient vanishing occurs when states h, are not influenced by small changes in much earlier states h, t < t, preventing learning of long-term dependencies in the input data. Although learning long-term dependencies is fundamentally difficult (Bengio et al. 
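For concreteness, here is a minimal NumPy sketch of one LSTM step as defined by the equations above. The gate ordering inside the stacked weight matrices is an assumption, as are the variable names.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_h, W_x, b):
    """One recurrent transition; W_h: (d_h, 4*d_h), W_x: (d_x, 4*d_h), b: (4*d_h,)."""
    pre = h_prev @ W_h + x_t @ W_x + b       # stacked pre-activations
    f, i, o, g = np.split(pre, 4, axis=-1)   # assumed gate ordering
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h_t = sigmoid(o) * np.tanh(c_t)
    return h_t, c_t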
"}, {"section_index": "3", "section_name": "2.2 BATCH NORMALIZATION", "section_text": "Covariate shift (Shimodaira, 2000) is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model's parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as internal covariate shift (Ioffe & Szegedy, 2015).

Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations, due to the computationally costly matrix inversion. The batch normalizing transform is as follows:

BN(h; γ, β) = β + γ ⊙ (h − E[h]) / sqrt(Var[h] + ε)

where h ∈ R^d is the vector of (pre)activations to be normalized, γ ∈ R^d and β ∈ R^d are model parameters that determine the mean and standard deviation of the normalized activation, and ε ∈ R is a regularization hyperparameter. The division should be understood to proceed elementwise.

At training time, the statistics E[h] and Var[h] are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction."}, {"section_index": "4", "section_name": "BATCH-NORMALIZED LSTM", "section_text": "This section introduces a reparameterization of LSTM that takes advantage of batch normalization. Contrary to Laurent et al. (2016); Amodei et al. (2015), we leverage batch normalization in both the input-to-hidden and the hidden-to-hidden transformations. We introduce the batch-normalizing transform BN(·; γ, β) into the LSTM as follows:

(f_t, i_t, o_t, g_t) = BN(W_h h_{t-1}; γ_h, β_h) + BN(W_x x_t; γ_x, β_x) + b
c_t = σ(f_t) ⊙ c_{t-1} + σ(i_t) ⊙ tanh(g_t)
h_t = σ(o_t) ⊙ tanh(BN(c_t; γ_c, β_c))

In our formulation, we normalize the recurrent term W_h h_{t-1} and the input term W_x x_t separately. Normalizing these terms individually gives the model better control over the relative contribution of the terms using the γ_h and γ_x parameters. We set β_h = β_x = 0 to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector b to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient flow through c_t, we do not apply batch normalization in the cell update.

The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. However, we find that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ significantly (see Figure 5 in Appendix A). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations.1

1 Note that we separate only the statistics over time and not the γ and β parameters.

Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure 5). For our experiments we estimate the population statistics separately for each timestep 1, ..., T_max, where T_max is the length of the longest training sequence. When at test time we need to generalize beyond T_max, we use the population statistic of time T_max for all time steps beyond it.

During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set.
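Putting the last two sections together, here is a minimal NumPy sketch of the batch-normalized step on a minibatch, using per-timestep batch statistics and β_h = β_x = 0 as in the text. The bn helper, the ε value, and the variable names are assumptions; sigmoid is as in the previous sketch.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bn(h, gamma, beta, eps=1e-5):
    """Batch-normalizing transform over the minibatch axis 0."""
    mean = h.mean(axis=0)
    var = h.var(axis=0)
    return beta + gamma * (h - mean) / np.sqrt(var + eps)

def bn_lstm_step(x_t, h_prev, c_prev, W_h, W_x, b,
                 gamma_h, gamma_x, gamma_c, beta_c):
    # Recurrent and input terms are normalized separately; beta_h = beta_x = 0.
    pre = bn(h_prev @ W_h, gamma_h, 0.0) + bn(x_t @ W_x, gamma_x, 0.0) + b
    f, i, o, g = np.split(pre, 4, axis=-1)
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # cell update left unnormalized
    h_t = sigmoid(o) * np.tanh(bn(c_t, gamma_c, beta_c))
    return h_t, c_t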
Although batch normalization allows for easy control of the pre-activation variance through the γ parameters, common practice is to normalize to unit variance. We suspect that the previous difficulties with recurrent batch normalization reported in Laurent et al. (2016); Amodei et al. (2015) are largely due to improper initialization of the batch normalization parameters, and γ in particular. In this section we demonstrate the impact of γ on gradient flow.

Figure 1: (a) We visualize the gradient flow through a batch-normalized tanh RNN as a function of γ. High variance causes vanishing gradient. (b) We show the empirical expected derivative and interquartile range of the tanh nonlinearity as a function of input standard deviation. High variance causes saturation.

In Figure 1(a), we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section 5.1. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of γ, the norm quickly goes to zero as the gradient is propagated back in time. For small values of γ the norm is nearly constant.

To demonstrate what we think is the cause of this vanishing, we drew samples x from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative tanh'(x) = 1 − tanh²(x) ∈ [0, 1] for each. Figure 1(b) shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin, where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases, as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1.

We conjecture that this is what causes the gradient to vanish, and recommend initializing γ to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks.
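The saturation argument is easy to reproduce numerically. A small sketch (not from the paper) estimating E[tanh'(x)] for x ~ N(0, σ²):

import numpy as np

rng = np.random.default_rng(0)
for sigma in (0.1, 0.5, 1.0):
    x = rng.normal(0.0, sigma, size=1_000_000)
    expected_deriv = np.mean(1.0 - np.tanh(x) ** 2)
    print(f"sigma={sigma:.1f}  E[tanh'(x)] ~ {expected_deriv:.3f}")
# The expected derivative shrinks as the input standard deviation grows,
# which is why a small gamma keeps gradients flowing back through time.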
"}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters γ and β to 0.1 and 0 respectively."}, {"section_index": "6", "section_name": "5.1 SEQUENTIAL MNIST", "section_text": "We evaluate our batch-normalized LSTM on a sequential version of the MNIST classification task (Le et al., 2015). The model processes each image one pixel at a time and finally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST (pMNIST). In MNIST, the pixels are processed in scanline order. In pMNIST the pixels are processed in a fixed random order.

Our baseline consists of an LSTM with 100 hidden units, with a softmax classifier to produce a prediction from the final hidden state. We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix, which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp (Tieleman & Hinton, 2012) with learning rate of 10^{-3} and 0.9 momentum. We apply gradient clipping at 1 to avoid exploding gradients.

The in-order MNIST task poses a unique problem for our model: the input for the first hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. Normalizing these zero-variance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. Although the normalization amplifies the noise to signal level, we find that it does not hurt performance compared to data-dependent ways of initializing the hidden states.

Figure 2: Accuracy on the validation set for the pixel-by-pixel MNIST classification tasks. The batch-normalized LSTM converges faster relative to the baseline LSTM, and also shows improved generalization on permuted sequential MNIST, which requires preserving long-term memory information.

Model                             MNIST   pMNIST
TANH-RNN (Le et al., 2015)        35.0    35.0
iRNN (Le et al., 2015)            97.0    82.0
uRNN (Arjovsky et al., 2015)      95.1    91.4
sTANH-RNN (Zhang et al., 2016)    98.1    94.0
LSTM (ours)                       98.9    90.2
BN-LSTM (ours)                    99.0    95.4

Table 1: Accuracy obtained on the test set for the pixel-by-pixel MNIST classification tasks.

In Figure 2 we show the validation accuracy while training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes significantly better on pMNIST. It has been highlighted in Arjovsky et al. (2015) that pMNIST contains many longer-term dependencies across pixels than the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies.

Table 1 reports the test set accuracy of the early-stopped model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for pMNIST, where models have to leverage long-term temporal dependencies. In addition, Table 1 shows that our batch-normalized LSTM achieves state of the art on both MNIST and pMNIST.
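A minimal sketch, under our own assumptions about shapes and noise scale, of the initialization choices described above: orthogonal weight matrices, a hidden-to-hidden matrix started at the identity, and noisy initial states to avoid the zero-variance transient.

```python
import numpy as np

def orthogonal(shape, rng):
    # Orthogonal initialization via QR decomposition of a Gaussian matrix.
    n = max(shape)
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q[:shape[0], :shape[1]]

rng = np.random.default_rng(0)
n_in, n_hid, batch = 1, 100, 64               # one pixel per timestep

W_x = orthogonal((n_in, 4 * n_hid), rng)      # input-to-hidden: orthogonal
W_h = np.tile(np.eye(n_hid), (1, 4))          # hidden-to-hidden: identity
                                              # (identity per gate block is our reading)

# Noisy initial states keep the long run of all-black pixels from producing
# exactly zero-variance activations under batch normalization; the scale
# used here is an assumption.
h0 = rng.normal(scale=0.1, size=(batch, n_hid))
c0 = rng.normal(scale=0.1, size=(batch, n_hid))
```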
We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus (Marcus et al., 1993), according to the train/valid/test partition of Mikolov et al. (2012). For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead.

Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classifier on the hidden state h_t. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in Section 3.

We show the learning curves in Figure 3(a). BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure 3(b) shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic (cf. Section 3) is a viable strategy. In Table 2 we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow-up works have since improved the state of the art (Krueger et al., 2016; Chung et al., 2016; Ha et al., 2016).

Model                                               Penn Treebank
LSTM (Graves, 2013)                                 1.262
HF-MRNN (Mikolov et al., 2012)                      1.41
Norm-stabilized LSTM (Krueger & Memisevic, 2016)    1.39
ME n-gram (Mikolov et al., 2012)                    1.37
LSTM (ours)                                         1.38
BN-LSTM (ours)                                      1.32
Zoneout (Krueger et al., 2016)                      1.27
HM-LSTM (Chung et al., 2016)                        1.24
HyperNetworks (Ha et al., 2016)                     1.22

Table 2: Bits-per-character on the Penn Treebank test sequence.
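A small sketch (ours) of the epoch-level segmentation described above: since the training sequence does not divide evenly by 100, each epoch takes a random crop whose length does, and segments that instead.

```python
import numpy as np

def segment_epoch(sequence, length=100, rng=None):
    # sequence: 1-D NumPy array of character ids; returns (n_examples, length).
    rng = rng or np.random.default_rng()
    n_examples = len(sequence) // length
    usable = n_examples * length
    start = rng.integers(0, len(sequence) - usable + 1)  # random crop offset
    crop = sequence[start:start + usable]
    return crop.reshape(n_examples, length)
```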
We evaluate our model on a second character-level language modeling task on the much larger text8 dataset (Mahoney, 2009). This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. We follow Mikolov et al. (2012); Zhang et al. (2016) and use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on nonoverlapping sequences of length 180.

Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classifier on the hidden state h_t. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.001. All weight matrices were initialized to be orthogonal.

We early-stop on validation performance and report the test performance of the resulting model in Table 3. We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. Chung et al. (2016) have since improved on our performance.

Model                                        text8
td-LSTM (Zhang et al., 2016)                 1.63
HF-MRNN (Mikolov et al., 2012)               1.54
skipping RNN (Pachitariu & Sahani, 2013)     1.48
LSTM (ours)                                  1.43
BN-LSTM (ours)                               1.36
HM-LSTM (Chung et al., 2016)                 1.29

Table 3: Bits-per-character on the text8 test sequence.

Figure 3: (a) Performance in bits-per-character on length-100 subsequences of the Penn Treebank validation sequence during training. (b) Generalization to longer subsequences of Penn Treebank using population statistics. The subsequences are taken from the test sequence."}, {"section_index": "7", "section_name": "5.4 TEACHING MACHINES TO READ AND COMPREHEND", "section_text": "Recently, Hermann et al. (2015) introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular.

To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training.

We evaluate several variants. The first variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization. The second variant, termed BN-everywhere, is exactly like the first, except that we also introduce batch normalization into the attention computations, normalizing each term going into the tanh nonlinearities.

Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variable-length sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of x_t toward zero. We address this effect using sequencewise normalization of the inputs as proposed by Laurent et al. (2016); Amodei et al. (2015). That is, we share statistics over time for normalization of the input terms W_x x_t, but not for the recurrent terms W_h h_t or the cell output c_t. Doing so avoids many issues involving degenerate statistics due to input sequence padding.
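A sketch of sequencewise input normalization under our own masking assumption: statistics for the input terms are pooled over both time and batch, with padded positions excluded so that zero-padding does not bias the mean and variance toward zero. The exact treatment of padding in the statistics is not specified above, so the mask handling here is illustrative.

```python
import numpy as np

def sequencewise_bn(x, mask, gamma, eps=1e-3):
    # x: (time, batch, features) input terms W_x x_t; mask: (time, batch),
    # 1.0 for real tokens and 0.0 for padding.
    m = mask[..., None]
    count = m.sum()
    mean = (x * m).sum(axis=(0, 1)) / count
    var = (((x - mean) ** 2) * m).sum(axis=(0, 1)) / count
    # One set of statistics shared across all timesteps (sequencewise);
    # gamma rescales, and the LSTM bias vector supplies the shift.
    return gamma * (x - mean) / np.sqrt(var + eps) * m
```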
Our fourth and final variant, BN-e**, is like BN-e* but bidirectional. The main difficulty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However, to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section 5.1): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place.

Figure 4: Training curves on the CNN question-answering tasks. (a) Error rate on the validation set for the Attentive Reader models on a variant of the CNN QA task (Hermann et al., 2015). As detailed in Appendix C, the theoretical lower bound on the error rate on this task is 43%. (b) Error rate on the validation set on the full CNN QA task from Hermann et al. (2015).

Figure 4(a) shows the learning curves for the different variants of the Attentive Reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a significant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization benefit over the baseline. The validation curves have minima of 50.3%, 49.5% and 50.0% for the baseline, BN-LSTM and BN-everywhere respectively. We emphasize that these results were obtained without any tweaking: all we did was to introduce batch normalization.

BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1% and 43.9% respectively.

We train and evaluate our best model, BN-e**, on the full task from Hermann et al. (2015). On this dataset we had to reduce the number of hidden units to 120 to avoid severe overfitting. Training curves for BN-e** and a vanilla LSTM are shown in Figure 4(b). Table 4 reports performances of the early-stopped models.

Model                                     CNN valid   CNN test
Attentive Reader (Hermann et al., 2015)   38.4        37.0
LSTM (ours)                               45.5        45.0
BN-e** (ours)                             37.9        36.3

Table 4: Error rates on the CNN question-answering task (Hermann et al., 2015).

Contrary to previous findings by Laurent et al. (2016); Amodei et al. (2015), we have demonstrated that batch-normalizing the hidden states of recurrent neural networks greatly improves optimization. Indeed, doing so yields benefits similar to those of batch normalization in feed-forward neural networks: our proposed BN-LSTM trains faster and generalizes better on a variety of tasks including language modeling and question-answering. We have argued that proper initialization of the batch normalization parameters is crucial, and suggest that previous difficulties (Laurent et al., 2016; Amodei et al., 2015) were due in large part to improper initialization. Finally, we have shown our model to apply to complex settings involving variable-length data, bidirectionality and highly nonlinear attention mechanisms."}, {"section_index": "8", "section_name": "ACKNOWLEDGEMENTS", "section_text": "The authors would like to acknowledge the following agencies for research funding and computing support: the Nuance Foundation, Samsung, NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs and CIFAR. Experiments were carried out using the Theano (Theano Development Team, 2016) and the Blocks and Fuel (van Merrienboer et al., 2015) libraries for scientific computing. We thank David Krueger, Saizheng Zhang, Ishmael Belghazi and Yoshua Bengio for discussions and suggestions.

D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 1994.

K. Cho, B. Van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv:1609.01704, 2016.

A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.

David Ha, Andrew Dai, and Quoc V. Le. HyperNetworks. arXiv:1609.09106, 2016.

K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.

S. Hochreiter.
Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, 1991.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.

D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.

D. Krueger and R. Memisevic. Regularizing RNNs by stabilizing activations. ICLR, 2016.

David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, and Aaron Courville. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv:1606.01305, 2016.

C. Laurent, G. Pereyra, P. Brakel, Y. Zhang, and Y. Bengio. Batch normalized recurrent neural networks. ICASSP, 2016.

Quoc V. Le, N. Jaitly, and G. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv:1504.00941, 2015.

Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv:1604.03640, 2016.

M. Mahoney. Large text compression benchmark. 2009.

M. P. Marcus, M. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 1993.

Yann Ollivier. Persistent contextual neural networks for learning symbolic data sequences. CoRR, abs/1306.0514, 2013.

Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language models: when are they needed? arXiv:1301.5650, 2013.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. arXiv:1211.5063, 2012.

The Theano Development Team et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.

Bart van Merrienboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and Fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619.

K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv:1502.03044, 2015.

L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In ICCV, 2015.

S. Zhang, Y. Wu, T. Che, Z. Lin, R. Memisevic, R. Salakhutdinov, and Y. Bengio. Architectural complexity measures of recurrent neural networks.
arXiv:1602.08210, 2016.

Figure 5: Convergence of population statistics to stationary distributions on the Penn Treebank task, showing the mean and variance of the recurrent term and of the cell state over the first 50 time steps. The horizontal axis denotes RNN time. Each curve corresponds to a single hidden unit. Only a random subset of units is shown. See Section 3 for discussion.

In Section 4 we investigated the effect of the initial γ on gradient flow. To show the practical implications of this, we performed several experiments on the pMNIST and Penn Treebank benchmarks. The resulting performances are shown in Figure 6.

Figure 6: Training curves on pMNIST and Penn Treebank for various initializations of γ.

The pMNIST training curves confirm that higher initial values of γ are detrimental to the optimization of the model. For the Penn Treebank task, however, the effect is gone.

We believe this is explained by the difference in the nature of the two tasks. For pMNIST, the model absorbs the input sequence and only at the end of the sequence does it make a prediction, on which it receives feedback. Learning from this feedback requires propagating the gradient all the way back through the sequence.

In the Penn Treebank task, on the other hand, the model makes a prediction at each timestep. At each step of the backward pass, a fresh learning signal is added to the backpropagated gradient. Essentially, the model is able to get off the ground by picking up short-term dependencies. This fails on pMNIST, which is dominated by long-term dependencies (Arjovsky et al., 2015).

We evaluate the models on the question answering task using the CNN corpus (Hermann et al., 2015), with placeholders for the named entities. We follow a similar preprocessing pipeline as Hermann et al. (2015). During training, we randomly sample the examples with replacement and shuffle the order of the placeholders in each text inside the minibatch. We use a vocabulary of 65829 words.

We deviate from Hermann et al. (2015) in order to save computation: we use only the 4 most relevant sentences from the description, as identified by a string matching procedure. Both the training and validation sets are preprocessed in this way. Due to imprecision, this heuristic sometimes strips the answers from the passage, putting an upper bound of 57% on the validation accuracy that can be achieved.
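The "4 most relevant sentences" filter above is only described as a string matching procedure; the sketch below (ours) fills in one plausible scoring rule, word overlap with the question, purely for illustration.

```python
def top_sentences(description_sentences, query, k=4):
    # Keep the k description sentences with the largest word overlap with
    # the query; the overlap score is our assumption, not the paper's rule.
    q_words = set(query.lower().split())

    def overlap(sentence):
        return len(q_words & set(sentence.lower().split()))

    keep = set(sorted(description_sentences, key=overlap, reverse=True)[:k])
    # Preserve the original order of the kept sentences.
    return [s for s in description_sentences if s in keep]
```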
For the reported performances, the first three models (LSTM, BN-LSTM and BN-everywhere) are trained using the exact same hyperparameters, which were chosen because they work well for the baseline. The hidden state is composed of 240 units. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 10 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 8 × 10^{-5}.

For BN-e* and BN-e**, we use the same hyperparameters except that we set the learning rate to 8 × 10^{-4} and reduce the minibatch size to 40."}, {"section_index": "9", "section_name": "D HYPERPARAMETER SEARCHES", "section_text": "Table 5 reports hyperparameter values that were tried in the experiments.

(a) MNIST and pMNIST
Learning rate: 1e-2, 1e-3, 1e-4
RMSProp momentum: 0.5, 0.9
Hidden state size: 100, 200, 400
Initial γ: 1e-1, 3e-1, 5e-1, 7e-1, 1.0

(b) Penn Treebank
Learning rate: 1e-1, 1e-2, 2e-2, 1e-3
Hidden state size: 800, 1000, 1200, 1500, 2000
Batch size: 32, 64, 100, 128
Initial γ: 1e-1, 3e-1, 5e-1, 7e-1, 1.0

(c) Text8
Learning rate: 1e-1, 1e-2, 1e-3
Hidden state size: 500, 1000, 2000, 4000

(d) Attentive Reader
Learning rate: 8e-3, 8e-4, 8e-5, 8e-6
Hidden state size: 60, 120, 240, 280

Table 5: Hyperparameter values that have been explored in the experiments.

For MNIST and pMNIST, the hyperparameters were varied independently. For Penn Treebank, we performed a full grid search on learning rate and hidden state size, and later performed a sensitivity analysis on the batch size and initial γ. For the text8 task and the experiments with the Attentive Reader, we carried out a grid search on the learning rate and hidden state size.

The same values were tried for both the baseline and our BN-LSTM. In each case, our reported results are those of the model with the best validation performance."}]
SkyQWDcex
[{"section_index": "0", "section_name": "A CONTEXT-AWARE ATTENTION NETWORK FOR INTERACTIVE OUESTION ANSWERING", "section_text": "Huayu Li1 *, Martin Renqiang Min?, Yong Ge3, Asim Kadav?\nhli38@uncc.edu, renqiang@nec-labs.com, yongge@email.arizona.edu, asim@nec-labs.cor\nWe develop a new model for Interactive Question Answering (IQA), using Gated Recurrent-Unit recurrent networks (GRUs) as encoders for statements and ques tions, and another GRU as a decoder for outputs. Distinct from previous work our approach employs context-dependent word-level attention for more accurate statement representations and question-guided sentence-level attention for better context modeling. Employing these mechanisms, our model accurately under stands when it can output an answer or when it requires generating a supplemen tary question for additional input. When available, user's feedback is encoded and directly applied to update sentence-level attention to infer the answer. Extensive experiments on QA and IQA datasets demonstrate quantitatively the effectiveness of our model with significant improvement over conventional QA models."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The ultimate goal of Question Answering (QA) research is to build intelligent systems capable of naturally communicating with humans, which poses a major challenge for natural language pro-. cessing and machine learning. Inspired by recent success of sequence-to-sequence models with an encoder-decoder framework (Sutskever et al.2014] [Cho et al.[2014), researchers have attempted to apply variants of such models with explicit memory and attention to QA tasks, aiming to move a step further from machine learning to machine reasoning (Sainbayar et al.]2015) Kumar et al.. 2016,[Xiong et al.][2016). Similarly, all these models employ encoders to map statements and ques- tions to fixed-length feature vectors, and a decoder to generate outputs. Empowered by the adoption of memory and attention, they have achieved remarkable success on several challenging datasets,. including the recently acclaimed Facebook bAbI dataset..\nHowever, previous models suffer from the following important limitations. First, they fail to mod context-dependent meaning of words. Different words may have different meanings in different cor exts, which increases the difficulty of extracting the essential semantic logic flow of each sentenc n different paragraphs. Second, many existing models only work in ideal QA settings and fail t address the uncertain situations under which models require additional user input to gather complet information to answer a given question. As shown in Table[1] the example on the left is an ideal Q oroblem. We can clearly understand what the question is and then locate the relevant sentences t generate the answer. However, it is hard to answer the question in the right example, because ther are two types of bedrooms mentioned in the story and we do not know which bedroom the use refers to. These scenarios with incomplete information naturally appear in human conversations and thus, effectively handling them is a key capability of intelligent QA models.\n*Most of this work was done when the author was an intern at NEC Labs America"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "To address the challenges presented above, we propose a Context-aware Attention Network (CAN). to learn fine-grained representations for input sentences, and develop a mechanism to interact with. 
the user for comprehensively understanding a given question. Specifically, we employ two-level attention, applied at the word level and sentence level, to compute representations of all input sentences.

The office is north of the kitchen.        The master bedroom is east of the garden.
The garden is south of the kitchen.        The guest bedroom is east of the office.
Q: What is north of the kitchen?           Q: What is the bedroom east of?
A: Office                                  A: Unknown

Table 1: Two examples of QA problems. Left is an ideal QA example, where the question is very clear. Right is an example with incomplete information, where the question is ambiguous and it is difficult to provide an answer only using the input statements.

The context information extracted from the input story is allowed to influence the attention over each word, and governs the word semantic meaning contributing to a sentence representation. In addition, an interactive mechanism is activated to generate a supplementary question for the user when the model feels that it does not have complete information to answer a given question. The user's feedback is then encoded and exploited to attend over all input sentences to infer the answer. Our proposed model CAN can be viewed as an encoder-decoder approach augmented with two-level attention and an interactive mechanism, rendering our model self-adaptive, as illustrated in Figure 1.

Our contributions in this paper are as follows: (i) We develop a new encoder-decoder model called CAN for question answering with two-level attention. Due to the new attention mechanism, our model avoids the necessity of multiple-hop attention, required by previous QA models, and knows when it can readily output an answer and when it needs additional information. (ii) We augment the encoder-decoder framework for QA with an interactive mechanism for handling user's feedback, which immediately changes sentence-level attention to infer the final answer without additional model training. (iii) We introduce a new dataset based on the bAbI dataset, namely ibAbI, for IQA tasks. (iv) Extensive experiments show that our approach outperforms state-of-the-art models on both QA and IQA datasets. Specifically, our approach achieves 40% improvement over traditional QA models (e.g., MemN2N and DMN+) on IQA datasets.

Figure 1: An example of QA problem using CAN. The input module (with its word attention mechanism and sentence attention mechanism) encodes the story ("The master bedroom is east of the garden. The guest bedroom is east of the office. The guest bedroom is west of the hallway. The bathroom is east of the master bedroom.") into a context representation, the question module encodes the question ("What is the bedroom east of?"), the interactive mechanism produces the supplementary question ("Which bedroom, master one or guest one?"), and the user's feedback ("Master bedroom") lets the decoder output the answer ("Garden")."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Recent work on QA has been heavily influenced by research on various models with attention and/or memory. Most of these models employ an encoder-decoder framework, and have been successfully applied to image classification (Seo et al., 2016), image captioning (Xu et al., 2015; Mnih et al., 2014), machine translation (Cho et al., 2014; Bahdanau et al., 2015; Luong et al., 2015), document classification (Yang et al., 2016), and textual/visual QA (Sainbayar et al., 2015; Yang et al., 2015; Lu et al., 2016; Kumar et al., 2016; Xiong et al., 2016). For textual QA in the form of statements-question-answer triplets, Sainbayar et al. (2015) utilize an external memory module. It maps each input sentence to an input representation space regarded as a memory component.
The output representation is calculated by summarizing over input representations with different attention weights. This single-layer memory can be extended to multi-layer memory by reasoning over the content and the question multiple times. Instead of simply stacking the memory layers, Kumar et al. (2016) have introduced a dynamic memory network (DMN) to update the memory vectors through a modified GRU, in which the gate weight is trained in a supervised fashion. To improve DMN by training without supervision, Xiong et al. (2016) encode input sentences with a bidirectional GRU and then utilize an attention-based GRU to summarize these input sentences. Neural Turing Machine (NTM) (Graves et al., 2014), a model with content- and location-based memory addressing mechanisms, has also been used for QA tasks recently. There is other recent work about QA using external resources (Wu et al., 2015; Fader et al., 2014; Savenkov & Agichtein, 2016; Hermann et al., 2015; Golub & He, 2016), and exploring dialog tasks (Weston, 2016; Bordes & Weston, 2016).

Our model in this paper also addresses textual QA in the form of statements-question-answer triplets, but it differs from prior work in two aspects. First, in our attention network, the word attentions are context-dependent for generating accurate sentence representations, and the sentence attentions are question-guided for generating the context representation. Second, this new attention mechanism helps our model understand when it can readily output an answer and when it can generate a supplementary question for activating the user interaction. Incorporating user's feedback does not require additional model training, and this property makes our model highly self-adaptive.

Gated Recurrent Unit (GRU) (Cho et al., 2014) is the basic building block of our model for IQA. GRU has been widely adopted for many NLP tasks, such as machine translation (Bahdanau et al., 2015) and language modeling (Zaremba et al., 2014). GRU improves Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) by removing the cell component and making each hidden state adaptively capture the dependencies over different time scales using reset and update gates. For each time step t with input x_t and previous hidden state h_{t-1}, we compute the updated hidden state h_t = GRU(h_{t-1}, x_t) by

r_t = \sigma(U_r x_t + W_r h_{t-1} + b_r), \quad z_t = \sigma(U_z x_t + W_z h_{t-1} + b_z),

\tilde{h}_t = \tanh(U_h x_t + W_h (r_t \odot h_{t-1}) + b_h), \quad h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t, \quad (1)

where σ is the sigmoid activation function, ⊙ is an element-wise product, U_r, U_z, U_h ∈ R^{K×D}, W_r, W_z, W_h ∈ R^{K×K}, b_r, b_z, b_h ∈ R^{K×1}, K is the hidden size, and D is the input size.
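A direct NumPy transcription (ours) of the GRU step above, with the stated shapes (inputs of size D, hidden states of size K); the parameter container is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    # p holds U_r, U_z, U_h (K x D), W_r, W_z, W_h (K x K), b_r, b_z, b_h (K,)
    r = sigmoid(p['U_r'] @ x_t + p['W_r'] @ h_prev + p['b_r'])   # reset gate
    z = sigmoid(p['U_z'] @ x_t + p['W_z'] @ h_prev + p['b_z'])   # update gate
    h_tilde = np.tanh(p['U_h'] @ x_t + p['W_h'] @ (r * h_prev) + p['b_h'])
    # Interpolate between the previous state and the candidate state.
    return z * h_prev + (1.0 - z) * h_tilde
```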
In this section, we first illustrate the proposed model CAN (§ 4.1), including the question module (§ 4.2), the input module (§ 4.3) and the answer module (§ 4.4). We then describe each of these modules in detail. Finally, we elaborate the training procedure of CAN (§ 4.5)."}, {"section_index": "4", "section_name": "4.1 FRAMEWORK", "section_text": "Given a story represented by N input sentences (or statements), i.e., (l_1, ..., l_N), and a question q, our goal is to generate an answer a. Each sentence l_t includes a sequence of N_t words, denoted as (w_1^t, ..., w_{N_t}^t), and a question with N_q words is represented as (w_1^q, ..., w_{N_q}^q). Let V denote the size of the dictionary, including the words from each l_t, q and a, and end-of-sentence (EOS) symbols.

The whole framework of our model is shown in Figure 2, consisting of the following three key parts:

• Question Module: The question module encodes the target question into a vector representation.
• Input Module: The input module encodes a set of input sentences into a vector representation.
• Answer Module: The answer module generates an answer based on the outputs of the question and input modules. Unlike traditional QA models, it has two choices, either to output an answer immediately or to interact with the user for further information. Hence, if the model lacks sufficient evidence for answer prediction based on the existing knowledge at the current timestamp, an interactive mechanism is enabled. Specifically, the model generates a supplementary question, and the user needs to provide a feedback, which is utilized to estimate an answer.

Figure 2: The illustration of the proposed model, consisting of a question module, an input module and an answer module. The question module maps the question sentence into a sentence-level space. The input module generates a context representation based on input sentences. The answer module has a binary choice, either to generate an answer immediately or to take an interactive mechanism."}, {"section_index": "5", "section_name": "4.2 QUESTION MODULE", "section_text": "Suppose a question is a sequence of N_q words; we encode each word w_j^q into a K_w-dimensional vector space x_j^q using an embedding matrix W_w ∈ R^{K_w×V}, i.e., x_j^q = W_w[w_j^q], where [w_j^q] is a one-hot vector associated with word w_j^q. The sequence order within a sentence significantly affects each word's semantic meaning due to its dependence on the previous words. Thus, a GRU is employed by taking each word vector x_j^q as input and updating the hidden state g_j^q ∈ R^{K_h×1} as:

g_j^q = GRU_w(g_{j-1}^q, x_j^q), \quad (2)

where the subscript of GRU is used to distinguish other GRUs used in the following sections. The hidden state g_j^q can be regarded as the annotation vector of word w_j^q, incorporating the word order information. We also explored a variety of encoding schemes, such as LSTM and RNN. However, LSTM is prone to overfitting due to its large number of parameters, and RNN has poor performance because of exploding and vanishing gradients (Bengio et al., 1994).

In addition, each word contributes differently to the representation of a question. For example, in the question 'Where is the football?', 'where' and 'football' play a critical role in summarizing this sentence. Therefore, an attention mechanism is introduced to generate a question representation by focusing on the words that are important to the semantic meaning. A positive weight γ_j is placed on each word to indicate the relative importance of its contribution to the representation of the question. Specifically, this weight is measured as the similarity of the corresponding word annotation vector g_j^q and a word-level latent vector v ∈ R^{K_h×1} for the question, which is jointly learned during the training process. The question representation u ∈ R^{K_c×1} is then generated by a weighted summation of the word annotation vectors and their importance weights, where we also use a one-layer MLP to transfer it from sentence-level space into context-level space:

\gamma_j = \mathrm{softmax}(v^\top g_j^q), \quad u = W_{ch} \Big( \sum_{j=1}^{N_q} \gamma_j g_j^q \Big) + b^{(q)}, \quad (3)

where softmax is defined as softmax(x_i) = exp(x_i) / \sum_{i'} exp(x_{i'}), W_{ch} ∈ R^{K_c×K_h}, and b^{(q)} ∈ R^{K_c×1}.
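A small sketch (ours) of the question-side attention in Eqs. (2)-(3): word annotations are scored against the learned vector v, normalized with a softmax, and the weighted sum is projected into context space by the one-layer MLP.

```python
import numpy as np

def question_representation(G, v, W_ch, b_q):
    # G: (N_q, K_h) word annotations from GRU_w; v: (K_h,);
    # W_ch: (K_c, K_h); b_q: (K_c,)
    scores = G @ v
    gamma = np.exp(scores - scores.max())
    gamma /= gamma.sum()                  # softmax over question words
    return W_ch @ (gamma @ G) + b_q       # u, in context-level space
```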
"}, {"section_index": "6", "section_name": "4.3 INPUT MODULE", "section_text": "The input module aims at generating a representation for the input sentences, and includes a sentence encoder and a context encoder. The sentence encoder computes a sentence representation, and the context encoder calculates a representation of the input sentences, both of which are introduced in the following sections."}, {"section_index": "7", "section_name": "4.3.1 SENTENCE ENCODER", "section_text": "For each input sentence l_t, containing a sequence of N_t words (w_1^t, ..., w_{N_t}^t), similar to the question module, each word w_i^t is embedded into the word space x_i^t ∈ R^{K_w×1} with the embedding matrix W_w, and a recurrent neural network is used to capture the context information from the words which have already been generated in the same sentence. Let h_i^t ∈ R^{K_h×1} denote the hidden state, which can be interpreted as the word annotation in the input space. A GRU retrieves each word annotation by taking the word vector as input and relying on the previous hidden state:

h_i^t = GRU_r(h_{i-1}^t, x_i^t). \quad (4)

In Eq. (4), each word annotation vector takes the sequence order into consideration to learn its semantic meaning based on previous information within a sentence through a recurrent neural network. A question answering system is usually given multiple input sentences, which often form a story together. A single word has different meanings in different stories. Learning only the single-sentence context in which a word is located is insufficient to understand the meaning of this word, in particular when the sentence is placed in a story context. In other words, only modeling the sequence of words prior to a word within a sentence may lose some important information, which results in the failure of the generation of the sentence representation. Hence, we take the whole context into account as well, to appropriately characterize each word and understand this sentence's meaning. Suppose s_{t-1} ∈ R^{K_c×1} is the annotation vector of the previous sentence l_{t-1}, which will be introduced in the next section. To incorporate context information generated by previous sentences, we feed the word annotation h_i^t and the previous sentence annotation s_{t-1} through a two-layer MLP, where a context-aware word vector e_i^t ∈ R^{K_c×1} is obtained as follows:

e_i^t = \sigma(W_{ee} \tanh(W_{es} s_{t-1} + W_{eh} h_i^t + b^{(1)}) + b^{(2)}), \quad (5)

where W_{ee}, W_{es} ∈ R^{K_c×K_c} and W_{eh} ∈ R^{K_c×K_h} are weight matrices, and b^{(1)}, b^{(2)} ∈ R^{K_c×1} are the bias terms. It is worth noting that s_{t-1} is dependent on its previous sentence; recursively, this sentence relies on its previous one as well. Hence, our model is able to encode the previous context. In addition, the sentence representation should focus on those words which are able to address the question. Inspired by this intuition, another word-level attention mechanism is introduced to attend to informative words about the question for generating a sentence's representation. As the question representation is utilized to guide the word attention, a positive weight α_i^t associated with each word is computed as the similarity of the question vector u and the corresponding context-aware word vector e_i^t. Then the sentence representation y_t ∈ R^{K_h×1} is generated by aggregating the word annotation vectors with different weights:

\alpha_i^t = \mathrm{softmax}(u^\top e_i^t), \quad y_t = \sum_{i=1}^{N_t} \alpha_i^t h_i^t. \quad (6)"}, {"section_index": "8", "section_name": "4.3.2 CONTEXT ENCODER", "section_text": "Suppose a story is comprised of a sequence of sentences, i.e., (l_1, ..., l_N), each of which is encoded as a K_h-dimensional vector y_t through the sentence encoder. As input sentences have a sequence order, simply using their sentence vectors for context generation cannot effectively capture the entire context of the sequence of sentences. To address this issue, a sentence annotation vector is introduced to capture the previous context and this sentence's own meaning using a GRU. Given the sentence vector y_t and the state s_{t-1} of the previous sentence, we get the annotation vector s_t ∈ R^{K_c×1} as:

s_t = GRU_s(s_{t-1}, y_t). \quad (7)

A GRU can learn a sentence's meaning based on previous context information. However, just relying on a GRU at the sentence level using simple word embedding vectors makes it difficult to learn the precise semantic meaning of each word in the story. Hence, we introduce the context-aware attention mechanism shown in Eq. (5) to properly encode each word for the generation of the sentence representation, which guarantees that each word is reasoned about under the specific context.

Once the sentence annotation vectors (s_1, ..., s_N) are obtained as described above, a sentence-level attention mechanism is enabled to emphasize those sentences that are highly relevant to the question. We can estimate the attention weight β_t with the similarity of the question and the corresponding sentence. Hence, the context representation m is retrieved by summing over all sentence representations associated with the corresponding attention weights, and given by:

\beta_t = \mathrm{softmax}(u^\top s_t), \quad (8)

m = \sum_{t=1}^{N} \beta_t s_t. \quad (9)

Similar to a bidirectional RNN, our model can be extended to use another sentence-level GRU that moves backward through time, beginning from the end of the sequence.
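A minimal sketch (ours) of the question-guided sentence attention in Eqs. (8)-(9), computing the context representation m from the sentence annotations and the question vector.

```python
import numpy as np

def context_representation(S, u):
    # S: (N, K_c) sentence annotations from GRU_s; u: (K_c,) question vector.
    scores = S @ u
    beta = np.exp(scores - scores.max())
    beta /= beta.sum()                     # softmax over sentences (Eq. 8)
    return beta @ S, beta                  # m (Eq. 9) and the attention weights
```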
The answer module utilizes a decoder to generate an answer, and it has two output cases according to the understanding ability of both the question and the context. One is to generate the answer immediately after receiving the context and question information. The other is to generate a supplementary question and then use the user's feedback to predict the answer. This process is taken by an interactive mechanism."}, {"section_index": "9", "section_name": "4.4.1 ANSWER GENERATION", "section_text": "Given the question representation u and the context representation m, another GRU is used as the decoder to generate a sentence as the answer. To fuse u and m together, we sum these vectors rather than concatenating them, to reduce the total number of parameters. Suppose x_{k-1} ∈ R^{K_w×1} is the predicted word vector in the last step; the GRU updates the hidden state z_k ∈ R^{K_h×1} as follows:

z_k = GRU_a(z_{k-1}, [m + u; x_{k-1}]), \quad (10)

where [·;·] denotes concatenation. We require that each sentence ends with a special EOS symbol, including the question mark and period symbol, which enables the model to define a distribution over sentences of all possible lengths.

Output Choices. In practice, the system is not always able to answer a question immediately based on its current knowledge, due to the lack of some crucial information bridging the gap between question and context knowledge, i.e., the incomplete-information issue. Therefore, we allow the decoder to make a binary choice, either to generate an answer immediately, or to enable an interactive mechanism. Specifically, if the model has sufficiently strong evidence for a successful answer prediction based on the well-learned context representation and question representation, the decoder will directly output the answer. Otherwise, the system generates a supplementary question for the user, where an example is shown in Table 2. At this time, the user needs to offer a feedback, which is then encoded to update the sentence-level attentions for answer generation. This procedure is our interactive mechanism.

Table 2: An example of interactive mechanism. "SQ" denotes supplementary question.

The sentence generated by the decoder ends with a special symbol, either a question mark or a period symbol. Hence, this special symbol is utilized to make a decision. In other words, if the EOS symbol is a question mark, the generated sentence is regarded as a supplementary question and the interactive mechanism is enabled; otherwise the generated sentence is the estimated answer and the prediction task is done. In the next section, we will introduce the details of the interactive mechanism."}, {"section_index": "10", "section_name": "4.4.2 INTERACTIVE MECHANISM", "section_text": "The interactive process is summarized as follows: 1) The decoder generates a supplementary question; 2) The user provides a feedback; 3) The feedback is used for answer prediction for the target question. Suppose the feedback contains a sequence of N_f words, denoted as (w_1^f, ..., w_{N_f}^f). Similar to the input module, each word w_d^f is embedded to a vector x_d^f through the embedding matrix W_w. Then the corresponding annotation vector g_d^f ∈ R^{K_h×1} is retrieved via a GRU by taking the embedding vector as input, as follows:

g_d^f = GRU_w(g_{d-1}^f, x_d^f). \quad (11)

Based on the annotation vectors, a representation f ∈ R^{K_h×1} can be obtained by a simple attention mechanism where each word is considered to contribute equally, given by:

f = \frac{1}{N_f} \sum_{d=1}^{N_f} g_d^f. \quad (12)

Our goal is to utilize the feedback representation f to generate an answer for the target question. The provided feedback improves the ability to answer the question by distinguishing the relevance of each input sentence to the question. In other words, input sentences similar to the provided feedback are more likely to address the question. Hence, we refine the attention weight of each sentence shown in Eq. (9) after receiving the user's feedback, given by:

\tilde{\beta}_t = \mathrm{softmax}\big(s_t^\top (W_{rf} f + b^{(f)})\big), \quad (13)

where W_{rf} ∈ R^{K_c×K_h} and b^{(f)} ∈ R^{K_c×1} are the weight matrix and bias vector, respectively. Eq. (13) is a one-layer neural network transferring the feedback representation to the context space. After obtaining the newly learned attention weights, we update the context representation using the soft attention operation shown in Eq. (9). This updated context representation and the question representation will be used as the input for the decoder to generate an answer. Note that to simplify the problem, we allow the decoder to generate at most one supplementary question. In addition, one advantage of using the user's feedback to update the attention weights of input sentences is that we do not need to re-train the encoder once a feedback enters the system.
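A sketch (ours) of the feedback-driven update in Eqs. (12)-(13), assuming the reconstructed form of Eq. (13) above: the averaged feedback representation is projected into context space, re-scores each sentence annotation, and the refreshed weights rebuild m as in Eq. (9).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def refine_context(S, f, W_rf, b_f):
    # S: (N, K_c) sentence annotations; f: (K_h,) averaged feedback vector;
    # W_rf: (K_c, K_h); b_f: (K_c,)
    proj = W_rf @ f + b_f                  # one-layer net into context space
    beta = softmax(S @ proj)               # refined sentence attention (Eq. 13)
    return beta @ S                        # updated context representation m
```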
"}, {"section_index": "11", "section_name": "4.5 TRAINING PROCEDURE", "section_text": "During training, all modules share an embedding matrix. There are three different GRUs employed, for sentence encoding, context encoding, and answer/supplementary-question decoding. In other words, the same GRU is used to encode the question, input sentences and the user's feedback; the second one is applied to generate the context representation; and the third one is used as the decoder. Training can be treated as a supervised classification problem, minimizing the cross-entropy error of the answer sequence and the supplementary question sequence.

In this section, we evaluate our approach against many baseline methods based on various datasets.

Datasets. In this paper, we use two types of datasets to evaluate the performance of our approach. One is a traditional QA dataset, for which we use the Facebook bAbI English 10k dataset (Weston et al., 2015). It contains 20 different types of tasks with emphasis on different forms of reasoning and induction. The second is the newly designed IQA dataset, where we extend bAbI to add interactive QA and denote it as ibAbI. Overall, we generate three ibAbI datasets based on task 1 (single supporting fact), task 4 (two argument relations), and task 7 (counting). Specifically, the former two datasets focus on solving the ambiguous actors/objects problem, and the latter one asks for further information that assists answer prediction. Table 3 shows three examples for our three ibAbI tasks.

In addition, we also mix IQA data and the corresponding QA data together with different IQA ratios, where the IQA ratio R_IQA ranges from 0.3 to 1 (with step 0.1). For example, in task 1, we randomly pick R_IQA × 100 percent of the data from ibAbI task 1, and then randomly select the remaining data from bAbI task 1. R_IQA = 1 indicates that the whole dataset only consists of IQA problems; otherwise (i.e., ranging from 0.3 to 0.9) it consists of both types of QA problems. Overall, we have three tasks for the ibAbI dataset, and eight sub-datasets for each task. In the experiments, 10k examples are used for training and another 1k examples are used for testing.

IQA task 1:
John journeyed to the garden.
Daniel moved to the kitchen.
Q: Where is he?
SQ: Who is he?
FB: Daniel
A: Kitchen

IQA task 4:
The master bedroom is east of the garden.
The guest bedroom is east of the office.
The guest bedroom is west of the hallway.
The bathroom is east of the master bedroom.
Q: What is the bedroom east of?
SQ: Which bedroom, master one or guest one?
FB: Master bedroom
A: Garden

IQA task 7:
John grabbed the bread.
John grabbed the milk.
John grabbed the apple.
Sandra went to the bedroom.
Q: How many special objects is John holding?
SQ: What objects are you referring to?
FB: Milk, bread
A: Two

Table 3: Examples of three different tasks on the generated ibAbI datasets. "Q" indicates the target question. "SQ" is the supplementary question. "FB" refers to user's feedback. "A" is the answer.

Experiment Settings. We train our models using the Adam optimizer (Kingma & Ba, 2014). Xavier initialization is used for all parameters except for word embeddings, which use random uniform initialization ranging from -√3 to √3. The learning rate is set to 0.001. The grid search method is utilized to find optimal parameters, such as batch size and hidden size.
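A tiny sketch (ours) of the initialization scheme in the settings above: Xavier initialization for weight matrices and uniform [-√3, √3] initialization for the word embeddings; all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier(fan_in, fan_out):
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

K_h, K_w, V = 100, 50, 20000              # hypothetical hidden/embedding/vocab sizes
W_r = xavier(K_w, K_h)                    # e.g., a GRU input weight matrix
W_w = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(K_w, V))  # word embeddings
```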
"}, {"section_index": "12", "section_name": "5.2 BASELINE METHODS", "section_text": "To demonstrate the effectiveness of our approach CAN, we compare it with the following models:

• DMN+: Xiong et al. (2016) improve Dynamic Memory Networks (Kumar et al., 2016) by using stronger input and memory modules, where a bidirectional GRU is adopted to generate representations for statements and a neural network is used to update the episodic memory multiple times.
• MemN2N: This is an extension of Memory Networks with weak supervision, as proposed in Sainbayar et al. (2015). Here, an external memory module is used to encode the input statements, and a recurrent attention mechanism is used to read the memory for answer prediction.
• EncDec: We extend the encoder-decoder framework (Cho et al., 2014) to solve QA tasks as a baseline method. Specifically, EncDec uses a GRU to encode statements and questions, the final hidden state is used as the context representation, and another GRU generates the output."}, {"section_index": "13", "section_name": "5.3 PERFORMANCE OF QUESTION ANSWERING", "section_text": "In this section, we evaluate the model's ability for answer prediction based on a traditional QA dataset (i.e., bAbI-10k). For this task, our model (denoted as CAN+QA) does not use the interactive mechanism. As the output answers for this dataset only contain a single word, we adopt test error rate as the evaluation metric. For the DMN+ and MemN2N methods, we select the best performance on the bAbI dataset reported in Xiong et al. (2016). The results of various models across 20 tasks are reported in Table 4. We summarize the main observations as follows:

• Our approach is better than all baseline methods on each individual task. For example, it reduces the error by 4% compared to DMN+ in task 17, and compared to MemN2N, it reduces the error by 18.4% and 4.8% in tasks 17 and 18, respectively. We can achieve a better result primarily because our approach can model the semantic logic flow of statements. Table 5 shows two examples in tasks 17 and 18, where MemN2N predicts incorrectly while CAN+QA makes correct predictions. In these two examples, the semantic logic determines the relationship between two objects mentioned in the question, such as chest and suitcase. In addition, Kumar et al. (2016) have shown that memory networks with multiple hops are better than the one with a single hop. Our strong results illustrate that our approach has more accurate context modeling even without multiple hops.
• EncDec performs the worst amongst all models over all tasks. EncDec concatenates the statements and questions as a single input, resulting in difficulty training the GRU. For example, EncDec is not good on tasks 2 and 3, because these two tasks have longer inputs than other tasks.
• The results of DMN+ and MemN2N are much better than EncDec. It is not surprising that they outperform EncDec, because they are specifically designed for question answering and do not suffer from the problem mentioned above, by treating input sentences separately.
• All models perform poorly on task 16. Xiong et al. (2016) point out that MemN2N with a simple update for memory could achieve a near-perfect error rate of 0.4, while a more complex method will lead to a much worse result. This shows that a sophisticated modeling method makes it difficult to achieve a good performance on certain simple tasks with such limited data. This can be a possible reason for the poor performance of our model on this specific task as well.
In addition, different from MemN2N, we use a GRU to capture the semantic logic flow of input sentences, where the sentence-level attention can weaken the influence of unrelated sentences in a long story. Table 6 shows two examples of our results with long stories. From the attention weights, we can see our model can correctly find the relevant sentences in a long story to address a question.

Task                          CAN+QA   DMN+   MemN2N   EncDec
1 - Single Supporting Fact    0.0      0.0    0.0      52.0
2 - Two Supporting Facts      0.1      0.3    0.3      66.1
3 - Three Supporting Facts    0.2      1.1    2.1      71.9
4 - Two Arg. Relations        0.0      0.0    0.0      29.2
5 - Three Arg. Relations      0.4      0.5    0.8      14.3
6 - Yes/No Questions          0.0      0.0    0.1      31.0
7 - Counting                  0.3      2.4    2.0      23.6
8 - Lists/Sets                0.0      0.0    0.9      28.8
9 - Simple Negation           0.0      0.0    0.3      39.1
10 - Indefinite Knowledge     0.0      0.0    0.0      45.0
11 - Basic Coreference        0.0      0.0    0.1      31.7
12 - Conjunction              0.0      0.0    0.0      35.0
13 - Compound Coref.          0.0      0.0    0.0      8.7
14 - Time Reasoning           0.0      0.2    0.1      67.2
15 - Basic Deduction          0.0      0.0    0.0      62.2
16 - Basic Induction          43.0     45.3   51.8     54.0
17 - Positional Reasoning     0.2      4.2    18.6     43.1
18 - Size Reasoning           0.5      2.1    5.3      9.0
19 - Path Finding             0.0      0.0    2.3      89.6
20 - Agents Motivations       0.0      0.0    0.0      2.3

Table 4: Performance comparison of various models in terms of test error rate (%) on the QA dataset.

Table 5: Examples of bAbI tasks 17 (left) and 18 (right), where our model predicts correct answers while MemN2N makes wrong predictions.

Table 6: Examples of our model's results on QA tasks. Supporting facts are shown in the datasets, which our model does not use during training. "Weight" indicates the attention weight of each sentence. Our model can locate the correct supporting sentences for long stories.

In this section, we evaluate the performance of various models based on the IQA datasets (as described in Section 5.1). For testing, we simulate the interactive procedure by taking the predefined feedback as the user's input for the generated supplementary question, and then generating an answer. All baseline methods do not have an interactive part, so they take both statements and question as input and then estimate an answer. We compare our approach (CAN+IQA) with the baseline methods in terms of test error rate, shown in Table 7. From the results, we draw the following conclusions:

• Our method significantly outperforms all baseline methods. Specifically, we can achieve a 0% test error rate in tasks 1 and 4 with R_IQA = 1.0, while the best result of the baseline methods only gets a 40.5% test error rate. CAN+IQA benefits from more accurate context modeling, which allows it to correctly understand when to output an answer or require additional information. For those QA problems with incomplete information, it is necessary to gather the additional information from users. Random guessing may harm the model's performance, which makes conventional QA models difficult to converge. But our approach uses an interactive procedure to obtain user feedback and allows the model to provide the correct answer.
• For the baseline methods, DMN+ and MemN2N perform similarly and do better than EncDec. Their similar performance (worse than our approach) is due to the limitation that they could not learn the accurate meaning of statements and questions with limited resources, and they have trouble training the models. But they are superior to EncDec, as they treat each input sentence separately instead of modeling very long inputs.

In addition, we also quantitatively evaluate the quality of the supplementary questions generated by our approach; the details can be found in Appendix A.

Methods     R_IQA=1.0  R_IQA=0.9  R_IQA=0.8  R_IQA=0.7  R_IQA=0.6  R_IQA=0.5  R_IQA=0.4  R_IQA=0.3
IQA Task 1
CAN+IQA     0.00       0.00       0.10       0.50       0.60       0.70       2.10       0.40
DMN+        42.2       42.1       33.0       28.9       25.0       19.9       17.3       11.6
MemN2N      40.5       38.9       34.4       30.0       23.9       18.4       16.9       13.9
EncDec      53.6       54.3       52.9       53.5       51.8       50.1       45.1       44.8
IQA Task 4
CAN+IQA     0.00       1.30       0.10       0.60       0.60       1.10       1.40       1.20
DMN+        53.5       56.1       50.4       40.7       34.5       27.4       23.4       16.6
MemN2N      50.4       50.1       41.8       36.1       29.5       25.3       18.7       15.8
EncDec      55.9       54.9       52.5       49.2       45.9       38.9       30.4       24.2
IQA Task 7
CAN+IQA     0.30       2.10       2.50       1.80       2.00       0.70       0.20       0.30
DMN+        54.1       50.3       47.7       42.3       38.1       33.9       27.7       27.6
MemN2N      54.6       52.0       46.3       40.8       36.1       32.4       23.3       19.6
EncDec      55.5       50.9       48.6       44.7       39.1       32.3       31.9       26.6

Table 7: Performance comparison of various models in terms of test error rate (%) based on interactive question answering datasets with different IQA ratios."}, {"section_index": "14", "section_name": "5.5 QUALITATIVE ANALYSIS OF INTERACTIVE MECHANISM", "section_text": "In this section, we qualitatively show the attention weights over input sentences generated by our model on both QA and IQA data. We train our model (CAN+IQA) on task 1 of the ibAbI dataset with R_IQA = 0.9, and randomly select one IQA example from the testing data. Then we do the prediction on this IQA problem. In addition, we change this instance to a QA problem by replacing the question "Where is she?" with "Where is Sandra?", and then do the prediction as well. The prediction results on both QA and IQA problems are shown in Table 8. From the results, we observe the following: 1) The attention that uses the user's feedback focuses on the key relevant sentence, while the attention without feedback only focuses on an unrelated sentence. This happens because utilizing the user's feedback allows the model to understand a question better and locate the relevant input sentences. This illustrates the effectiveness of the interactive mechanism in addressing questions that require additional information. 2) The attention on both problems can finally focus on the relevant sentences, showing the usefulness of our model for solving different types of QA problems.

In this paper, we present a self-adaptive model, CAN, which learns more accurate representations for statements and questions. More importantly, our model is aware of what it knows and what it does not know within the context of the story, and takes an interactive mechanism to answer a question. Hence, our model takes an important step towards having a natural and intelligent conversation
The results show that our approach can attenc the key relevant sentences for both QA and IQA problems.\nwith humans. In the future, we plan to employ more powerful attention mechanisms with explici unknown state modeling and multi-round feedback-guided fine-tuning to make the model fully self aware, self-adaptive, and self-taught. We also plan to expand our results to harder co-reference anc interactive visual QA tasks with uncertainty modeling."}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl learning to align and translate. In ICLR, 2015\nSatanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improvec correlation with human judgments. In ACL workshop, 2005.\nAlex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401 2014.\nKarl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, pp. 1693- 1701, 2015.\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8) 1735-1780, 1997.\nIQA Data Input Sentences Support QA Data Before IM After IM Mary journeyed to the kitchen. 0.00 0.99 0.00 Sandra journeyed to the kitchen. 0.00 0.00 0.00 Mary journeyed to the bedroom. 0.00 0.00 0.00 Sandra moved to the bathroom. 0.00 0.00 0.00 Sandra travelled to the office. yes 0.99 0.00 0.99 Mary journeyed to the garden. 0.00 0.00 0.00 Daniel travelled to the bathroom. 0.00 0.00 0.00 Mary journeyed to the kitchen. 0.00 0.00 0.00 John journeyed to the office. 0.00 0.00 0.00 Mary moved to the bathroom. 0.00 0.00 0.00 Q: Where is Sandra? Q: Where is she? A: Office SQ: Who is she? FB: Sandra A: Office\n0.00 0.00 0.00 Q: Where is Sandra? A: Office.\nQ: Where is Sandra? A: Office\n. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Trans. Neur. Netw., 5(2):157-166, 1994. ISSN 1045-9227.\nAnkit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In ICML, pp. 1378-1387, 2016.\nJiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attentior for visual question answering. CoRR, abs/1606.00061, 2016\nMinh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention based neural machine translation. CoRR, abs/1508.04025, 2015.\nDenis Savenkov and Eugene Agichtein Emory. When a knowledge base is not enough: Question answering over knowledge bases with external text data. In SIGIR, pp. 235-244, 2016.\nPaul Hongsuck Seo, Zhe Lin, Scott Cohen, Xiaohui Shen, and Bohyung Han. Hierarchical attention networks. CoRR, abs/1606.02393. 2016.\nJason Weston. Dialog-based language learning. NIPs. 2016\nKelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov. Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. CoRR, abs/1502.03044. 2015.\nZichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alexander J. Smola. Stacked attention networks for image question answering. CoRR, abs/1511.02274, 2015.\nWojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization CoRR. abs/1409.2329. 
2014.\nSukhbaatar Sainbayar, Szlam Arthur, Weston Jason, and Fergus Rob. End-to-end memory networks In NIPS, pp. 2440-2448, 2015.\nJason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698, 2015.\nZichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. Hierarchical attention networks for document classification. In HLT, pp. 1480-1489, 2016."}, {"section_index": "16", "section_name": "SUPPLEMENTARY OUESTION ANALYSIS", "section_text": "We quantitatively evaluate the quality of supplementary question generated by our model based on. IQA dataset. All baseline methods are designed to only predict an answer, none of them can generate a question. Thus we design another baseline method to generate supplementary question based on EncDec, and denote it as EncDec*. Specifically, in training procedure, EncDec* takes statements. and questions as input. If the supplementary question is available, it is used as output; otherwise. the corresponding answer is viewed as output. Similar to our approach, the EOS symbol is used to. determine whether the generated sentence is question or not, where the question ends with question. mark and the answer ends with period symbol..\nof IQA problems which can be correctly estimated, and AnsAcc : Na is the fraction of remainin\nFrom the results, we can observe that 1) Two models can almost correctly determine whether i is time to output a question or not; 2) Two models are able to generate the correct supplementary questions whose contents exactly match with the ground truth. There is no surprise that EncDec* also performs well in generating question, because it is specifically designed for only outputting questions. Thus, if given enough training data, EncDec* could predict good questions. However the limitation is that it cannot predict a supplementary question and an answer at the same time Different from EncDec*, our approach can accurately know when to output an answer or when to generate a supplementary question.\nTable 9: Performance comparison of the generated supplementary question quality with R1QA a. 0.8. Both two methods achieve 100% under all metrics in all tasks with other different R1QA values\nTo test model's ability to generate supplementary question, we define some following metrics. Sup. pose the number of problems is N, and the number of problems having supplementary question is V all accuracy. In addition, the widely used BLEU (Papineni et al.]2002) and METEROR (Banerjee & Lavie] 2005) are also adopted to evaluate the quality of generated supplementary question. The results of our method and baseline method are presented in Table9\nNs. Then N. = N . is fhe number ot. eAcc is the fraction Oroblems.IelS\nIQA Ratio Method SQueAcc AnsAcc SQueAnsAcc BLEU-1 BLEU-4 METEOR IQA Task 1 CAN+IQA 100% 100% 100% 100% 100% 100% 0.8 EncDec* 100% 99.5% 99.9% 100% 100% 100% IQA Task 4 CAN+IQA 100% 100% 100% 100% 100% 100% 0.8 EncDec* 100% 100% 100% 100% 100% 100% IQA Task 7 CAN+IQA 100% 100% 100% 100% 100% 100% 0.8 EncDec* 100% 100% 100% 100% 100% 100%"}]
SJGPL9Dex
[{"section_index": "0", "section_name": "UNDERSTANDING TRAINABLE SPARSE CODING VIA MATRIX FACTORIZATION", "section_text": "Thomas Moreau"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its ac- celerated version (ISTA, FISTA). These methods are optimal in the class of first-order methods for non-smooth, convex functions. However, they dc not exploit the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA was proposed in Gregor & Le Cun (2010), which showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematica. analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the l ball. Wher this factorization succeeds, we prove that the resulting splitting algorithr enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleratior occur mostly at the beginning of the iterative process, consistent with nu merical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Feature selection is a crucial point in high dimensional data analysis. Different technique have been developed to tackle this problem efficiently, and amongst them sparsity ha emerged as a leading paradigm. In statistics, the LASsO estimator (Tibshirani, 1996 orovides a reliable way to select features and has been extensively studied in the last tw decades (Hastie et al. (2015) and references therein). In machine learning and signal process ing, sparse coding has made its way into several modern architectures, including large scal computer vision (Coates & Ng, 2011) and biologically inspired models (Cadieu & Olshausen 2012). Also, Dictionary learning is a generic unsupervised learning method to perform non linear dimensionality reduction with efficient computational complexity (Mairal et al., 2009) All these techniaues heavily rely on the resolution of l1-regularized least squares.\n1 z*(x) = arg min Fx(z) Ix-DzII2+X||zl|1\nThis problem is convex and can therefore be solved using convex optimization machinery. Proximal splitting methods (Beck & Teboulle, 20o9) alternate between the minimization of the smooth and differentiable part using the gradient information and the minimization of the non-differentiable part using a proximal operator (Combettes & Bauschke, 2011). These methods can also be accelerated by considering a momentum term, as it is done in FISTA\nJoan Bruna\nThe l1-sparse coding problem is defined as solving, for a given input x E Rn and dictionary D E Rnm, the following problem:\n(Beck & Teboulle. 2009: Nesterov. 2005). Coordinate descent (Friedman et al., 2007: Oshe. & Li, 2009) leverages the closed formula that can be derived for optimizing the problem (1. for one coordinate zi given that all the other are fixed. At each step of the algorithm, one. 
coordinate is updated to its optimal value, which yields an inexpensive scheme to perforn. each step. The choice of the coordinate to update at each step is critical for the performance. of the optimization procedure. Least Angle Regression (LARS) (Hesterberg et al., 2008) is. another method that computes the whole LASsO regularization path. These algorithms al. provide an optimization procedure that leverages the local properties of the cost functior. iteratively. They can be shown to be optimal among the class of first-order methods fo. generic convex, non-smooth functions (Bubeck, 2014).\nBut all these results are given in the worst case and do not use the distribution of the. considered problem. One can thus wonder whether a more efficient algorithm to solve (1) exists for a fixed dictionary D and generic input x drawn from a certain input data. distribution. In Gregor & Le Cun (2010), the authors introduced LISTA, a trained version. of ISTA that adapts the parameters of the proximal splitting algorithm to approximate the. solution of the LASSO using a finite number of steps. This method exploits the common. structure of the problem to learn a better transform than the generic ISTA step. As ISTA. is composed of a succession of linear operations and piecewise non linearities, the authors. use the neural network framework and the backpropagation to derive an efficient procedure. solving the LASSO problem. In Sprechmann et al. (2012), the authors extended LISTA. to more generic sparse coding scenarios and showed that adaptive acceleration is possible. under general input distributions and sparsity conditions..\nInspired by the LISTA architecture. our mathematical analysis reveals that adaptive accel eration is related to a specific matrix factorization of the Gram matrix of the dictionary B = DTD as B = A'SA R ,where A is unitary, S is diagonal and the residual is positive semidefinite: R a 0. Our factorization balances between near diagonalization by asking that ||R|| is small and small perturbation of the l1 norm, i.e. ||Az||1 - ||z||1 is small. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys a conver- gence rate with improved constants with respect to the non-adaptive version. Moreover, our analysis also shows that acceleration is mostly possible at the beginning of the iterative process, when the current estimate is far from the optimal solution, which is consistent with numerical experiments. We also show that the existence of this factorization is not only sufficient for acceleration, but also necessary. This is shown by constructing dictionaries whose Gram matrix diagonalizes in a basis that is incoherent with the canonical basis, and verifying that LISTA fails in that case to accelerate with respect to ISTA.\nIn our numerical experiments, we design a specialized version of LISTA called FacNet, with more constrained parameters, which is then used as a tool to show that our theoretical anal ysis captures the acceleration mechanism of LISTA. Our theoretical results can be appliec to FacNet and as LISTA is a generalization of this model, it always performs at least as well showing that the existence of the factorization is a sufficient certificate for acceleration by\nIn this paper, we are interested in the following question: Given a finite computational budget, what is the optimum estimator of the sparse coding? This question belongs to the general topic of computational tradeoffs in statistical inference. 
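Before surveying related work on such tradeoffs, it may help to fix ideas with the generic baseline itself. Below is a minimal NumPy sketch of ISTA for problem (1); the function names and the step size 1/L, with L the largest eigenvalue of B = D^T D, are our choices rather than anything prescribed by the paper.

```python
import numpy as np

def soft_threshold(u, theta):
    # Proximal operator of theta * ||.||_1: shrink each coordinate towards 0.
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def ista(x, D, lmbd, n_iter=100):
    # Minimize 0.5 * ||x - D z||^2 + lmbd * ||z||_1, i.e. problem (1).
    B, y = D.T @ D, D.T @ x
    L = np.linalg.norm(B, 2)   # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - (B @ z - y) / L, lmbd / L)
    return z
```

Each iteration is one multiplication by B followed by a pointwise nonlinearity, which is exactly the structure that LISTA unrolls and reparametrizes.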
Randomized sketches (Alaoui & Mahoney, 2015; Yang et al., 2015) reduce the size of convex problems by projecting expensive kernel operators into random subspaces, and reveal a tradeoff between computa- tional efficiency and statistical accuracy. Agarwal (2012) provides several theoretical results on perfoming inference under various computational constraints, and Chandrasekaran & Jordan (2013) considers a hierarchy of convex relaxations that provide practical tradeoffs between accuracy and computational cost. More recently, Oymak et al. (2015) provides sharp time-data tradeoffs in the context of linear inverse problems, showing the existence of a phase transition between the number of measurements and the convergence rate of the resulting recovery optimization algorithm. Giryes et al. (2016) builds on this result to produce an analysis of LISTA that describes acceleration in conditions where the iterative procedure has linear convergence rate. Finally, Xin et al. (2016) also studies the capabilities of Deep Neural networks at approximating sparse inference. The authors show that unrolled iterations lead to better approximation if one allows the weights to vary at each layer, con- trary to standard splitting algorithms. Whereas their focus is on relaxing the convergence hypothesis of iterative thresholding algorithms, we study a complementary question, namely when is speedup possible, without assuming strongly convex optimization. Their results are consistent with ours, since our analysis also shows that learning shared layer weights is less effective.\nLISTA. Reciprocally, we show that for cases where no acceleration is possible with FacNet the LISTA model also fail to provide acceleration, linking the two speedup mechanisms. This numerical evidence suggest that the existence of our proposed factorization is sufficient and somewhat necessary for LISTA to show good results..\nThe rest of the paper is structured as follows. Section 2 presents our mathematical analysis and proves the convergence of the adaptive algorithm as a function of the quality of the matrix factorization. Finally, Section 3 presents the generic architectures that will enable the usage of such schemes and the numerical experiments, which validate our analysis over a range of different scenarios.\nACCELERATING SPARSE CODING WITH SPARSE MATRIX FACTORIZATIONS\nIn this section we describe our setup for accelerating sparse coding based on the Proximal Splitting method. Let C Rn be the set describing our input data, and D E Rnxm be a dictionary, with m > n. We wish to find fast and accurate approximations of the sparse coding z*(x) of any x E , defined in (1) For simplicity, we denote B = DTD and y = D'x to rewrite (1) as\nz*(x) = arg min Fx(z B(y-z)+Xz1 E(z) G(z)\nFk(z)=E(zk)+(zk-y)'B(z-zk)+Lk|z-zk+X|[z1\nThe computation of zk+1 remains separable by replacing the quadratic form LI by any diagonal form. However, the Gram matrix B = D'D might be poorly approximated via diagonal forms for general dictionaries. Our objective is to accelerate the convergence of this algorithm by finding appropriate factorizations of the matrix B such that\nB~ A'SA. and IAz1 ~ z1\nF(z)=E(zk)+(zk-y)'B(z-zk)+QB(z,zk)\nargminF(z,zk) = A'argmin (zk-y)'BA(u-Azk)+Qs(u,Az A' arg min Qs. u,Azk-S-1AB(zk-\nwhere we use the variable change u = Az. As S is diagonal positive definite, (5) is separabl and can be computed easily, using a linear operation followed by a point-wise non linea soft-thresholding. 
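To make the separable update just described concrete, here is a minimal NumPy sketch of one such iteration, reading the partially garbled display (5) together with the layer definition (18) further below; the function name is ours.

```python
import numpy as np

def rotated_prox_step(z_k, y, B, A, s, lmbd):
    # One step of (5): move to the rotated basis u = A z, where the
    # surrogate's quadratic metric S = diag(s) is diagonal, so the
    # minimization over u is separable and solved coordinate-wise.
    c = A @ z_k - (A @ (B @ z_k - y)) / s   # rotated gradient step scaled by S^{-1}
    u = np.sign(c) * np.maximum(np.abs(c) - lmbd / s, 0.0)  # soft thresholding
    return A.T @ u                          # change of variable back: z_{k+1} = A^T u
```

With A = I and every entry of s equal to ||B||, this reduces to a plain ISTA step, which is the non-adaptive regime discussed in Section 2.3.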
Thus, any couple (A, S) ensures an computationally cheap scheme. Th. question is then how to factorize B using S and A in an optimal manner, that is, such tha. the resulting proximal splitting sequence converges as fast as possible to the sparse codin. solution.\nFor clarity, we will refer to F, as F and to z*(x) as z*. The classic proximal splitting technique finds z* as the limit of sequence (zk)k, obtained by successively constructing a surrogate loss Fk(z) of the form\nwhere A is unitary and S is diagonal positive definite. Given a point zk at iteration k, we can rewrite F(z) as\nFor that purpose, let us define\nSA(z) = Az|1 z1 and R=A'sA-B\n1 F(z,zk) =F(z)+ 'R(z-zk)+8A(z)\nBy imposing that R is a positive semidefinite residual one immediately obtains the following bound.\nProposition 2.1. Suppose that R = A SA- B is positive definite, and define\nF(zk+1)-F(z*) R|[zkz*2+oA(z*)-0A(zk+1)\nProof. By definition of zr+1 and using the fact that R > 0 we have\nwhere the first line results from the definition of zk+1 and the third line makes use of R positiveness.\n8A(z)|=X|I|Az||1-l|z1 A|(A-I)z1 XV2max(l|Az||o,IzIo) l|A-I|L I|z||2\nwhere we have used the Cauchy-Schwartz inequality x[1 < /[x[ox|2 in the last equation In particular, (10) shows that unitary matrices in the neighborhood of I with A - I small have small l1 commutation error dA but can be inappropriate to approximate general B. matrix.\noA(z)-oA(z]< A[zI[1-I[zI1+ XAz[1-I[AzI\nand zl[1-I[z1[z-zT vl[z-z'oz-zll\nWe will now establish convergence results based on the previous factorization. These bounds will inform us on how to best choose the factors Ak and Sk in each iteration.\nThe quantity dA(z) thus measures how invariant the l1 norm is to the unitary operator A whereas R corresponds to the residual of approximating the original Gram matrix B by our factorization ATSA . Given a current estimate zk, we can rewrite\nF(zk+1)-F(z*) < F(zk+1)-F(zk+1,zk)+F(z*,zk)-F(z*) Zk+1-Zk)R(Zk+1-Zk)-0A(Zk+1)+ (z*-zk)R(z*-zk) + 0A(z*) z*-zk)R(z*-zk)+(0A(z*)-8A(zk+\nThis simple bound reveals that to obtain fast approximations to the sparse coding it is. sufficient to find S and A such that |R|| is small and that the l1 commutation term dA. is small. These two conditions will be often in tension: one can always obtain R = 0 by using the Singular Value Decomposition of B = AJSoAo and setting A = Ao and S = So. However, the resulting Ao might introduce large commutation error dAo. Similarly, as the.\nThe commutation error also depends upon the sparsity of z and Az . If both z and Az are sparse then the commutation error is reduced, which can be achieved if A is itself a sparse unitary matrix. Moreover, since\nin z1. An uniform upper bound for this constant is (1+ ||A||1)/m, but it is typically much smaller when z and Az are both sparse. Equation (8) defines an iterative procedure determined by the pairs {(Ax, Sk)}k. The fol lowing theorem uses the previous results to compute an upper bound of the resulting sparse coding estimator.\nTheorem 2.2. Let Ag, Sk be the pair of unitary and diagonal matrices corresponding t iteration k, chosen such that Rk = ATSkAk - B a 0. 
It results that.\n(z* - zo) +2LAo(z1)l[z* - z1L F(zk)- F(z* 2k 2k\nwhere LA(z) denote the local lipschitz constant of SA at z\nLAo(z1) B I[Ro + 2 2\nMore generally, given a current estimate zk, searching for a factorization (Ax, Sk) will im prove the upper bound when"}, {"section_index": "3", "section_name": "2.3 INTERPRETATION", "section_text": "In this section we analyze the consequences of Theorem 2.2 in the design of fast sparse coding approximations, and provide a possible explanation for the behavior observed numerically\n(14) reveals that the optimum matrix factorization in terms of minimizing the upper bounc. depends upon the current scale of the problem, that is, of the distance ||z* - zkll. At the. beginning of the optimization, when [z* - zk is large, the bound (14) makes it easier tc explore the space of factorizations (A, S) with A further away from the identity. Indeed the bound tolerates larger increases in LA(zk+1), which is dominated by.\nk-1 (2LA;(zi+1)||z*-zi+1|2+(z*-zi)(Ri-1- Ri)(z* vith Q = i=1 k-1 B = (i+1)((Zi+1-zi)Ri(Zi+1-zi)+28A(Zi+1)-28A( i=0\n2k\nB Rk+ 2\nWe emphasize that this is not a guarantee of acceleration, since it is based on improving an upper bound. However, it provides a simple picture on the mechanism that makes non-asymptotic acceleration possible.\nLA(Zk+1) < X(V|zk+1||o +Vl|Azk+1 0\n1 This quantity exists as dA is a difference of convex. See proof of ?? in appendices for precisions\nX W(0) We (2) V (a) ISTA - Recurrent Neural Network (b) LISTA - Unfolded network\nFigure 1: Network architecture for ISTA/LISTA. The unfolded version (b) is trainable through backpropagation and permits to approximate the sparse coding solution efficiently\ni.e. the sparsity of both z1 and Ao(z1). On the other hand, when we reach intermediate solutions zk such that z* - zk is small with respect to LA(zk+1), the upper bound is. minimized by choosing factorizations where A is closer and closer to the identity, leading tc the non-adaptive regime of standard ISTA (A = Id)..\nThis is consistent with the numerical experiments, which show that the gains provided by learned sparse coding methods are mostly concentrated in the first iterations. Once the estimates reach a certain energy level, section 3 shows that LISTA enters a steady state in which the convergence rate matches that of standard ISTA.\nThe natural follow-up question is to determine how many layers of adaptive splitting are suflicient before entering the steady regime of convergence. A conservative estimate of this quantity would require an upper bound of ||z* - zk|| from the energy bound F(zk) - F(z*) Since in general F is convex but not strongly convex, such bound does not exist unless one can assume that F is locally strongly convex (for instance for sufficiently small values of F)\n1 -z*)(ASA-B)(z min z*)+OA(z)-0A(Z1,i A,S; AA=I,ASA-B>0 N\nTherefore, adapting the factorization to a particular dataset, as opposed to enforcing it uniformly over a given ball B(z*;R) (where the radius R ensures that the initial value zo E B(z*; R)), will always improve the upper bound (9). Studying the gains resulting from the adaptation to the input distribution will be let for future work"}, {"section_index": "4", "section_name": "3.1 ADAPTIVE OPTIMIZATION NETWORKS ARCHITECTURES", "section_text": "LISTA/LFISTA In Gregor & Le Cun (2010), the authors introduced LISTA, a neura network constructed by considering ISTA as a recurrent neural net. At each step, ISTA. performs the following 2-step procedure :\n1. 
Uk+1 = Zk (Dzk step k of ISTA W g 2. Zk+1 = h(uk+1) where he(u) = sign(u)((u|- 0)\n2The code can be found at https://github.com/tomMoral/AdaptiveOptin\nX W(0) W(2)\nThis section provides numerical arguments to analyse adaptive optimization algorithms and their performances, and relates them to the theoretical properties developed in the previous section. All the experiments were run using Python and Tensorflow. For all the experiments. the training is performed using Adagrad (Duchi et al., 2011). The code to reproduce the figures is available online2\nThis procedure combines a linear operation to compute ux+1 with an element-wise non linearity. It can be summarized as a recurrent neural network, presented in Figure 1a. with tied weights. The autors in Gregor & Le Cun (2010) considered the architecture K network, as presented in Figure 1b. The layers $e are defined as\nZk+1=y(zk):=Ahxs-1 (Azk-S-1A(DDzk- D'x))\nwith S diagonal and A unitary, the parameters of the k-th layer. The parameters obtainec after training such a network with back-propagation can be used with the theory devel oped in Section 2. Up to the last linear operation AI of the network, this network is a. re-parametrization of LISTA in a more constrained parameter space. Thus, LISTA is a. generalization of this proposed network and should have performances at least as good as FacNet, for a fixed number of layers..\nThe optimization can also be performed using backpropagation. To enforce the unitar constraints on A(k), the cost function is modified with a penalty:\nK f(O)=ExFx(K(x))+\nLinear model Finally, it is important to distinguish the performance gain resulting fron choosing a suitable starting point and the acceleration from our model. To highlights th gain obtain by changing the starting point, we considered a linear model with one laye. such that zout = A(o)x. This model is learned using SGD with the convex cost functior. f(A(0)) = ||(I - DA(0))x||? + A|A(0)x|1 . It computes a tradeoff between starting from the sparsest point O and a point with minimal reconstruction error y . Then, we observe the. performance of the classical iteration of ISTA using zout as a stating point instead of O .."}, {"section_index": "5", "section_name": "8.2 SYNTHETIC PROBLEMS WITH KNOWN DISTRIBUTIONS", "section_text": "1...m following a Bernoulli-Gaussian model. The coefficients z = (z1,..., Zm) are constructed with z; = ba, where b; ~ B(p) and a; ~ N(0,oIm) , where p controls the sparsity of the data. The values are set to m=100, n=64 for the dictionary dimension, p = 5/m for the. sparsity level and o=10 for the activation coefficient generation parameters. The sparsity\nZk+1 = $O(zk) := ho(Wgzk + Wex)\nA similar algorithm can be derived from FISTA, the accelerated version of ISTA to obtain LFISTA (see Figure 5 in Appendix A ). The architecture is very similar to LISTA, now. with two memory tapes:\nZk+1 = he(Wqzk + Wmzk-1+ Wex)\nFactorization network Our analysis in Section 2 suggests a refactorization of LISTA in more a structured class of parameters. Following the same basic architecture, and using (5) the network FacNet, K is formed using layers such that:\nwith O = A(K)S(k) )k=1...k the parameters of the K layers and a scaling factor for the regularization. The resulting matrix A(k) is then projected on the Stiefel Manifold using a SVD to obtain final parameters, coherent with the network structure\nGaussian dictionary In order to disentangle the role of dictionary structure from the. role of data distribution structure. 
the minimization problem is tested using a synthetic. generative model with no structure in the weights distribution. First, m atoms d, E Rn are. drawn iid from a multivariate Gaussian with mean 0 and covariance In and the dictionary\n103 ISTA L-ISTA 10 (*z)- (z) onnenn aio) ISTA L-ISTA FacNet 102 10 Linear Linear FacNet 101 0 10 FISTA L-FISTA FISTA * L-FISTA 10 10 10 10 10 10 10 10 10 10 10 100 101 102 100 101 102 # iteration/layers k # iteration/layers k\nFigure 2: Evolution of the cost function F(zk) - F(z*) with the number of layers or the. number of iteration k for different sparsity level. (left) p = 1/2o and (riqht)o = 1/4\n(*z)I -(z) uoooounf gso) 102 ISTA L-ISTA 10 FISTA FacNet 10 10 10 10 10 10 LO. 100 101 102 10 # iteration/layers k m\nFigure 3: Evolution of the cost function F(zk) - F(z*) with the number of layers or the number of iteration k for a problem generated with an adversarial dictionary..\nregularization is set to X=0.01. The batches used for the training are generated with th model at each step and the cost function is evaluated over a fixed test set. not used in th training.\nFigure 2 displays the cost performance for methods ISTA/FISTA/Linear relatively to thei iterations and for methods LISTA/LFISTA/FacNet relatively to the number of layers usec to solve our generated problem. Linear has performances comparable to learned methods. with the first iteration but a gap appears as the number of layers increases, until a point. where it achieves the same performances as non adaptive methods. This highlights that the adaptation is possible in the subsequent layers of the networks, going farther than choosing. a suitable starting point for iterative methods. The first layers permit to achieve a large. gain over the classical optimization strategy, by leveraging the structure of the problem. This appears even with no structure in the sparsity patterns of input data, in accordance. with the results in the previous section. We also observe diminishing returns as the number. of layers increases. This results from the phase transition described in Subsubsection 2.3.1 as the last layers behave as ISTA steps and do not speed up the convergence. The 3 learnec lgorithms are always performing at least as well as their classical counterpart, as it wa. stated in Theorem 2.2. We also explored the effect of the sparsity level in the training anc. learning of adaptive networks. In the denser setting, the arbitrage between the l1-norm anc. the squared error is easier as the solution has a lot of non zero coefficients. Thus in this. setting, the approximate method is more precise than in the very sparse setting where the. approximation must perform a fine selection of the coefficients. But it also yield lower gair. at the beggining as the sparser solution can move faster..\nThere is a small gap between LISTA and FacNet in this setup. This can be explained from the extra constraints on the weights that we impose in the FacNet, which effectively reduce the parameter space by half. Also, we implement the unitary constraints on the matrix A by a soft regularization (see (19)), involving an extra hyper-parameter that also contributes to the small performance gap. 
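For concreteness, the soft unitarity regularization mentioned above and the final SVD-based projection onto the Stiefel manifold can be sketched as follows. The exact form of the penalty term in (19) is not legible in this copy, so the squared Frobenius form below is an assumption; the projection is the standard orthogonal Procrustes solution.

```python
import numpy as np

def unitarity_penalty(A):
    # Soft surrogate for the constraint A^T A = I used during training
    # (assumed Frobenius form of the penalty term in (19)).
    return np.sum((A.T @ A - np.eye(A.shape[1])) ** 2)

def project_unitary(A):
    # Post-training step: project the learned A onto the Stiefel manifold
    # via the SVD; U @ Vt is the closest unitary matrix in Frobenius norm.
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt
```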
In any case, these experiments show that our analysis accounts for most of the acceleration provided by LISTA, as the performance of both methods are similar, up to optimization errors..\nAdversarial dictionary The results from Section 2 show that problems with a gram matrix composed of large eigenvalues associated to non sparse eigenvectors are harder to accelerate. Indeed, it is not possible in this case to find a quasi diagonalization of the matrix B that\n102 ISTA L-ISTA *z) - ( FISTA 10 FacNet 10 (z) uo!onuny gso? 10 102 10 10 105 100 101 102 10 # iteration/layers k m\n10 ISTA L-ISTA 10 ISTA L-ISTA zI 10 *z)1- Linear FacNet 10 Linear FacNet 10 10 FISTA L-FISTA FISTA L-FISTA 10 10 10 10 1.0 10 10 10 10 10 101 102 10 101 10 # iteration/layers k # iteration/layers k (a) Pascal VOC 2008 (b) MNIST\ndoes not distort the li norm. It is possible to generate such a dictionary using Harmoni. Analysis. The Discrete Fourier Transform (DFT) distorts a lot the l1 ball, since a very. sparse vector in the temporal space is transformed in widely spread spectrum in the Fouriel domain. We can thus design a dictionary for which LISTA and FacNet performances shoulc.\nThe resulting performances are reported in Figure 3. The first layer provides a big gain b. changing the starting point of the iterative methods. It realizes an arbitrage of the tradeo between starting from O and starting from y . But the next layers do not yield any extr. gain compared to the original ISTA algorithm. After 4 layers, the cost performance of bot adaptive methods and ISTA are equivalent. It is clear that in this case, FacNet does nc accelerate efficiently the sparse coding, in accordance with our result from Section 2. LIST. also displays poor performances in this setting. This provides further evidence that FacNe and LISTA share the same acceleration mechanism as adversarial dictionaries for FacNe. are also adversarial for LISTA.\nWavelet encoding for natural images A highly structured dictionary composed of trans. lation invariant Haar wavelets is used to encode 8x8 patches of images from the PASCAI. VOC 2008 dataset. The network is used to learn an efficient sparse coder for natural im ages over this family. 500 images are sampled from dataset to train the encoder. Training. batches are obtained by uniformly sampling patches from the training image set to feec. the stochastic optimization of the network. The encoder is then tested with 1oooo patche sampled from 100 new images from the same dataset..\nLearned dictionary for MNIST To evaluate the performance of LISTA for dictionary learning, LISTA was used to encode MNIST images over an unconstrained dictionary learned a priori using classical dictionary learning techniques. The dictionary of 100 atoms was learned from 10000 MNIST images in grayscale rescaled to 17x17 using the implemen tation of Mairal et al. (2009) proposed in scikit-learn, with A = 0.05. Then, the network were trained through backpropagation using all the 60ooo images from the training set o MNIST. Finally, the perfornance of these encoders were evaluated with the 1o0o0 images o the training set of MNIST.\nThe Figure 4 displays the cost performance of the adaptive procedures compared to non-. adaptive algorithms. In both scenario, FacNet has performances comparable to the one of. LISTA and their behavior are in accordance with the theory developed in Section 2. The gains become smaller for each added layer and the initial gain is achieved for dictionary. either structured or unstructured. 
The MNIST case presents a much larger gain compare. to the experiment with natural images. This results from the difference of structure of the input distribution, as the MNIST digits are much more constrained than patches from natural images and the network is able to leverage it to find a better encoder. In the MNIST. case, a network composed of 12 layers is sufficient to achieve performance comparable to ISTA with more than 1000 iterations..\nFigure 4: Evolution of the cost function F(zk) - F(z*) with the number of layers or the number of iteration k for two image datasets.."}, {"section_index": "6", "section_name": "4 CONCLUSIONS", "section_text": "In this paper we studied the problem of finite computational budget approximation of sparse. coding. Inspired by the ability of neural networks to accelerate over splitting methods on the first few iterations, we have studied which properties of the dictionary matrix and the data. distribution lead to such acceleration. Our analysis reveals that one can obtain acceleratior by finding approximate matrix factorizations of the dictionary which nearly diagonalize its. Gram matrix, but whose orthogonal transformations leave approximately invariant the l1. ball. By appropriately balancing these two conditions, we show that the resulting rotated. proximal splitting scheme has an upper bound which improves over the ISTA upper bound. under appropriate sparsity.\nIn order to relate this specific factorization property to the actual LISTA algorithm, we have. ntroduced a reparametrization of the neural network that specifically computes the factor ization, and incidentally provides reduced learning complexity (less parameters) from the. original LISTA. Numerical experiments of Section 3 show that such reparametrization re. covers the same gains as the original neural network, providing evidence that our theoretica. analysis is partially explaining the behavior of the LISTA neural network. Our acceleratior. scheme is inherently transient, in the sense that once the iterates are sufficiently close tc. the optimum, the factorization is not effective anymore. This transient effect is also consis-. tent with the performance observed numerically, although the possibility remains open tc. find alternative models that further exploit the particular structure of the sparse coding. Finally, we provide evidence that successful matrix factorization is not only sufficient but. also necessary for acceleration, by showing that Fourier dictionaries are not accelerated.\nDespite these initial results, a lot remains to be understood on the general question o optimal tradeoffs between computational budget and statistical accuracy. Our analysis sc far did not take into account any probabilistic consideration (e.g. obtain approximations tha hold with high probability or in expectation). Another area of further study is the extensior of our analysis to the FISTA case, and more generally to other inference tasks that ar currently solved via iterative procedures compatible with neural network parametrizations such as inference in Graphical Models using Belief Propagation or other ill-posed inverse problems."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Alekh Agarwal. Computational Trade-offs in Statistical Learning. PhD thesis, University of California, Berkeley, 2012.\nAmir Beck and Marc Teboulle. A Fast Iterative Shrinkage-Thresholding Algorithm fo Linear Inverse Problems. SIAM Journal on Imaqing Sciences. 2(1):183-202. 
2009\nJohn Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12 2121-2159, 2011.\nJerome Friedman, Trevor Hastie. Holger Hofling, and Robert Tibshirani. Pathwise coordi nate optimization. The Annals of Applied Statistics, 1(2):302-332. 2007..\nTim Hesterberg, Nam Hee Choi, Lukas Meier, and Chris Fraley. Least angle and 1 penalizec regression: A review. Statistics Surveys, 2:61-93, 2008.\nRobert Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of the roya statistical society. Series B (methodoloqical), 58(1):267-288, 1996.\nYun Yang. Mert Pilanci, and Martin J Wainwright. Randomized sketches for kernels: Fast and optimal non-parametric regression. preprint, arXiv:1501(06195). 2015"}, {"section_index": "8", "section_name": "A LEARNED FISTA", "section_text": "A similar algorithm can be derived from FISTA, the accelerated version of ISTA to obtain. LFISTA (see Figure 5 ). The architecture is very similar to LISTA, now with two memory. taps: It introduces a momentum term to improve the convergence rate of ISTA as follows.\n1.Yk=Zk 2. Zk+1 Yk 3. tk+ 2\n1. Yk =Zk- 2. Zk+1 Y k\nBy substituting the expression for yk into the first equation, we obtain a generic recurrent architecture very similar to LISTA, now with two memory taps, that we denote by LFISTA\nZk+1 = he(W(k) A -1\nX W(0) W(1) e 1 m m\nFigure 5: Network architecture for LFISTA. This network is trainable through backpropa gation and permits to approximate the sparse coding solution efficiently.\nThis model is equivalent to running K-steps of FISTA when its parameters are initializec with\ntk-1 1 W R tk I 7 K B A m I 1 A D L\n1 W(k) B L W 1(k) B m W\nThe parameters of this new architecture, presented in Figure 5 , are trained analogously as in the LISTA case."}, {"section_index": "9", "section_name": "B PROOFS", "section_text": "Fzk+1)-F(z*)=f(1)-f(O)= f'(t)dt< f'(1)\nOf(1)=(OF(zk+1),Zk+1-\nOF(z) =OF(z,zk)- R(z-zk)-d8Az and 0 E OF(zk+1,Zk\nOF(Zk+1) =-R(zk+1zk)-d8A(Zk+1)\nF(Zk+1)- F(z*) <(z*-Zk+1)'R(zk+1-zk) +(O8A(Zk+1),(z*-Zk+1)\nz*-Zk+1)R(Zk+1-Zk) * _ Zk) R(z*zk)-(z*zk+1)R(z*Zk+1\nX W(0) W(1) W(3) e e 9 A7 m m\nLemma B.1. Suppose that R = ASA- B is positive definite, and define Zk+1 = arg min F(z, zk) , and (20) A(z) =||Az|1 -||z|1. Then we have F(Zk+1)-F(z*) *Zk)R(z*-zk)-(z*-Zk+1)R(z*-Zk+1))+(O0A(2k+1),zk+1-z*) (21) Proof.We define f(t) =F (tzk+1+(1-t)z*), t E [0, 1] . Since F is convex, f is also convex in [0,1]. Since f(0) = F(z*) is the global minimum, it results that f'(t) is increasing in (0, 1], and hence F(zk+1) - F(z*) = f(1) - f(0) = f'(t) dt f'(1) , where f'(1) is any element of af(1). Since dA(z) is a difference of convex functions, its subgradient can be defined as a limit of infimal convolutions Hiriart-Urruty (1991). We haVe 0f(1) =(0F(Zk+1),Zk+1-z*) and since OF(z)=0F(z,zk)-R(z-zk)-dA(z) and 0 E 0F(zk+1,2k) it results that F(Zk+1) =-R(Zk+1- Zk)-d8A(Zk+1) and thus F(Zk+1)-F(z*) (z*-Zk+1)'R(zk+1-zk) +(8A(Zk+1),(z*-Zk+1)). (22) (21) is obtained by observing that (z* - Zk+1) R(Zk+1- Zk) -zk)R(z* -zk) -(z* -Zk+1)R(z*-Zk+1 23 thanks to the fact that R a 0.\nZk+1 = argmin F(z, zk) , and\nF(zk+1)-F(z*) (z*-zk)R(z*-Zk)-(z*-Zk+1)R(z*-Zk+1)+(A(Zk+1),zk+1-z\nf(t)=F tzk+1+(1-t ,te [0,1].\nTheorem B.2. Let Ax, Sk be the pair of unitary and diagonal matrices corresponding to iteration k, chosen such that Rk = ASkAk - B r 0. 
It results that\n1017 with 2k 2K Q =) (2(VAn(zn+1),(z*-zn+1))+(z*-zn)(Rn-1- Rn)(z n=1 k-1 =)`(n+1 Zn+1 - %r )'Rn(zn+1- zn)+28An(Zn+1) - 20 n=0\n(VoAn(zn+1),(z*-zn+1))+ 25 z*zo)'Ro(z*zo)-(z*zk)'Rk-1(z*-Zk (Rn-1 - Rn)(z* - zn)\n(z* z0)Ro(z* - z0) +2(V8Ao(z1),(z* - z1)) O F(zk) - F(z*) 2k 2k\n(2(V8An(2n+1),(z*-zn+1))+(z*-zn)(Rn-1-Rn)(z* 3 = )`(n+1)((zn+1-zn)`Rn(zn+1-zn)+28An(zn+1)-28An(z Proof: The proof is adapted from (Beck & Teboulle, 2009), Theorem 3.1. From Lemma B.1. ve start by using (21) to bound terms of the form F(zn) - F(z*):. F(zn)-F(z*) {V8An(2n+1),(z*-zn+1))+(z*-zn)Rn(z*-zn)-(z*-zn+1)Rn(z*-zn+1 Adding these inequalities for n = 0... k - 1 we obtain k-1 kF(z*) (25) F(zn `(VoAn(zn+1),(z*-zn+1))+ +(z*-zo)Ro(z*-z0)-(z*- zk)Rk-1(z*-zk) +2 (z* - zn)T(Rn-1- Rn)(z*- zn) . On the other hand, we also have F(zn)-F(zn+1) F(zn)-F(zn,zn)+F(zn+1,2n)-F(zn+1) = -oAn(zn)+0An(2n+1)+J(zn+1-zn)`Rn(2n+1-zn), vhich results in 2(n + 1)(n+1 - 2n)Rn(2n+1- 2n) + (n+1)(F(zn)-F(zn+1)) +(n+1)(0An(zn+1)-0An(zn) kF(zk) (n+1) (zn+1-zn)Rn(zn+1-zn)+OAn(2n+1)-0An(2n Combining (25) and (26) we obtain (27) 2k 2k vith (2(V8An(2n+1),(z*-zn+1)+(z*-zn)(Rn-1-Rn)(z* =(n+1) ((zn+1-zn)Rn(zn+1-zn)+20An(zn+1)-2oAn(zn Corollary B.3. If Ak =I, Sk =|B|I for k > 0 then 2k (28) Proof: We verify that in that case, Rn-1 - Rn = 0 and for n > 1 and dAn = 0 for n > 0 .. 13\nwhich results in k- `(n+1)(F(zn)-F(zn+1)) (n+1)(zn+1-zn)'Rn(zn+1-zn)+ (26 n=0 n=0 +(n+1)(0An(2n+1)-8An(zn k-1 kF(zk) (zn+1-zn)TRn(zn+1-zn) +0An(Z2n+1)- 8An(zn (n+1)\nk-] Q =) 2(V8An(zn+1),(z*-zn+1))+(z*-zn)'(Rn-1- Rn)(z* n=1 k-1 =)`(n+1) Zn+1-zn)`Rn(Zn+1-zn)+20An(zn+1)-2oAr n=0"}]
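A note on Corollary B.3 above, whose display (28) is garbled in this copy: with Ak = I and Sk = ||B|| I one has Rk = ||B|| I - B, which is positive semidefinite, and dAk = 0, so Theorem B.2 presumably reduces to the classical ISTA rate (our reconstruction, to be checked against the original):

```latex
F(z_k) - F(z^*) \;\le\; \frac{(z^* - z_0)^\top \left(\|B\|\, I - B\right)(z^* - z_0)}{2k}
\;\le\; \frac{\|B\|\,\|z^* - z_0\|_2^2}{2k}. \tag{28}
```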
HkyYqU9lx
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Neural sequence to sequence transduction became a prominent approach to natural language pro. cessing tasks like morphological inflection generation (Faruqui et al.|2016) and automatic summa- rization (Rush et al.|2015) among others. A common way to improve the vanilla encoder-decoder framework for sequence to sequence tasks is the (soft) attention mechanism (Bahdanau et al.2014) which enables the decoder to attend at specific elements in the encoded sequence, overcoming the issues in encoding very long sequences to a single vector..\nIt was also shown that the attention mechanism effectively learns an alignment between the inpui and the output sequences which is naturally found in the data, and practically learns this alignment. to attend at the relevant elements of the input. However, in many NLP tasks like automatic translit-. eration or morphological inflection generation, the data is roughly monotonically aligned - meaning. the ability of the soft attention model to attend at the entire input sequence may be sub-optimal for such tasks, while also requiring a relatively large amount of training examples which is not always. available (especially for morphologically rich, low resource languages)..\nThere have been several works on neural sequence transduction with monotonic assumptions. One approach is to train an alignment-aware RNN-based transducer which is composed of two indepen- dent RNN's - one over the input sequence and one over the output sequence. The output distribution is computed by feeding a pair of aligned RNN states through an MLP, where the alignment is de- fined by null symbols in the output (Graves 2012) or by a parameterized transition probability (Yu et al.]2016). In both cases training is performed by marginalizing over all possible alignments using a forward-backward procedure. This approach lacks an attention mechanism, as a dependency be- tween the input and output RNN's would make the inference intractable. Other related approaches employ modifications to the soft-attention mechanism, like attending on a fixed sized window over the input sequence (Jaitly et al.]2015) or '\"smoothing\" and \"sharpening\" the soft-attention weight distribution in different manners (Chorowski et al.||2015). These works are motivated by the need to attend over the very long input sequences found in speech recognition. We suggest that for shorter input sequences like the characters of a word in natural language, a simple, hard attention mecha- nism over the elements of a bi-directional encoder may be sufficient.\nMore traditional approaches to monotonic sequence transduction in the NLP literature were hand engineered finite state transducers (FST) (Koskenniemi]1983, Kaplan & Kay1994) which relie on expert knowledge, or weighted finite state transducers (Mohri et al.]1997) Eisner!. 2002) whicl combined expert knowledge with data-driven parameter tuning. While the FST approaches may."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "work well even on small datasets due to their engineered structure, it may be cumbersome to use them while conditioning on the entire output history as it requires a very large set of states, resulting. in a model conditioning only on the last predicted output symbol (Rastogi et al.|2016)..\nWe propose a model which handles the above issues by directly modeling a monotonic alignment between the input and output sequences which is used to perform hard attention. 
The model consists of an encoder-decoder neural network with a dedicated control mechanism: in each step, the decodei is fed with a single attended input state and either writes a symbol to the output sequence or advances the attention pointer to the next input state from the bi-directionally encoded sequence, as described visually in Figure1\nThis modeling suits the natural monotonic alignment between the input and output very well, as the network learns to attend at the relevant inputs before writing the output which they are alignec to. A bi-directional encoder together with the hard attention mechanism enables to condition on the entire input sequence, as each element in the input is represented using a concatenation of a forward LSTM and a backward LSTM over the input sequence. Since each element representation is aware of the entire context, non-monotone relations are also captured, which is important in tasks where segments in the output sequence are a result of long range dependencies in the input sequence. The recurrent nature of the decoder, together with a dedicated feedback connection that passes the lasi prediction to the next decoder step explicitly, enables the model to also condition on the entire output history at each prediction step. The hard attention mechanism allows the network to jointly align and transduce while using a focused representation at each step, rather then the weighted sum of representations used in the soft attention model. A simple training procedure using independently learned alignments enables training the network with correct alignments from the first gradient- based update, using a convenient cross-entropy loss.\nTo evaluate our model, we perform extensive experiments on three previously studied datasets for the morphological inflection generation task, which involves generating a target word (e.g. \"hartestem',. the German word for \"hardest'), given a source word (e.g. \"hart', the German word for \"hard') and the morpho-syntactic attributes of the target (POS=adjective, gender=masculine, type=superlative. etc.). Several studies showed that inflection generation is beneficial for phrase-based machine trans-. lation (Chahuneau et al.|2013) and more recently for neural machine translation (Garcia-Martinez et al.[2016). We show that while our model is on par or better than the previous neural and non-. neural state-of-the-art models on the task, it is also performing significantly better for very small training sets, being the first neural model to surpass the performance of a weighted FST model with latent variables specifically tailored for the task (Dreyer et al. 2o08). Finally, we analyze and com-. pare our model and the soft attention model, showing how they function very similarly with respect. to the alignments and representations they learn, in spite of our model being much simpler.."}, {"section_index": "2", "section_name": "2.1 MOTIVATION", "section_text": "We would like to transduce the input sequence, x1:n E * into the output sequence, y1:m E . where x and y are the input and output vocabularies, respectively. Imagine a machine with read only, random access to the encoding of the input sequence, and a single pointer that determines the current read location. We can then model the sequence transduction as a series of write operation. and pointer movement operations. In the case where the alignment between the sequences is mono. 
tonic, the pointer movement can be controlled by a single \"move one step forward' operation (step which we add to the output vocabulary. We implement this behavior using an encoder-decoder neu ral network, with a control mechanism which determines in each step of the decoder whether it i. time to predict an output symbol or promote the attention pointer the next element of the encodec input.\nIn prediction time, we seek the output sequence y1:m. E *, for which:\nY1:m = arg max p(y' x1:n,\nWhere: x E * is the input sequence and: f = {f1, ..., fm} is a set of features influencing the transduction task (for example, in the inflection generation task these would be the desired morpho- syntactic features of the output sequence). Since we want our model to force a monotonic alignment between the input and the output, we instead look for a sequence of actions: s1:q E *, where: s = y U{step}. This sequence is the step/write action sequence required to go from x1:n to y1:m according to the monotonic alignment between them. In this case we define:\nI1 S1:q = arg max p(s'|x1:n, f) = arg max p(s[S0...S-1, x1:n, J S s' Es'\nn 0 T e t step n step 0 0 step T step e step </W> + 4 <W> step n step 0 step step e step + F + + + 4 + pos=V mood=IMPER num=PL aspect=IPFV <W> n e T b <W>\nn 0 T e + t step n step 0 0 step T step e step </W> <W> step step 0 0 step step e step 4 + + pos=V mood=IMPER num=PL aspect=IPFV <W> n e T b <W>\nFigure 1: The hard attention network architecture. A round tip expresses concatenation of the inputs it receives. The attention is promoted to the next input element once a step action is predicted.\nNotation We use bold letters for vectors and matrices. We treat LSTM as a parameterized functior LSTM(x1...xn.) mapping a sequence of input vectors x1...Xn to a an output vector hn\nEncoder For every element in the input sequence: x1:n = x1...xn, we take the corresponding embedding: ex...exn, where: ex, E RE. These embeddings are parameters of the model which will be learned during training. We then feed the embeddings into a bi-directional LSTM encoder (Graves & Schmidhuber2005) which results in a sequence of vectors: x1:n = x1...Xn, where each concatenation of the forward LSTM and the backward LSTM outputs when fed with ex .\nDecoder Once the input sequence is encoded, we feed the decoder RNN, LSTMdec, with three inputs at each step:\nThose three inputs are concatenated into a single vector z; = [xa,f, yi-1] E R2H+F-m+E, which is fed into the decoder, providing the decoder output vector: LSTMdec(z1...z;) E RH. Finally, to.\nS1:q = arg max NN(x1:n, f,O) S\nWhere the network's parameters O are learned using a set of training examples. We will now describe the network architecture.\n1. The current attended input, xa E R2H, initialized with the first element of the encoded sequence, x1. 2. A set of feature embeddings that influence the generation process, concatenated to a single vector: f = [f1..fm] E RFm. 3. yi-1 E RF, which is the embedding of the predicted output symbol in the previous decoder Step.\nmodel the distribution over the possible actions, we project the decoder output to a vector of elements, followed by a softmax layer:\nControl Mechanism When the most probable action is step, the attention is promoted so xa con. tains the next encoded input representation to be used in the next step of the decoder. This process is demonstrated visually in Figure|1\nFor every example: (x1:n, Y1:m, f) in the training data, we should produce a sequence of step and. 
write actions s1:q to be predicted by the decoder, which is dependent on the alignment between. the input and the output - the network must attend at all the input elements aligned to an output element before writing it. While recent work in sequence transduction advocate jointly training the alignment and the decoding (Bahdanau et al.[2014fYu et al.[2016), we instead show that in our case it is worthwhile to decouple these stages and learn the hard alignment before hand, using it to guide the training of the encoder-decoder network and enabling the use of correct alignments for. the attention mechanism from the beginning of the network training process. For that purpose, we. first run a character level alignment process on the training data. We use the character alignment mode1 of Sudoh et al.[(2013) which is based on a Chinese Restaurant Process which weights single alignments (character-to-character) in proportion to how many times such an alignment has been seen elsewhere out of all possible alignments. Specifically, we use the implementation provided by the organizers of the SIGMORPHON2016 shared task'|Once we have the character level alignment per input-output sequence pair in the training set, we deterministically infer the sequence of actions S1:q that results in the desired output by attending at all the input elements aligned to an output element (using the step action) before writing it. We then train the network to predict this sequence of actions by using a conventional cross entropy loss function per example:.\nWe perform extensive experiments with three previously studied morphological inflection generation datasets to evaluate our hard attention model in various settings. In all experiments we report the results of the best performing neural and non-neural baselines which were previously published on those datasets to our knowledge. The implementation details for the models are available in the supplementary material section of this paper. The source code for the models is available on github2\n3The acronyms stand for: 13SIA=1st/3rd person, singular, indefinite, past;13SKE=1st/3rd person, subjunc tive, present; 2PIE=2nd person, plural, indefinite, present;13PKE=1st/3rd person, plural, subjunctive, present;. 2PKE=2nd person, plural, subjunctive, present; z=infinitive; rP=imperative, plural; pA=past participle..\np(s; = c) = softmax,(R: LSTMdec(z1...z) + b\nL(x1:n, Y1:m, f, O) = - ) log softmax,(R . LSTMdec(z1...z) + b Sj ES1:q\nCELEX In order to examine if our model fits the task, we first evaluate it on a very small dataset. to see if it avoids the tendency to overfit on few training examples. For this purpose we report exact match accuracy on the German inflection generation dataset compiled byDreyer et al. (2008) from. the CELEX database (Baayen et al.1993). The dataset includes only 500 training examples for each. of the four inflection types: 13SIA->13SKE, 2PIE->13PKE, 2PKE->z, and rP->pA which we refer to as 13SIA, 2PIE, 2PKE and rP, respectively|3|We compare our model to three competitive baselines that reported results on this dataset: the Morphological Encoder-Decoder (MED) of Kann & Schutze (2016b) which is based on the soft-attention model ofBahdanau et al.(2014), the neural-weighted. FST of Rastogi et al.(2016) which uses stacked bi-directional LSTM's to weigh its arcs (wFsT),. and the model of|Dreyer et al.(2008) which uses a weighted FST with latent-variables structured. particularly for morphological string transduction tasks (LAT). 
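As a concrete illustration of the training-sequence derivation described above, the following sketch converts a monotone character alignment into the oracle step/write sequence s1:q; the list-of-index-pairs alignment format and the function name are our assumptions, not the paper's interface.

```python
def oracle_actions(alignment, output):
    # alignment: list of (input_pos, output_pos) pairs, monotone in both
    # coordinates, as produced by the character aligner; the hard-attention
    # pointer starts on the first input element.
    actions, ptr = [], 0
    for j, symbol in enumerate(output):
        aligned = [i for i, jj in alignment if jj == j]
        while aligned and ptr < max(aligned):
            actions.append('step')   # promote the pointer over aligned inputs
            ptr += 1
        actions.append(symbol)       # write the output symbol itself
    return actions

# e.g. for "hart" -> "hartestem" with the identity alignment on the shared
# prefix, this yields: h step a step r step t e s t e m
```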
Following previous reports on this dataset, we use the same data splits as Dreyer et al. (2008), dividing the data for each inflection type into five folds, each consisting of 500 training, 1000 development and 1000 test examples. We train a separate model for each fold and report exact match accuracy, averaged over the five folds.

Wiktionary  To neutralize the negative effect of very small training sets on the performance of the different learning approaches, we also evaluate our model on the dataset created by Durrett & DeNero (2013), which contains up to 360k training examples per language. It was built by extracting Finnish, German and Spanish inflection tables from Wiktionary, and was used in order to evaluate their system based on string alignments and a semi-CRF sequence classifier with linguistically inspired features. We also used the expansion made by Nicolai et al. (2015) to include French and Dutch inflections as well. Their system also performs an align-and-transduce approach, extracting rules from the aligned training set and applying them at inference time with a proprietary character sequence classifier. In addition to those systems we also compare to the results of the recent neural approaches of Faruqui et al. (2016), which did not use an attention mechanism, and Yu et al. (2016), which coupled the alignment and transduction tasks, requiring a beam search decoding procedure.

SIGMORPHON  As different languages show different morphological phenomena, we also experiment with how our model copes with this variety using the morphological inflection dataset from the SIGMORPHON 2016 shared task (Cotterell et al., 2016). Here the training data consists of ten languages, with five morphological system types (detailed in Table 3): Russian (RU), German (DE), Spanish (ES), Georgian (GE), Finnish (FI), Turkish (TU), Arabic (AR), Navajo (NA), Hungarian (HU) and Maltese (MA), with roughly 12,800 training and 1600 development examples per language. We compare our model to two soft attention baselines on this dataset: MED (Kann & Schutze, 2016a), which was the best participating system in the shared task, and our implementation of the global (soft) attention model presented by Luong et al. (2015).

Table 1: Results over the CELEX dataset

On the low resource setting (CELEX), our model significantly outperforms both the recent neural models of Kann & Schutze (2016b) and Rastogi et al. (2016) and the morphologically aware latent variable model of Dreyer et al. (2008), as detailed in Table 1. It is also, to our knowledge, the first model to surpass the latent variable model in overall accuracy on this dataset. We explain our advantage over the soft attention model by the ability of the hard attention control mechanism to harness the monotonic alignments found in the data, while also conditioning on the entire output history, which was not available in the FST models. Figure 2 plots the train-set and dev-set accuracies of the soft and hard attention models as a function of the training epoch. While both models perform similarly on the train-set (with the soft attention model fitting it slightly faster), the hard attention model performs significantly better on the dev-set. This shows the soft attention model's tendency to overfit on the small dataset, as it has significantly more parameters and modeling power and does not enforce the monotonic assumption of the hard attention model.

Table 2: Results over the Wiktionary datasets

        DE-N   DE-V   ES-V   FI-NA  FI-V   FR-V   NL-V   Avg.
DDN13   88.31  94.76  99.61  92.14  97.23  98.80  90.50  94.47
NCK15   88.6   97.50  99.80  93.00  98.10  99.20  96.10  96.04
FTND16  88.12  97.72  99.81  95.44  97.81  98.82  96.71  96.34
YBB16   87.5   92.11  99.52  95.48  98.10  98.65  95.90  95.32
Hard    88.87  97.35  99.79  95.75  98.07  99.04  97.03  96.55

On the large training set experiments (Wiktionary), our model is the best performing model on German verbs, Finnish nouns/adjectives and Dutch verbs, resulting in the highest reported average accuracy across all the inflection types when compared to the four previous neural and non-neural state-of-the-art baselines, as detailed in Table 2. This shows the robustness of our model also with large amounts of training examples, and the advantage the hard attention mechanism provides over the encoder-decoder approach of Faruqui et al. (2016), which does not employ an attention mechanism. Our model is also significantly more accurate than the model of Yu et al. (2016), showing the advantage in using independently learned alignments to guide the network's attention from the beginning of the training process.

Table 3: Results over the SIGMORPHON 2016 morphological inflection dataset. The text above each language lists the morphological phenomena it includes: circ. = circumfixing, agg. = agglutinative, v.h. = vowel harmony, c.h. = consonant harmony.

      suffixing+stem changes  circ.  suffixing+agg.+v.h.   c.h.   templatic
      RU     DE     ES        GE     FI     TU     HU      NA     AR     MA     Avg.
MED   91.46  95.8   98.84     98.5   95.47  98.93  96.8    91.48  99.3   88.99  95.56
Soft  92.18  96.51  98.88     98.88  96.99  99.37  97.01   95.41  99.3   88.86  96.34
Hard  92.21  96.58  98.92     98.12  95.91  97.99  96.25   93.01  98.77  88.32  95.61

Figure 2: Learning curves for the soft and hard attention models on the first fold of the CELEX dataset.

As can be seen in Table 3, on the SIGMORPHON 2016 dataset our model performs better than both soft-attention baselines for the suffixing+stem-change languages (Russian, German and Spanish), and is slightly less accurate than our implementation of the soft attention model on the rest of the languages; that implementation is now the best performing model on this dataset to our knowledge.

We explain this by looking at the languages from a linguistic typology point of view, as detailed in Cotterell et al. (2016). Since Russian, German and Spanish employ a suffixing morphology with internal stem changes, they are more suitable for monotonic alignment, as the transformations they need to model are the addition of suffixes and changing characters in the stem. The rest of the languages in the dataset employ more context-sensitive morphological phenomena like vowel harmony and consonant harmony, which require modeling long-range dependencies in the input sequence, which better suits the soft attention mechanism.
While our implementation of the soft attention model and MED are very similar model-wise, we hypothesize that our soft attention results are better due to the fact that we trained the model for 100 epochs and picked the best performing model on the development set, while the MED system was trained for a fixed amount of 20 epochs (although trained on both train and development data).

In order to see if the alignments our model predicts fit the monotonic alignment structure found in the data, and whether they are more suitable for the task when compared to the alignments found by the soft attention model, we examined alignment predictions of the two models from the CELEX dataset, depicted in Figure 3. First, we notice that the alignments found by the soft attention model are also monotonic, encouraging our modeling approach for the task. We also notice how our model learns to handle morphological phenomena like deletion, as can be seen on the right part of Figure 3, showing the alignments for the inflection legte->lege. This inflection requires the model to delete the fourth character of the input sequence. We can see the model learned to delete by reading and writing each character until it reaches the t character, which is deleted by performing two consecutive step operations. Another notable morphological transformation is the one-to-many alignment, found in the example on the left, flog->fliege, where the model needs to transform a character in the input, o, to two characters in the output, ie. We can see the model learns to do this by performing two consecutive write operations after the step operation of the relevant character to be replaced. We also notice that in this case the soft attention model performs a slightly different alignment, aligning the character i to o and the character g to the sequence eg, which is not the expected alignment in this case from a linguistic point of view.

Figure 3: A comparison of the alignments as predicted by the soft attention (left) and the hard attention (right) models on examples from the CELEX dataset.

When witnessing the success of the hard and soft attention models in sequence to sequence transduction, the following questions arise: how do the models manage to learn monotonic alignments? Perhaps the network learns to encode the sequential position as part of its encoding of an input element? In an attempt to answer those questions, we performed the following analysis. We took 500 continuous character representations in context from each model, where every representation is a vector in R^200 which is the output of the bi-LSTM encoder of the model. Every vector of this form carries information about a specific character and its context. We perform dimension reduction to reduce those two sets of vectors, each set in R^{500x200}, into R^{500x2} using SVD. We then plot the 2D character-in-context representations and color them in two ways. First, we color the 2D representations by the characters they represent, with a different color for each character in the alphabet (Figure 4).

Figure 4: SVD dimension reduction to 2D of 500 character representations in context from the encoder, for both the soft attention (top) and the hard attention (bottom) models. Colors indicate which character is encoded.
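As a concrete illustration of the dimension-reduction step just described, here is a small numpy sketch (not the original analysis code) of projecting the 500 encoder states onto their top two SVD directions; the random matrix merely stands in for real encoder outputs.

```python
import numpy as np

def project_2d(reps):
    """Project character-in-context encoder states (here 500 x 200) onto
    their top two SVD directions, as in the analysis above (a sketch)."""
    centered = reps - reps.mean(axis=0)            # remove the mean state
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:2].T                     # 500 x 2 coordinates

points = project_2d(np.random.default_rng(1).normal(size=(500, 200)))
# points[:, 0] and points[:, 1] are then scattered and colored by
# character identity (Figure 4) or by sequence position (Figure 5).
```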
In the second plot we color the representations by the location of the character they represent in the input sequence: here blue implies the character is closer to the beginning of the sequence and red implies it is closer to the end (Figure 5).

Figure 5: SVD dimension reduction to 2D of 500 character representations in context from the encoder, for both the soft attention (top) and hard attention (bottom) models. Colors indicate the location of the character.

We can see that while both models tend to cluster similar character representations together (Figure 4), the hard attention model tends to have more dense character clusters. This is explained by looking at the location information in Figure 5: while both models encode the positional information to some extent, this information is much more pronounced in the soft-attention mechanism, where the X dimension correlates very strongly with the position information. It seems that the soft-attention mechanism encourages the encoder to encode positional information in its representation. In contrast, our hard-attention model has other means of obtaining the position information in the decoder using the step actions, and indeed does not encode it as strongly in the continuous representations. This behavior allows it to perform well even with fewer examples, as the location information is represented implicitly during network training using the step actions.

Previous works on neural sequence transduction include the RNN Transducer (Graves, 2012), which uses two independent RNNs over monotonically aligned sequences to compute a probability over the possible output symbols in each step, including a null symbol. The model by Yu et al. (2016) improves this approach by replacing the null symbol with a learned transition probability. Both models are trained using a forward-backward approach, marginalizing over all possible alignments. Our model differs from the above by learning the alignments independently, which enables a dependency between the encoder and decoder: the hard attention mechanism. This provided improved results while also greatly simplifying the model training, enabling learning through a simple cross-entropy loss. Jaitly et al. (2015) proposed the Neural Transducer model for online speech recognition, which is also trained on external alignments, similarly to our approach. They divide the input into blocks of a constant size and perform soft attention separately on each block when predicting the output symbols. Lu et al. (2016) used a combination of an RNN encoder together with a CRF layer to model the dependencies in the output sequence. A line of work on attention-based speech recognition (Chorowski et al., 2015; Bahdanau et al., 2016) proposed two relevant improvements to the vanilla attention mechanism: the first adds location awareness by using the previous attention weights when computing the next ones, and the second prevents the model from attending on too many or too few inputs using "sharpening" and "smoothing" techniques on the soft-attention weight distributions.
"}, {"section_index": "3", "section_name": "7 CONCLUSION", "section_text": "We presented the hard attention model for sequence to sequence transduction of monotonically aligned sequences and evaluated it on the well-studied task of morphological inflection generation. The model employs an explicit alignment model, learned independently at training time, which is used to teach a neural network to perform both alignment and transduction when decoding with a hard attention mechanism. We showed that our model performs better than or on par with more complex soft attention and neural transduction models on various morphological inflection datasets, forming a new state of the art on the CELEX dataset and the Wiktionary dataset and outperforming the best system in the SIGMORPHON 2016 inflection generation task. Future work may include experimenting with different external alignment methods, or applying the model to other tasks which require a monotonic align-and-transduce approach, like abstractive summarization or transliteration.

For the morphological inflection task, previous approaches usually make use of manually constructed finite state transducers (Koskenniemi, 1983; Kaplan & Kay, 1994), which require expert knowledge, or machine learning methods (Yarowsky & Wicentowski, 2000; Dreyer & Eisner, 2011; Durrett & DeNero, 2013; Hulden et al., 2014; Ahlberg et al., 2015; Nicolai et al., 2015) with specific assumptions about the set of possible processes that are needed to create the output sequence, requiring feature engineering. More recently, Faruqui et al. (2016) used encoder-decoder neural networks for the task, encoding the input sequence into a vector and decoding it one character at a time into the output sequence. Kann & Schutze (2016a;b) explored the soft attention model proposed for machine translation (Bahdanau et al., 2014), which gave the best results in the SIGMORPHON 2016 shared task (Cotterell et al., 2016). Another notable contribution is the work on weighting finite state transducers with neural context (Rastogi et al., 2016). There, the arcs of an FST are scored by optimizing a global loss function over all the possible paths in the state graph, while modeling contextual features with bi-directional LSTMs."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.

Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. The SIGMORPHON 2016 shared task - morphological reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON, August 2016.

Markus Dreyer and Jason Eisner. Discovering morphological paradigms from plain text using a dirichlet process mixture model. In EMNLP, pp. 616-627, 2011.

Markus Dreyer, Jason R Smith, and Jason Eisner. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the conference on empirical methods in natural language processing, pp. 1080-1089, 2008.

Jason Eisner. Parameter estimation for probabilistic finite-state transducers. In Proceedings of the 40th annual meeting on Association for Computational Linguistics, pp. 1-8, 2002.

Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. Morphological inflection generation using character sequence to sequence learning. In NAACL HLT 2016, 2016.

Mans Hulden, Markus Forsberg, and Malin Ahlberg.
Semi-supervised learning of morphological paradigms and lexicons. In EACL, pp. 569-578, 2014.

Katharina Kann and Hinrich Schutze. Single-model encoder-decoder with explicit morphological representation for reinflection. In ACL, August 2016a.

Katharina Kann and Hinrich Schutze. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. 2016b.

Ronald M. Kaplan and Martin Kay. Regular models of phonological rule systems. Computational Linguistics, 20(3):331-378, 1994.

Mercedes Garcia-Martinez, Loic Barrault, and Fethi Bougares. Factored neural machine translation. arXiv preprint arXiv:1609.04621, 2016.

Kimmo Koskenniemi. Two-level morphology: A general computational model of word-form recognition and production. Technical report, 1983.

Mehryar Mohri, Fernando Pereira, and Michael Riley. A rational design for a weighted finite-state transducer library. In International Workshop on Implementing Automata, pp. 144-158, 1997.

Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. Weighting finite-state transductions with neural context. In Proc. of NAACL, 2016.

Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.

David Yarowsky and Richard Wicentowski. Minimally supervised morphological analysis by multimodal alignment. In ACL, 2000.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012."}, {"section_index": "5", "section_name": "TRAINING DETAILS, IMPLEMENTATION AND HYPER PARAMETERS", "section_text": "To train our models, we used the train portion of the datasets as-is and evaluated the model which performed best on the development portion of the dataset, without conducting any specific pre-processing steps on the data. We train the models for a maximum of 100 epochs over the training set. To avoid long training times, we trained the model for 20 epochs for datasets larger than 50k examples, and for 5 epochs for datasets larger than 200k examples. The models were implemented using the python bindings of the dynet toolkit.[4] We trained the network by optimizing the expected output sequence likelihood using the cross-entropy loss mentioned in Equation 5. For optimization we used ADADELTA (Zeiler, 2012) without regularization. We updated the weights after every example. We used the dynet toolkit implementation of an LSTM network with two layers, each having 100 entries, in both the encoder and decoder. The character embeddings were also vectors with 100 entries for the CELEX experiments, and with 300 entries for the SIGMORPHON and Wiktionary experiments. The morpho-syntactic attribute embeddings were vectors of 20 entries in all experiments. We did not use beam search while decoding for both the hard and soft attention models, as it is significantly slower and did not show clear improvement in previous experiments we conducted. In all experiments, for both the hard and soft attention models, we report results using an ensemble of 5 models with different random initializations, by using majority voting on the final sequences the models predicted, as reported in Kann & Schutze (2016b). This was done to perform a fair comparison to the models of Kann & Schutze (2016b;a) and Faruqui et al. (2016), which also performed a similar ensembling technique."}]
Bkbc-Vqeg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Automatically discovering words and other elements of linguistic structure from continuous speec has been a longstanding goal in computational linguists, cognitive science, and other speech prc cessing fields. Practically all humans acquire language at a very early age, but this task has prove to be an incredibly difficult problem for computers. While conventional automatic speech recogn tion (ASR) systems have a long history and have recently made great strides thanks to the revival c deep neural networks (DNNs), their reliance on highly supervised training paradigms has essentiall restricted their application to the major languages of the world, accounting for a small fraction of th more than 7,000 human languages spoken worldwide (Lewis et al.2016). The main reason for thi limitation is the fact that these supervised approaches require enormous amounts of very expensiv human transcripts. Moreover, the use of the written word is a convenient but limiting conventior since there are many oral languages which do not even employ a writing system. In constrast, in fants learn to communicate verbally before they are capable of reading and writing - so there is n inherent reason why spoken language systems need to be inseparably tied to text.\nThe key contribution of this paper has two facets. First, we introduce a methodology capable of no only discovering word-like units from continuous speech at the waveform level with no additiona text transcriptions or conventional speech recognition apparatus. Instead, we jointly learn the se mantics of those units via visual associations. Although we evaluate our algorithm on an Englisl corpus, it could conceivably run on any language without requiring any text or associated ASR ca pability. Second, from a computational perspective, our method of speech pattern discovery runs i1 linear time. Previous work has presented algorithms for performing acoustic pattern discovery ii continuous speech (Park & Glass2008] Jansen et al.2010) Jansen & Van Durme2011) withou the use of transcriptions or another modality, but those algorithms are limited in their ability to scale by their inherent O(n2) complexity, since they do an exhaustive comparison of the data against it self. Our method leverages correlated information from a second modality - the visual domain - tc guide the discovery of words and phrases. This enables our method to run in O(n) time, and we demonstrate it scalability by discovering acoustic patterns in over 522 hours of audio data."}, {"section_index": "1", "section_name": "LEARNING WORD-LIKE UNITS FROM JOINT AUDIO- VISUAL ANALYSIS", "section_text": "Given a collection of images and spoken audio captions, we present a method for. discovering word-like acoustic units in the continuous speech signal and ground. ng them to semantically relevant image regions. For example, our model is able to detect spoken instances of the words \"lighthouse\"' within an utterance and as- sociate them with image regions containing lighthouses. We do not use any for of conventional automatic speech recognition, nor do we use any text transcrip. tions or conventional linguistic annotations. 
Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images."}, {"section_index": "2", "section_name": "1.2 PREVIOUS WORK", "section_text": "A sub-field within speech processing that has garnered much attention recently is unsupervised speech pattern discovery. Segmental Dynamic Time Warping (S-DTW) was introduced by Park & Glass (2008), which discovers repetitions of the same words and phrases in a collection of untranscribed acoustic data. Many subsequent efforts extended these ideas (Jansen et al., 2010; Jansen & Van Durme, 2011; Dredze et al., 2010; Harwath et al., 2012; Zhang & Glass, 2009). Alternative approaches based on Bayesian nonparametric modeling (Lee & Glass, 2012; Ondel et al., 2016) employed a generative model to cluster acoustic segments into phoneme-like categories, and related works aimed to segment and cluster either reference or learned phoneme-like tokens into word-like and higher-level units (Johnson, 2008; Goldwater et al., 2009; Lee et al., 2015).

In parallel, the computer vision and NLP communities have begun to leverage deep learning to create multimodal models of images and text. Many works have focused on generating annotations or text captions for images (Socher & Li, 2010; Frome et al., 2013; Socher et al., 2014; Karpathy et al., 2014; Karpathy & Li, 2015; Vinyals et al., 2015; Fang et al., 2015; Johnson et al., 2016). One interesting intersection between word induction from phoneme strings and multimodal modeling of images and text is that of Gelderloos & Chrupala (2016), who use images to segment words within captions at the phoneme string level. Several recent papers have taken these ideas beyond text and attempted to relate images to spoken audio captions directly at the waveform level (Harwath & Glass, 2015; Harwath et al., 2016).

While supervised object detection is a standard problem in the vision community, several recent works have tackled the problem of weakly-supervised or unsupervised object localization (Bergamo et al., 2014; Cho et al., 2015; Zhou et al., 2015; Cinbis et al., 2016). Although the focus of this work is discovering acoustic patterns, in the process we jointly associate the acoustic patterns with clusters of image crops, which we demonstrate capture visual patterns as well.

We employ a corpus of over 200,000 spoken captions for images taken from the Places205 dataset (Zhou et al., 2014), corresponding to over 522 hours of speech data. The captions were collected using Amazon's Mechanical Turk service, in which workers were shown images and asked to describe them verbally in a free-form manner. Our data collection scheme is described in detail in Harwath et al. (2016), but the experiments in this paper leverage nearly twice the amount of data. For training our multimodal neural network as well as the pattern discovery experiments, we use a subset of 214,585 image/caption pairs, and we hold out a set of 1,000 pairs for evaluating the performance of the multimodal network's retrieval ability. Because we lack ground truth text transcripts for the data, we used Google's Speech Recognition public API to generate proxy transcripts which we use when analyzing our system. Note that the ASR was only used for analysis of the results, and was
not involved in any of the learning."}, {"section_index": "3", "section_name": "AUDIO-VISUAL EMBEDDING NEURAL NETWORKS", "section_text": "We first train a deep multimodal embedding network similar in spirit to the one described in Harwath et al. (2016), but with a more sophisticated architecture. The model is trained to map entire image frames and entire spoken captions into a shared embedding space; however, as we will show, the trained network can then be used to localize patterns corresponding to words and phrases within the spectrogram, as well as visual objects within the image, by applying it to small sub-regions of the image and spectrogram. The model is comprised of two branches, one which takes as input images, and the other which takes as input spectrograms. The image network is formed by taking the off-the-shelf VGG 16 layer network (Simonyan & Zisserman, 2014) and replacing the softmax classification layer with a linear transform which maps the 4096-dimensional activations of the second fully connected layer into our 1024-dimensional multimodal embedding space. In our experiments, the weights of this projection layer are trained, but the layers taken from the VGG network below it are kept fixed. The second branch of our network analyzes speech spectrograms as if they were black and white images. Our spectrograms are computed using 40 log Mel filterbanks with a 25ms Hamming window and a 10ms shift. Therefore, the input to this branch always has 1 color channel and is always 40 pixels high (corresponding to the 40 Mel filterbanks), but the width of the spectrogram varies depending upon the duration of the spoken caption, with each pixel corresponding to approximately 10 milliseconds worth of audio. The specific network architecture we use is shown below, where C denotes the number of convolutional channels, W is filter width, H is filter height, and S is pooling stride.

1. Convolution with C=128, W=1, H=40, ReLU
2. Convolution with C=256, W=11, H=1, ReLU, maxpool with W=3, H=1, S=2
3. Convolution with C=512, W=17, H=1, ReLU, maxpool with W=3, H=1, S=2
4. Convolution with C=512, W=17, H=1, ReLU, maxpool with W=3, H=1, S=2
5. Convolution with C=1024, W=17, H=1, ReLU
6. Meanpool over entire caption width followed by L2 normalization

In practice during training, we restrict the caption spectrograms to all be 1024 frames wide (i.e., 10sec of speech) by applying truncation or zero padding; this introduces computational savings and was shown in Harwath et al. (2016) to only slightly degrade the performance. Additionally, both the images and spectrograms are mean normalized before training. The overall multimodal network is formed by tying together the image and audio branches with a layer which takes both of their output vectors and computes an inner product between them, representing the similarity score between a given image/caption pair. We train the network to assign high scores to matching image/caption pairs, and lower scores to mismatched pairs. The objective function and training procedure we use is identical to that described in Harwath et al. (2016), but we briefly describe it here. Within a minibatch of B matched pairs, let S_j^p denote the similarity score of the j-th matched pair, and let S_j^i and S_j^c denote the scores of the j-th caption paired with an imposter image and of the j-th image paired with an imposter caption:

B(theta) = sum_{j=1}^{B} ( max(0, S_j^c - S_j^p + 1) + max(0, S_j^i - S_j^p + 1) )

We train the neural network with 50 epochs of stochastic gradient descent using a batch size B = 128, a momentum of 0.9, and a learning rate of 1e-5 which is set to geometrically decay by a factor between 2 and 5 every 5 to 10 epochs.
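The objective above can be sketched in a few lines of numpy. This is a sketch of the loss just described, not the authors' code; in particular, the imposter for pair j is chosen deterministically as pair j+1 here, an assumption made for brevity, whereas imposters would normally be sampled from elsewhere in the minibatch.

```python
import numpy as np

def ranking_loss(image_emb, audio_emb, margin=1.0):
    """image_emb, audio_emb: (B, D) arrays; row j of each forms the
    matched pair j. Returns the minibatch margin ranking loss B(theta)."""
    sims = image_emb @ audio_emb.T        # (B, B) inner-product scores
    pos = np.diag(sims)                   # S_j^p: matched-pair scores
    B = sims.shape[0]
    loss = 0.0
    for j in range(B):
        imp = (j + 1) % B                 # assumed imposter index
        loss += max(0.0, sims[j, imp] - pos[j] + margin)  # imposter caption
        loss += max(0.0, sims[imp, j] - pos[j] + margin)  # imposter image
    return loss
```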
"}, {"section_index": "4", "section_name": "4 FINDING AND CLUSTERING AUDIO-VISUAL CAPTION GROUNDINGS", "section_text": "Although we have trained our multimodal network to compute embeddings at the granularity of entire images and entire caption spectrograms, we can easily apply it in a more localized fashion. In the case of images, we can simply take any arbitrary crop of an original image and resize it to 224x224 pixels. The audio network is even more trivial to apply locally, because it is entirely convolutional and the final mean pooling layer ensures that the output will be a 1024-dim vector no matter the extent of the input. The bigger question is where to locally apply the networks in order to discover meaningful acoustic and visual patterns.

Given an image and its corresponding spoken audio caption, we use the term grounding to refer to extracting meaningful segments from the caption and associating them with an appropriate sub-region of the image. For example, if an image depicted a person eating ice cream and its caption contained the spoken words "A person is enjoying some ice cream," an ideal set of groundings would entail the acoustic segment containing the word "person" linked to a bounding box around the person, and the segment containing the word "ice cream" linked to a box around the ice cream. We use a constrained brute force ranking scheme to evaluate all possible groundings (with a restricted granularity) between an image and its caption. Specifically, we divide the image into a grid, and extract all of the image crops whose boundaries sit on the grid lines. Because we are mainly interested in extracting regions of interest and not high precision object detection boxes, to keep the number of proposal regions under control we impose several restrictions. First, we use a 10x10 grid on each image regardless of its original size. Second, we define minimum and maximum aspect ratios as 2:3 and 3:2 so as not to introduce too much distortion and also to reduce the number of proposal boxes. Third, we define a minimum bounding width as 30% of the original image width, and similarly a minimum height as 30% of the original image height. In practice, this results in a few thousand proposal regions per image.

To extract proposal segments from the audio caption spectrogram, we similarly define a 1-dim grid along the time axis, and consider all possible start/end points at 10 frame (pixel) intervals. We impose minimum and maximum segment length constraints at 50 and 100 frames (pixels), implying that our discovered acoustic patterns are restricted to fall between 0.5 and 1 second in duration. The number of proposal segments will vary depending on the caption length, and typically number in the several thousands. Note that when learning groundings we consider the entire audio sequence, and do not incorporate the 10sec duration constraint imposed during the first stage of learning.

Figure 1: An example of our grounding method. The left image displays a grid defining the allowed start and end coordinates for the bounding box proposals. The bottom spectrogram displays several audio region proposals drawn as the families of stacked red line segments. The image on the right and spectrogram on the top display the final output of the grounding algorithm. The top spectrogram also displays the time-aligned text transcript of the caption, so as to demonstrate which words were captured by the groundings. In this example, the top 3 groundings have been kept, with the colors indicating the audio segment which is grounded to each bounding box.
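To make the proposal-generation constraints above concrete, here is a minimal sketch (a hypothetical helper, not the authors' code) that enumerates the grid-aligned boxes satisfying the aspect-ratio and minimum-size restrictions.

```python
def grid_crop_proposals(width, height, grid=10,
                        min_frac=0.3, min_ratio=2/3, max_ratio=3/2):
    """Enumerate candidate bounding boxes whose edges lie on a
    grid x grid lattice, keeping only boxes that span at least
    min_frac of each image dimension and whose width/height ratio
    falls between 2:3 and 3:2. A sketch of the scheme above."""
    xs = [round(i * width / grid) for i in range(grid + 1)]
    ys = [round(i * height / grid) for i in range(grid + 1)]
    boxes = []
    for x0 in xs:
        for x1 in xs:
            if x1 - x0 < min_frac * width:
                continue
            for y0 in ys:
                for y1 in ys:
                    if y1 - y0 < min_frac * height:
                        continue
                    ratio = (x1 - x0) / (y1 - y0)
                    if min_ratio <= ratio <= max_ratio:
                        boxes.append((x0, y0, x1, y1))
    return boxes  # typically a few thousand proposals per image
```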
Once we have extracted a set of proposed visual bounding boxes and acoustic segments for a given image/caption pair, we use our multimodal network to compute a similarity score between each unique image crop/acoustic segment pair. Each triplet of an image crop, acoustic segment, and similarity score constitutes a proposed grounding. A naive approach would be to simply keep the top N groundings from this list, but in practice we ran into two problems with this strategy. First, many proposed acoustic segments capture mostly silence due to pauses present in natural speech. We solve this issue by using a simple voice activity detector (VAD) which was trained on the TIMIT corpus (Garofolo et al., 1993). If the VAD estimates that 40% or more of any proposed acoustic segment is silence, we discard that entire grounding. The second problem we ran into is the fact that the top of the sorted grounding list is dominated by highly overlapping acoustic segments. This makes sense, because highly informative content words will show up in many different groundings with slightly perturbed start or end times. To alleviate this issue, when evaluating a grounding from the top of the proposal list we compare the interval intersection over union (IOU) of its acoustic segment against all acoustic segments already accepted for further consideration. If the IOU exceeds a threshold of 0.1, we discard the new grounding and continue moving down the list. We stop accumulating groundings once the scores fall to below 50% of the top score in the "keep" list, or when 10 groundings have been added to the "keep" list, whichever comes first. Figure 1 displays a pictorial example of our grounding procedure.

Once we have completed the grounding procedure, we are left with a small set of regions of interest in each image and caption spectrogram. We use the respective branches of our multimodal network to compute embedding vectors for each grounding's image crop and acoustic segment. We then employ k-means clustering separately on the collection of image embedding vectors as well as the collection of acoustic embedding vectors. The last step is to establish an affinity score between each image cluster I and each acoustic cluster A; we do so using the equation

Affinity(I, A) = sum_{i in I} sum_{a in A} i^T a * Pair(i, a)

where i is an image crop embedding vector, a is an acoustic segment embedding vector, and Pair(i, a) is equal to 1 when i and a belong to the same grounding pair, and 0 otherwise. After clustering, we are left with a set of acoustic pattern clusters, a set of visual pattern clusters, and a set of linkages describing which acoustic clusters are associated with which image clusters. In the next section, we investigate the properties of these clusters in more detail.
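The affinity computation reduces to accumulating dot products over grounded pairs. A minimal numpy sketch, assuming each grounding contributes one (image embedding, audio embedding) row pair and that the k-means cluster assignments are given:

```python
import numpy as np

def cluster_affinities(img_embs, aud_embs, img_assign, aud_assign, K_i, K_a):
    """Affinity between every image cluster and every acoustic cluster,
    following the equation above (a sketch). img_embs[n] and aud_embs[n]
    come from the same grounding; *_assign hold cluster indices."""
    affinity = np.zeros((K_i, K_a))
    for i_vec, a_vec, ci, ca in zip(img_embs, aud_embs, img_assign, aud_assign):
        # Pair(i, a) = 1 exactly for embeddings from the same grounding,
        # so only these terms contribute to the double sum.
        affinity[ci, ca] += float(i_vec @ a_vec)
    return affinity
```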
We trained our multimodal network on a set of 214,585 image/caption pairs, and vetted it with an image search (given caption, find image) and annotation (given image, find caption) task similar to the one used in Harwath et al. (2016); Karpathy et al. (2014); Karpathy & Li (2015). The image annotation and search recall scores on a 1,000 image/caption pair held-out test set are shown in Table 1, and are compared against the model architecture used in Harwath et al. (2016). We then performed the grounding and pattern clustering steps on the entire training dataset. This resulted in a total of 1,161,305 unique grounding pairs.

In order to evaluate the acoustic pattern discovery and clustering, we wish to assign a label to each cluster and cluster member, but this is not completely straightforward since each acoustic segment may capture part of a word, a whole word, multiple words, etc. Our strategy is to force-align the Google recognition hypothesis text to the audio, and then assign a label string to each acoustic segment based upon which words it overlaps in time. The alignments are created with the help of a Kaldi (Povey et al., 2011) speech recognizer based on the standard WSJ recipe and trained using the Google ASR hypothesis as a proxy for the transcriptions. Any word whose duration is overlapped 30% or more by the acoustic segment is included in the label string for the segment. We then employ a majority vote scheme to derive the overall cluster labels. When computing the purity of a cluster, we count a cluster member as matching the cluster label as long as the overall cluster label appears in the member's label string. In other words, an acoustic segment overlapping the words "the lighthouse" would receive credit for matching the overall cluster label "lighthouse". Several example clusters and a breakdown of the labels of their members are shown in Table 2. We investigated some simple schemes for predicting highly pure clusters, and found that the empirical variance of the cluster members (average squared distance to the cluster centroid) was a good indicator. Figure 2 displays a scatter plot of cluster purity weighted by the natural log of the cluster size against the empirical variance. Large, pure clusters are easily predicted by their low empirical variance, while a high empirical variance is indicative of a garbage cluster.

Ranking a set of k = 500 acoustic clusters by their variance, Table 3 displays some statistics for the 50 lowest-variance clusters. We see that most of the clusters are very large and highly pure, and their labels reflect interesting object categories being identified by the neural network. We additionally compute the coverage of each cluster by counting the total number of instances of the cluster label anywhere in the training data, and then compute what fraction of those instances were captured by the cluster. We notice many examples of high coverage clusters, e.g. the "skyscraper" cluster captures 84% of all occurrences of the word "skyscraper" anywhere in the training data, while the "baseball" cluster captures 86% of all occurrences of the word "baseball". This is quite impressive given the fact that no conventional speech recognition was employed, and neither the multimodal neural network nor the grounding algorithm had access to the text transcripts of the captions.

To get an idea of the impact of the k parameter as well as a variance-based cluster pruning threshold based on Figure 2, we swept k from 250 to 2000 and computed a set of statistics shown in Table 4. We compute the standard overall cluster purity evaluation metric in addition to the average coverage across clusters. The table shows the natural tradeoff between cluster purity and redundancy (indicated by the average cluster coverage) as k is increased. In all cases, the variance-based cluster pruning greatly increases both the overall purity and average cluster coverage metrics. We also notice that more unique cluster labels are discovered with a larger k.
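The label, purity, and coverage computations described above can be sketched as follows. The word-level majority vote over member label strings is one plausible reading of the scheme, so treat the details as illustrative rather than the authors' exact procedure.

```python
from collections import Counter

def cluster_stats(member_labels, corpus_counts):
    """Majority-vote label, purity, and coverage for one acoustic cluster
    (a sketch). member_labels: one label string per segment, possibly
    multi-word; corpus_counts: Counter of word occurrences in the whole
    training data."""
    votes = Counter()
    for label in member_labels:
        votes.update(label.split())
    cluster_label, _ = votes.most_common(1)[0]
    # A member counts as matching if the cluster label appears anywhere
    # in its own label string, e.g. "the lighthouse" matches "lighthouse".
    matches = sum(cluster_label in label.split() for label in member_labels)
    purity = matches / len(member_labels)
    coverage = matches / corpus_counts[cluster_label]
    return cluster_label, purity, coverage
```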
Next, we examine the image clusters. Figure 3 displays the 9 most central image crops for a set of 10 different image clusters, along with the majority-vote label of each image cluster's associated audio cluster. In all cases, we see that the image crops are highly relevant to their audio cluster label. We include many more example image clusters in Appendix A.

Figure 3: The 9 most central image crops from several image clusters, along with the majority-vote label of their most associated acoustic pattern cluster. The labels shown are: sky, grass, sunset, ocean, river, castle, couch, wooden, lighthouse, and train.

Finally, we wish to examine the semantic embedding space in more depth. We took the top 150 clusters from the same k = 500 clustering run described in Table 3 and performed t-SNE (van der Maaten & Hinton, 2008) analysis on the cluster centroid vectors. We projected each centroid down to 2 dimensions and plotted their majority-vote labels in Figure 4. Immediately we see that different clusters which capture the same label closely neighbor one another, indicating that distances in the embedding space do indeed carry information discriminative across word types (and suggesting that a more sophisticated clustering algorithm than k-means would perform better). More interestingly, we see that semantic information is also reflected in these distances. The cluster centroids for "lake," "river," "body," "water," "waterfall," "pond," and "pool" all form a tight meta-cluster, as do "restaurant," "store," "shop," and "shelves," as well as "children," "girl," "woman," and "man." Many other semantic meta-clusters can be seen in Figure 4, suggesting that the embedding space is capturing information that is highly discriminative both acoustically and semantically.

Figure 4: t-SNE analysis of the 150 lowest-variance audio pattern cluster centroids for k = 500. Displayed is the majority-vote transcription of each audio cluster. All clusters shown contained a minimum of 583 members and an average of 2482, with an average purity of .668.

Table 1: Results for image search and annotation on the Places audio caption data (214k training pairs, 1k testing pairs). Recall is shown for the top 1, 5, and 10 hits. The model we use in this paper is compared against the meanpool variant of the model architecture presented in Harwath et al. (2016). For both training and testing, the captions were truncated/zero-padded to 10 seconds.

Model                   R@1
Harwath et al. (2016)   0.090
This work               0.112

Figure 2: Scatter plot of audio cluster purity weighted by log cluster size against cluster variance for k = 500 (least-squares line superimposed).

Word         Count    Word          Count
ocean        2150     castle        766
(silence)    127      (silence)     70
the ocean    72       capital       39
blue ocean   29       large castle  24
body ocean   22       castles       23
oceans       16       (noise)       21
ocean water  16       council       13
(noise)      15       stone castle  12
of ocean     14       capitol       10
oceanside    14       old castle    10

Table 2: Examples of the breakdown of word/phrase identities of several acoustic clusters.
Table 3: Top 50 clusters with k = 500, sorted by increasing variance. Legend: |Cc| is acoustic cluster size, |Ci| is associated image cluster size, Pur. is acoustic cluster purity, s2 is acoustic cluster variance, and Cov. is acoustic cluster coverage. A dash (-) indicates a cluster whose majority label is silence.

Trans           |Cc|  |Ci|  Pur.  s2    Cov.   Trans       |Cc|  |Ci|  Pur.  s2    Cov.
-               1059  3480  0.70  0.26  -      snow        4331  3480  0.85  0.26  0.45
desert          1936  2896  0.82  0.27  0.67   kitchen     3200  2990  0.88  0.28  0.76
restaurant      1921  2536  0.89  0.29  0.71   mountain    4571  2768  0.86  0.30  0.38
black           4369  2387  0.64  0.30  0.17   skyscraper  843   3205  0.84  0.30  0.84
bridge          1654  2025  0.84  0.30  0.25   tree        5303  3758  0.90  0.30  0.16
castle          1298  2887  0.72  0.31  0.74   bridge      2779  2025  0.81  0.32  0.41
-               2349  2165  0.31  0.33  -      ocean       2913  3505  0.87  0.33  0.71
table           3765  2165  0.94  0.33  0.23   windmill    1458  3752  0.71  0.33  0.76
window          1890  2795  0.85  0.34  0.21   river       2643  3204  0.76  0.35  0.62
water           5868  3204  0.90  0.35  0.27   beach       1897  2964  0.79  0.35  0.64
flower          3906  2587  0.92  0.35  0.67   wall        3158  3636  0.84  0.35  0.23
sky             4306  6055  0.76  0.36  0.34   street      2602  2385  0.86  0.36  0.49
golf course     1678  3864  0.44  0.36  0.63   field       3896  3261  0.74  0.36  0.37
tree            4098  3758  0.89  0.36  0.13   lighthouse  1254  1518  0.61  0.36  0.83
forest          1752  3431  0.80  0.37  0.56   church      2503  3140  0.86  0.37  0.72
people          3624  2275  0.91  0.37  0.14   baseball    2777  1929  0.66  0.37  0.86
field           2603  3922  0.74  0.37  0.25   car         3442  2118  0.79  0.38  0.27
people          4074  2286  0.92  0.38  0.17   shower      1271  2206  0.74  0.38  0.82
people walking  918   2224  0.63  0.38  0.25   wooden      3095  2723  0.63  0.38  0.28
mountain        3464  3239  0.88  0.38  0.29   tree        3676  2393  0.89  0.39  0.11
-               1976  3158  0.28  0.39  -      snow        2521  3480  0.79  0.39  0.24
water           3102  2948  0.90  0.39  0.14   rock        2897  2967  0.76  0.39  0.26
-               2918  3459  0.08  0.39  -      night       3027  3185  0.44  0.39  0.59
station         2063  2083  0.85  0.39  0.62   chair       2589  2288  0.89  0.39  0.22
building        6791  3450  0.89  0.40  0.21   city        2951  3190  0.67  0.40  0.50

Table 4: Clustering statistics of the acoustic clusters for various values of k and different settings of the variance-based cluster pruning threshold. Legend: |C| = number of clusters remaining after pruning, |X| = number of datapoints after pruning, Pur = purity, |L| = number of unique cluster labels, AC = average cluster coverage.

              s2 < 0.9                       s2 < 0.65
k      |C|   |X|      Pur   |L|  AC      |C|   |X|     Pur   |L|  AC
250    249   1081514  .364  149  .423    128   548866  .575  108  .463
500    499   1097225  .396  242  .332    278   623159  .591  196  .375
750    749   1101151  .409  308  .406    434   668771  .585  255  .450
1000   999   1103391  .411  373  .336    622   710081  .568  318  .382
1500   1496  1104631  .429  464  .316    971   750162  .566  413  .366
2000   1992  1106418  .431  540  .237    1354  790492  .546  484  .271

In this paper, we have demonstrated that a neural network trained to associate images with the waveforms representing their spoken audio captions can successfully be applied to discover and cluster acoustic patterns representing words or short phrases in untranscribed audio data. An analogous procedure can be applied to visual images to discover visual patterns, and then the two modalities can be linked, allowing the network to learn, e.g., that spoken instances of the word "train" are associated with image regions containing trains. This is done without the use of a conventional automatic speech recognition system and with zero text transcriptions, and therefore is completely agnostic to the language in which the captions are spoken. Further, this is done in O(n) time with respect to the number of image/caption pairs, whereas previous state-of-the-art acoustic pattern discovery algorithms which leveraged acoustic data alone run in O(n2) time. We demonstrate the success of our methodology on a large-scale dataset of over 214,000 image/caption pairs, comprising over 522 hours of spoken audio data. We have shown that the shared multimodal embedding space learned by our model is discriminative not only across visual object categories, but also acoustically and semantically across spoken words. To the best of our knowledge, this paper contains by far the largest scale speech pattern discovery experiment ever performed, as well as the first ever successful effort to learn the semantics of the discovered acoustic patterns by grounding them to patterns which are jointly discovered in another modality (images).
The future directions in which this research could be taken are incredibly fertile. Because our method creates a segmentation as well as an alignment between images and their spoken captions, a generative model could be trained using these alignments. The model could provide a spoken caption for an arbitrary image, or even synthesize an image given a spoken description. Modeling improvements are also possible, aimed at the goal of incorporating both visual and acoustic localization into the neural network itself. Additionally, by collecting a second dataset of captions for our images in a different language, such as Spanish, our model could be extended to learn the acoustic correspondences for a given object category in both languages. This paves the way for creating a speech-to-speech translation model not only with absolutely zero need for any sort of text transcriptions, but also with zero need for directly parallel linguistic data or manual human translations."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Alessandro Bergamo, Loris Bazzani, Dragomir Anguelov, and Lorenzo Torresani. Self-taught object localization with deep networks. CoRR, abs/1409.3964, 2014. URL http://arxiv.org/abs/1409.3964.

Minsu Cho, Suha Kwak, Cordelia Schmid, and Jean Ponce. Unsupervised object discovery and localization in the wild: Part-based matching with bottom-up region proposals. In Proceedings of CVPR, 2015.
"}]
B1-Hhnslg
[{"section_index": "0", "section_name": "PROTOTYPICAL NETWORKS FOR FEW-SHOT LEARNING", "section_text": "Jake Snell1* Kevin Swersky2& Richard S. Zemel\nA recent approach to few-shot classification called matching networks has demon- strated the benefits of coupling metric learning with a training procedure that mim- ics test. This approach relies on an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to pro- totype representations of each class, rather than individual points. Our method is competitive with state-of-the-art few-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and miniImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of- the-art results on the Caltech UCSD bird dataset."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "One-shot classification (Miller et al.|2000f Lake et al.||2011) Koch]2015) (and more generally, few shot classification) is a problem in which a classifier must be adapted to accommodate new classe. not seen in training, given only a single (n) example(s) of these classes. A classical approach such as retraining the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that people have the ability to successfully perform one-shot classification (Lake et al.]2011). Nonparametric models such as nearest neighbors are useful ir one-shot classification because they naturally adapt to new data, however this comes at the cost of storing the entire set of examples per class, the \"support set\".\nIn this paper, we propose a few-shot learning classifier based on a relatively simple idea: there exists an embedding, where points belonging to a class cluster around a single prototype. This inductive bias is a useful one to combat overfitting for one-shot tasks. Our approach also comes with the benefit that it is very simple to implement, and computationally fast. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network, and take the class"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "To overcome this, much progress has been made recently in applying metric learning (Goldberger et a1.12004 I Kulis2012 Bellet et al.]2013) to one-shot tasks. Most recently, (Vinyals et al.[2016 proposed a metric learning approach that they call matching networks. This approach uses an atten- tion mechanism over a learned embedding of the support set in order to predict class labels for the points to be classified, a.k.a the \"query set\"'. It optionally allows the embeddings to be conditioned on other points in the support set (\"full context embeddings') or for the embeddings to be fine-tuned at test time. A particularly interesting feature of the matching networks model is that it utilizes sampled mini-batches called \"episodes\"' during training, where each episode is designed to mimic the one-shot task. This makes the training problem more faithful to the test environment. 
Matching networks, however, optionally utilize additional components such as an attention-based LSTM to change the embedding based on the support set. This complexity makes implementation more difficult, in addition to the aforementioned poor scaling characteristics due to computing attention over the entire support set.

In this paper, we propose a few-shot learning classifier based on a relatively simple idea: there exists an embedding in which points belonging to a class cluster around a single prototype. This inductive bias is a useful one to combat overfitting for one-shot tasks. Our approach also comes with the benefit that it is very simple to implement, and computationally fast. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network, and take the class prototype to be the mean of the support set in the embedding space. Classification is then performed by simply finding the nearest prototype to the embedded query point. We find that this approach yields competitive results with matching networks and other one-shot learning approaches, despite being much simpler.

A related problem is known as zero-shot learning, where instead of being given a small number of examples of a new class at test time, each class comes with a set of meta-information, often attributes, that give a high-level description of that class. The idea then is to learn a mapping from input examples to the high-level attributes of their member class. We adapt the idea of prototypical networks to this setting by learning a secondary embedding of the attribute vector such that the image embeddings and attribute embeddings lie within the same space. In this case, we use the attribute embedding as the class prototype, rather than the class mean."}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Neighborhood Components Analysis (NCA) (Goldberger et al., 2004) learns a Mahalanobis distance to maximize K-nearest-neighbour's (KNN) leave-one-out accuracy in the transformed space. A distribution over the neighbors of each data point is computed according to a softmax over the corresponding Mahalanobis distances. This distribution is marginalized to form a distribution over class assignments, and the projection matrix is updated via gradient descent to maximize the probability of the true class. Salakhutdinov & Hinton (2007) extend NCA by using a neural network to perform the transformation. Our approach is similar in that we optimize a softmax based on distances in the transformed space. Ours differs because it is a softmax over classes, rather than points, computed from Euclidean distances to each class's prototype representation. This is more appropriate for few-shot learning for two reasons: (a) the number of support points can vary by class, and (b) each class has a succinct representation independent of the number of data points, and this representation can optionally be updated in an online manner.
By contrast, we learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class.

In matching networks (Vinyals et al. 2016) they propose a meta-learning strategy in which training mimics test by stochastically creating one-shot "episodes". We adopt the same strategy when training our models. They, like us, use neural networks to non-linearly transform data points into a space that is more amenable to classification. However, matching networks make predictions by computing attention weights over each point in the support set. This becomes computationally expensive as the size of the support set grows. Our approach, on the other hand, first summarizes each class in the support set by a prototype and then computes a distribution over classes. Ours thus has flexibility in the way the prototypes are computed and can handle additional support points gracefully by updating prototypes online.

The neural statistician (Edwards & Storkey 2016) extends the variational autoencoder (Kingma & Welling 2013) to learn generative models of datasets rather than individual points. One component of the neural statistician is the "statistic network", which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards & Storkey test their model for one-shot classification on the Omniglot dataset (Lake et al. 2011) by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector had minimal KL-divergence from the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, which is more appropriate because our primary task, one-shot learning, is also discriminative. Discriminative training has the added benefit of lending our model more flexibility in both the way we compute summary statistics and use them to make predictions at test time.

There are many other approaches to one-shot learning that employ very different techniques from ours. Koch uses siamese networks to predict the probability that two images belong to the same class. Lake et al. devise a hierarchical Bayesian generative model of how a handwritten character is created in order to perform one-shot learning on the Omniglot dataset. Santoro et al. propose memory-augmented neural networks (MANN) that reference an external memory in a similar fashion to neural Turing machines (Graves et al. 2014). This allows them to store support examples in an external memory and reference them later when making classification decisions. They also introduce a form of episodic training, similar to that in matching networks.
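Episodic training, used by matching networks and adopted here, builds each training batch to mimic a small few-shot task. The following is a minimal sketch of how such an episode can be sampled; the function and argument names are our own illustrative choices, not taken from any particular implementation.

```python
import numpy as np

def sample_episode(data_by_class, n_way=20, n_support=1, n_query=5,
                   rng=np.random):
    """Sample one few-shot episode: n_way classes, with n_support labeled
    examples per class and n_query held-out query points per class.
    `data_by_class` maps each class id to a numpy array of its examples."""
    classes = rng.choice(list(data_by_class.keys()), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        examples = data_by_class[c]
        idx = rng.permutation(len(examples))[:n_support + n_query]
        support.append((examples[idx[:n_support]], label))
        query.append((examples[idx[n_support:]], label))
    return support, query
```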
At prediction time we are given a support set of N labeled examples S = {(x_i, y_i)}_{i=1}^N = S_1 ∪ ... ∪ S_K, where S_k = {(x, y) ∈ S | y = k}. Our method computes a class representation c_k, or prototype, of each class through an embedding function f_θ(x) parameterized by learnable parameters θ:

c_k = (1/|S_k|) Σ_{(x,y)∈S_k} f_θ(x)

Given a test point x, prototypical networks form a distribution over classes based on a softmax over the Euclidean distances between its embedding and the prototypes:

p(y = k | x) = exp(−‖f_θ(x) − c_k‖²) / Σ_{k'} exp(−‖f_θ(x) − c_{k'}‖²)     (1)

Learning proceeds by maximizing the log probability of the true class y:

max_θ Σ_{(x,y)∈D} log p(y | x)     (2)

We train in an episodic manner similar to Vinyals et al. (2016) by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and the remainder to serve as test points.
"}, {"section_index": "4", "section_name": "3.1 PROTOTYPE NORMALIZATION", "section_text": "In episodic training, the support set is randomly chosen from among the training points. In datasets with high variability this can lead to a large variance in the class prototypes c_k between episodes. In order to reduce this variability, we found that it can sometimes be beneficial to normalize the prototypes so that they always lie on the unit sphere, although the query points are still allowed to be embedded off of the unit sphere. Normalization has two benefits: the reduction in variance helps to greatly speed up training, while the restriction of the prototypes to the unit sphere confers additional regularization.

A simple analysis is useful in gaining insight into the nature of the learned classifier (a similar analysis appears in Mensink et al. (2013)). When we use Euclidean distance to measure the distance between a query point and the class prototypes, then the loss function in (2) is equivalent to a linear classifier with a particular parameterization. To see this, we expand the term within the exponent:

−‖f_θ(x) − c_k‖² = −f_θ(x)^T f_θ(x) + 2 c_k^T f_θ(x) − c_k^T c_k     (4)

The first term in Equation (4) is constant with respect to the class k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear classifier as follows:

2 c_k^T f_θ(x) − c_k^T c_k = w_k^T f_θ(x) + b_k,   where   w_k = 2 c_k   and   b_k = −c_k^T c_k

When using prototype normalization, the biases b_k will all be −1, and the class weights w_k will be restricted to have a norm of 2. In this case, using Euclidean distance becomes proportional to cosine distance.

We can view this through the lens of meta-learning, where the model is predicting the weights and biases of a linear classifier using a simple function of the mean of the embedded support set. By contrast, the predictive function in matching networks is a generalization of a nearest neighbor classifier, rather than a linear classifier.

A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If each support point were to be considered a prototype, then this would be analogous to doing nearest neighbor classification in the embedding space, which would be computationally expensive. On the other hand, if the number of prototypes per class is fixed, then this would require a partitioning scheme. This has been proposed in Mensink et al. (2013) and Rippel et al. (2016), however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient methods. Finally, the equivalence to a linear classifier suggests that this may be sufficient, as all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that state-of-the-art neural network classification systems currently use, e.g., (Krizhevsky et al. 2012).
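To make the model concrete, the following is a minimal NumPy sketch of the classification rule above. The embedding function is assumed to be given (e.g., the output of a trained convolutional network), the normalize flag implements the prototype normalization of Section 3.1, and all names are illustrative.

```python
import numpy as np

def class_prototypes(support_emb, support_labels, n_classes, normalize=False):
    """Prototype of each class: the mean of its embedded support points."""
    protos = np.stack([support_emb[support_labels == k].mean(axis=0)
                       for k in range(n_classes)])
    if normalize:  # prototype normalization (Section 3.1)
        protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    return protos

def log_class_probs(query_emb, protos):
    """log p(y = k | x): softmax over negative squared Euclidean
    distances from the embedded query point to the prototypes."""
    sq_dists = ((protos - query_emb) ** 2).sum(axis=1)
    logits = -sq_dists
    return logits - np.logaddexp.reduce(logits)  # log-softmax
```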
"}, {"section_index": "5", "section_name": "3.3 DESIGN CHOICES", "section_text": "There are still a number of design choices that need to be made with this model in order to achieve optimal performance. One such choice is in deciding how many classes we would like the classifier to operate over during each training episode. For example, at test time we might be evaluating on 5-way classification, but at training time we could train each episode with 20-way classification. We found in general that training on a larger number of classes per episode improves performance, even if the number of classes we need to decide between at test time is fewer.

Another choice involves the possible decoupling of the n in n-shot between training and testing. We could train on 1-shot, but test on 5-shot or vice-versa. We found that it is typically better to match the shot at training and testing; that is, when it comes to the shot, to match the training procedure to the test procedure. We demonstrate this empirically in the Experiments section below.

Finally, we need to specify whether to use prototype normalization. We found that normalization generally acts as a regularizer and speeds up training."}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": ""}, {"section_index": "7", "section_name": "4.1 OMNIGLOT", "section_text": "Omniglot (Lake et al. 2011) is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example was drawn by a different human subject. We follow the procedure of (Vinyals et al. 2016) by augmenting the characters with rotations in multiples of 90 degrees and using 1200 characters for training and the remainder for evaluation. Our embedding architecture mirrors that of Matching Nets and is composed of four blocks of a 64-filter 3 × 3 convolution, batch normalization (Ioffe & Szegedy 2015), a ReLU nonlinearity, and 2 × 2 max-pooling, resulting in a 64-dimensional output space. The results of our model trained to perform Omniglot classification are shown in Table 1.

We trained prototypical networks using episodes designed for 1-shot learning, i.e., the support sets during training consist of a single input example, and we train using 20-way classification. Our results are as good or better than those reported in matching networks, and to our knowledge represent the state-of-the-art on this dataset using these splits.

Model                                          5-way 1-shot   5-way 5-shot   20-way 1-shot   20-way 5-shot
Pixels                                         41.7%          63.2%          26.7%           42.6%
Baseline Classifier                            80.0%          95.0%          69.5%           89.1%
Neural Statistician (Edwards & Storkey 2016)*  88%            95%            -               -
Matching Nets (non-FCE, no fine-tune)          98.1%          98.9%          93.8%           98.5%
Prototypical Nets (1-shot)                     98.1%          99.5%          94.2%           98.6%

Table 1: Omniglot few-shot classification accuracy. *Note that the Neural Statistician used non-standard class splits."}, {"section_index": "8", "section_name": "4.2 miniIMAGENET", "section_text": "The miniImageNet dataset (Vinyals et al. 2016) is derived from the larger ImageNet dataset (Deng et al. 2009).
It consists of 60,000 color images of size 84 X 84 divided into 100 classes with 600 examples each. It is designed for testing one-shot learning algorithms, where 80 classes are chosen for training, and 20 for testing.\nClassification results for miniImageNet are shown in Table[2] The embedding architecture we usec for miniImagenet is the same as our experiments for Omniglot, though here it results in a 1600. dimensional output space due to the increased size of the images. We trained two versions of pro totypical networks, one with episodes containing a single support examples per class (denoted by. 1-shot) and one with five support examples per class (denoted by 5-shot). All episodes contained 20. randomly sampled classes, as 20-way classification represents a more difficult task than 5-way. We. evaluated both models on 1-shot and 5-shot for 5-way and 20-way classification at test and find that. each model performs best on the number of support examples it was trained for..\nTable 2: miniImageNet classification accuracy"}, {"section_index": "9", "section_name": "4.3 CUB ZERO-SHOT CLASSIFICATION", "section_text": "In order to assess the suitability of our approach for zero-shot learning, we also run experiments or the Caltech-UCSD Birds (CUB) 200-2011 dataset (Welinder et al.|2010). In the zero-shot setting the goal is to classify query images in the absence of any support examples. Instead, class metadata (such as attributes or a textual description) is provided for each of the test classes. We adapt ou\nTable 3: CUB-200 zero-shot classification accuracy for methods utilizing attribute vectors as clas. metadata.\nfew-shot approach to the zero-shot setting by learning to jointly embed images and metadata in a shared space. The embedded metadata serve as class prototypes and classification is performed by embedding the query image and selecting the class whose prototype is nearest in the Euclidean Space.\nThe CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of. Reed et al.[(2016) in preparing the data. We use their splits to divide the classes into disjoint sets of 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet (Szegedy et al.]2015) to middle, upper left, upper right, lower left, and lower. right crops of the original and horizontally-flipped image' At test time we use only the middle crop. of the original image. For class metadata we use the 312-dimensional continuous attribute vectors. provided with the CUB dataset. These attributes encode various characteristics of the bird species. such as their color, shape, and feather patterns..\nWe learned a simple linear mapping on top of both the 1,024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. We apply prototype normalization to the embedded attributes so that the class prototypes are always of unit length. This serves as a form of regularization to help our embedding functions generalize better. The model parameters were optimized according to our objective via SGD with Adam (Kingma & Ba]2014 at learning rate of 10-4 and weight decay of 10-5. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training + validation set.\nFigure 1shows a t-SNE (Maaten & Hinton 2008) visualization of attribute embeddings learnec using prototypical networks for zero-shot classification. 
We can see that the embeddings group the bird species by characteristics such as their color and shape."}, {"section_index": "10", "section_name": "5 CONCLUSION", "section_text": "We have proposed a simple method called prototypical networks for few-shot learning based on th. idea that we can represent each class by the mean of its examples in a representation space learne by a neural network. We train these networks to specifically perform well in the few-shot settin by using episodic training. Prototypical networks are simple to implement, and computationall efficient. We showed that this approach is equivalent to predicting the weights of a linear classifier. where the weights and biases are a function of the prototypes. Prototypical networks achieve state of-the-art results on the Omniglot dataset, and competitive results on the miniImagenet dataset. W further showed how this approach can be adapted to the zero-shot setting by taking an embedding o. an attribute vector for each class to be the prototype. This approach achieves state-of-the-art result on zero-shot classification of the Caltech UCsD birds dataset..\nTable 3| shows that of methods utilizing attributes as class metadata, we achieve state-of-the-art results by a large margin. Our approach is much simpler than that of other recent approaches (Liao et al.|2016) which train an SVM on a learned feature space obtained by fine-tuning AlexNet (Krizhevsky et al.f2012). These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain rela tive to the classes (attributes)."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Sachin Ravi and Hugo Larochelle for help in setting up the Omniglot and miniImage data. We would also like to thank Renjie Liao for assistance with the CUB-200 zero-sho procedure and Oriol Vinyals for confirming details regarding the Omniglot and miniImagenet splits and matching nets architectures."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding for. attribute-based classification. In Computer Vision and Pattern Recognition. pp. 819-826. 2013\nAurelien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.\nFigure 1: A t-SNE visualization of the attribute embeddings learned by a prototypical network on the CUB dataset. Each image is an arbitrarily chosen example from the corresponding test class The learned space successfully clusters unseen bird species by characteristics such as color, shape and pattern.\nJia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009 IEEE Conference on, pp. 248-255. IEEE, 2009.\nJacob Goldberger, Geoffrey E. Hinton, Sam T. Roweis, and Ruslan Salakhutdinov. Neighbourhoo. components analysis. In Advances in Neural Information Processing Systems, pp. 513-520, 2004\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.\nBrenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. 
Tenenbaum. One shot learning of simple visual concepts. In CogSci, 2011.\nLaurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008\nRuslan Salakhutdinov and Geoffrey E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In A1STATS, pp. 412-419, 2007.\nAdam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta learning with memory-augmented neural networks. In International Conference on Machine Learning, pp. 1842-1850, 2016.\nRenjie Liao, Alexander Schwing, Richard Zemel, and Raquel Urtasun. Learning deep parsimonious representations. Advances in Neural Information Processing Systems, 2016.\nThomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image. classification: Generalizing to new classes at near-zero cost. IEEE transactions on pattern anal ysis and machine intelligence, 35(11):2624-2637, 2013.\nErik G Miller, Nicholas E Matsakis, and Paul A Viola. Learning from one example through shared densities on transforms. In CVPR, volume 1, pp. 464-471. IEEE, 2000."}]
r1YNw6sxg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Visual servoing is a classic problem in robotics that requires moving a camera or robot to match a target configuration of visual features or image intensities. Many robot control tasks that combin perception and action can be posed as visual servoing, including navigation (DeSouza & Kak|2002 Chen et al.]2006), where a robot must follow a desired path; manipulation, where the robot mus servo an end-effector or a camera to a target object to grasp or manipulate it (Malis et al.]1999 Corke1993 Hashimoto1993Hosoda & Asada1994Kragic & Christensen 2002); and various other problems, as surveyed in Hutchinson et al.(1996). Most visual servoing methods assume ac cess to good geometric image features (Chaumette & Hutchinson]|2006f|Collewet et al.||2008f|Caror et al.] 2013) and require knowledge of their dynamics, which are typically obtained from domaii knowledge about the system. Using such hand-designed features and models prevents exploitatioi of statistical regularities in the world, and requires manual engineering for each new system.\nIn this work, we study how learned visual features, learned predictive dynamics models, and re inforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "data of the target in question, so as to be easy and quick to adapt to new targets. Successful targe following requires the visual servo to tolerate moderate variation in the appearance of the targe1 including changes in viewpoint and lighting, as well as occlusions. Learning invariances to all sucl distractors typically requires a considerable amount of data. However, since a visual servo is typ ically specific to a particular task, it is desirable to be able to learn the servoing mechanism ver quickly, using a minimum amount of data. Prior work has shown that the features learned by larg convolutional neural networks on large image datasets, such as ImageNet classification (Deng et al 2009), tend to be useful for a wide range of other visual tasks (Donahue et al.]2014). We explor whether the usefulness of such features extends to visual servoing.\nTo answer this question, we propose a visual servoing method that uses pre-trained features, in. our case obtained from the VGG network (Simonyan & Zisserman2014) trained for ImageNet. classification. Besides the visual features, our method uses an estimate of the feature dynamics in. visual space by means of a bilinear model. This allows the visual servo to predict how motion of. the robot's camera will affect the perceived feature values. Unfortunately, servoing directly on the. high-dimensional features of a pre-trained network is insufficient by itself to impart robustness on. the servo: the visual servo must not only be robust to moderate visual variation, but it must also be able to pick out the target of interest (such as a car that the robot is tasked with following) from. irrelevant distractor objects. To that end, we propose a sample-efficient fitted Q-iteration procedure. that automatically chooses weights for the most relevant visual features. Crucially, the actual ser-. voing mechanism in our approach is extremely simple, and simply seeks to minimize the Euclidean. distance between the weighted feature values at the next time step and the target. The form of the. 
servoing policy in our approach leads to an analytic and tractable linear approximator for the Q-. function, which leads to a computationally efficient fitted Q-iteration algorithm. We show that we. can learn an effective visual servo on a complex synthetic car following benchmark using just 20. training trajectory samples for reinforcement learning. We demonstrate substantial improvement. over a conventional approach based on image pixels or hand-designed keypoints, and we show an. improvement in sample-efficiency of more than two orders of magnitude over standard model-free. deep reinforcement learning algorithms.\nThe environment for the synthetic car following benchmark is available online as the package CitySim3'] and the code to reproduce our method and experiments is also available onlinq2] Sup plementary videos of all the test executions are available on the project's website3"}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Visual servoing is typically (but not always) performed with calibrated cameras and carefully de. signed visual features. Ideal features for servoing should be stable and discriminative, and much. of the work on visual servoing focuses on designing stable and convergent controllers under the. assumption that such features are available (Espiau et al.[2002] Mohta et al.]2014] Wilson et al. 1996). Some visual servoing methods do not require camera calibration (Jagersand et al.] 1997. Yoshimi & Allen] 1994), and some recent methods operate directly on image intensities (Caron. et al.2013), but generally do not use learning to exploit statistical regularities in the world and. improve robustness to distractors.\nLearning is a relatively recent addition to the repertoire of visual servoing tools. Several method.. have been proposed that apply ideas from reinforcement learning to directly acquire visual servoing. controllers (Lampe & Riedmiller2013} Sadeghzadeh et al.]2015).However, such methods have not been demonstrated under extensive visual variation, and do not make use of state-of-the-ar convolutional neural network visual features. Though more standard deep reinforcement learning. methods (Lange et al.[2012f Mnih et al.[2013 Levine et al.2016f Lillicrap et al.2015) could ir principle be applied to directly learn visual servoing policies, such methods tend to require large. numbers of samples to learn task-specific behaviors, making them poorly suited for a flexible visua. servoing algorithm that can be quickly repurposed to new tasks (e.g. to following a different object).\nInstead, we propose an approach that combines learning of predictive models with pre-trained visual. features. We use visual features trained for ImageNet (Deng et al.]2009) classification, though any. pre-trained features could in principle be applicable for our method, so long as they provide a suit- able degree of invariance to visual distractors such as lighting, occlusion, and changes in viewpoint Using pre-trained features allows us to avoid the need for large amounts of experience, but we must. still learn the policy itself. To further accelerate this process, we first acquire a predictive model that. allows the visual servo to determine how the visual features will change in response to an action. General video prediction is an active research area, with a number of complex but data-hungry mod-. els proposed in recent years (Oh et al.]2015f [Watter et al.]2015] Mathieu et al.]2015]Xue et al. 
2016; Lotter et al. 2016; Jia et al. 2016; Walker et al. 2016; Vondrick et al. 2016).

However, we observe that convolutional response maps can be interpreted as images and, under mild assumptions, the dynamics of image pixels during camera motion can be well approximated by means of a bilinear model (Censi & Murray 2015). We therefore train a relatively simple bilinear model for short-term prediction of visual feature dynamics, which we can use inside a very simple visual servo that seeks to minimize the error between the next predicted feature values and a target image.

Unfortunately, simply training predictive models on top of pre-trained features is insufficient to produce an effective visual servo, since it weights the errors of distractor objects the same amount as the object of interest. We address this challenge by using an efficient Q-iteration algorithm to train the weights on the features to maximize the servo's long-horizon reward. This method draws on ideas from regularized fitted Q-iteration (Gordon 1995; Ernst et al. 2005; Farahmand et al. 2009) and neural fitted Q-iteration (Riedmiller 2005) to develop a sample-efficient algorithm that can directly estimate the expected return of the visual servo without the use of any additional function approximator.

Let y_t be a featurization of the camera's observations x_t and let y_* be some given goal feature map. For the purposes of this work, we define visual servoing as the problem of choosing controls u_t for a fixed number of discrete time steps t so as to minimize the error ‖y_* − y_t‖.

We use a relatively simple gradient-based servoing policy that uses one-step feature dynamics f: {y_t, u_t} → y_{t+1}. The policy chooses the control that minimizes the distance between the goal feature map and the one-step prediction:

π(x_t, x_*) = arg min_u ‖y_* − f(y_t, u)‖²     (1)

Learning this policy amounts to learning the robot dynamics f and the distance metric ‖·‖.

To learn the robot dynamics, we assume that we have access to a dataset of paired observations and controls x_t, u_t, x_{t+1}. This data is relatively easy to obtain as it involves collecting a stream of the robot's observations and controls. We use this dataset to learn a general visual dynamics model that can be used for any task.

To learn the distance metric, we assume that the robot interacts with the world and collects tuples of the form x_t, u_t, c_t, x_{t+1}, x_*. At every time step during learning, the robot observes x_t and takes action u_t. After the transition, the robot observes x_{t+1} and receives an immediate cost c_t. This cost is task-specific and quantifies how good that transition was in order to achieve the goal. At the beginning of each trajectory, the robot is given a goal observation x_*, and it is the same throughout the trajectory. We define the goal feature map to be the featurization of the goal observation. We learn the distance metric using reinforcement learning and we model the environment as a Markov decision process (MDP). The state of the MDP is the tuple of the current observation and the episode's target observation, s_t = (x_t, x_*), the action u_t is the discrete-time continuous control of the robot, and the cost function maps the states and action (s_t, u_t, s_{t+1}) to a scalar cost c_t.

Figure 1: Multiscale bilinear model. The function h maps images x to feature maps y^(0), the operator d downsamples the feature maps y^(l−1) to y^(l), and the bilinear function f^(l) predicts the next feature y^(l)_{t+1}. The number of channels for each feature map is n_c, regardless of the scale l.
"}, {"section_index": "3", "section_name": "4 VISUAL FEATURES DYNAMICS", "section_text": "We learn a multiscale bilinear model to predict the visual features of the next frame given the current image from the robot's camera and the action of the robot. An overview of the model is shown in Figure 1. The learned dynamics can then be used for visual servoing as described in Section 5."}, {"section_index": "4", "section_name": "4.1 VISUAL FEATURES", "section_text": "We consider both pixels and semantic features for the visual representation. We define the function h to relate the image x and its feature y = h(x). Our choice of semantic features are derived from the VGG-16 network (Simonyan & Zisserman 2014), which is a convolutional neural network trained for large-scale image recognition on the ImageNet dataset (Deng et al. 2009). Since spatial invariance is undesirable for servoing, we remove some of the max-pooling layers and replace the convolutions that followed them with dilated convolutions, as done by Yu & Koltun (2015). The modified VGG network is shown in Figure 2. We use the model weights of the original VGG-16 network, which are publicly available as a Caffe model (Jia et al. 2014). The features that we use are the outputs of some of the intermediate convolutional layers, which have been downsampled to a 32 × 32 resolution (if necessary) and standardized with respect to our training set.

We use multiple resolutions of these features for servoing. The idea is that the high-resolution representations have detailed local information about the scene, while the low-resolution representations have more global information available through the image-space gradients. The features at level l of the multiscale pyramid are denoted as y^(l). The features at each level are obtained from the features below through a downsampling operator d(y^(l−1)) = y^(l) that cuts the resolution in half."}, {"section_index": "5", "section_name": "4.2 BILINEAR DYNAMICS", "section_text": "The features y^(l)_t are used to predict the corresponding level's features y^(l)_{t+1} at the next time step, conditioned on the action u_t. We use a bilinear model to represent these dynamics, motivated by prior work (Censi & Murray 2015). In order to servo at different scales, we learn a bilinear dynamics model at each scale. We consider two variants of the bilinear model of previous work in order to reduce the number of model parameters.

Figure 2: Dilated VGG-16 network. The intermediate feature maps drawn in a lighter shade are outputs of max-pooling layers. The feature maps in the conv4 and conv5 blocks are outputs of dilated convolutions with dilation factors of 2 and 4, respectively.

The first variant uses fully connected dynamics as in previous work but models the dynamics of each channel independently. When semantic features are used, this model interprets the feature maps as being abstract images with spatial information within a channel and different entities or factors of variation across different channels. This could potentially allow the model to handle moving objects, occlusions, and other complex phenomena.
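To make the first variant concrete, the sketch below spells out a per-channel fully connected bilinear step. The shapes, names, and exact form of the bias terms here are our own illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def bilinear_step_channelwise(y_t, u, W, B):
    """One-step prediction, fully connected per channel:
        y_{t+1,c} = y_{t,c} + sum_j u_j * (W[c, j] @ y_{t,c} + B[c, j])
    y_t: (C, D) flattened feature maps, u: (J,) controls,
    W: (C, J, D, D) per-channel dynamics, B: (C, J, D) action biases."""
    y_next = y_t.astype(float)
    for c in range(y_t.shape[0]):
        for j in range(u.shape[0]):
            y_next[c] = y_next[c] + u[j] * (W[c, j] @ y_t[c] + B[c, j])
    return y_next
```

Note that W alone has C · J · D² entries, which motivates the sparser, locally connected variant that follows.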
The fully connected bilinear model is quite large, so we propose a bilinear dynamics that enforces sparsity in the parameters. In particular, we constrain the prediction to depend only on the features that are in its local spatial neighborhood, leading to the following locally connected bilinear model:

y^(l)_{t+1,c} = y^(l)_{t,c} + Σ_j u_j (W^(l)_{c,j} * y^(l)_{t,c} + B^(l)_{c,j}) + W^(l)_{c,0} * y^(l)_{t,c} + B^(l)_{c,0}     (2)

The parameters are the 4-dimensional tensors W^(l)_{c,j} and the matrices B^(l)_{c,j} for each channel c, scale l, and control coordinate j. The last two terms are biases that allow the model to represent action-independent visual changes, such as moving objects. The * is the locally connected operator, which is like a convolution but with untied filter weights.

The locally connected operator, with a local neighborhood of n_f × n_f (analogous to the filter size in convolutions), is defined as:

(W * y)_{k_h,k_w} = Σ_{i_h = k_h − ⌊n_f/2⌋}^{k_h + ⌊n_f/2⌋} Σ_{i_w = k_w − ⌊n_f/2⌋}^{k_w + ⌊n_f/2⌋} W_{k_h,k_w,i_h−k_h,i_w−k_w} y_{i_h,i_w}

The loss that we use for training the bilinear dynamics is the sum over scales of the squared errors between the predicted features and the actual features of that level, ℓ^(l) = Σ_c ‖y^(l)_{t+1,c} − f^(l)_c(y^(l)_t, u_t)‖²_2.

We optimize for the dynamics while keeping the feature representation fixed. This is a supervised learning problem, which we solve with ADAM (Kingma & Ba 2014). The training set, consisting of triplets x_t, u_t, x_{t+1}, was obtained by executing a hand-coded policy that moves the robot around the target with some Gaussian noise.
"}, {"section_index": "6", "section_name": "5 LEARNING VISUAL SERVOING WITH REINFORCEMENT LEARNING", "section_text": "We propose to use a multiscale representation of semantic features for servoing. The challenge when introducing multiple scales and multi-channel feature maps for servoing is that the features do not necessarily agree on the optimal action when the goal is unattainable or the robot is far away from the goal. To do well, it is important to use a good weighting of each of the terms in the objective. Since there are many weights, it would be impractically time-consuming to set them by hand, so we resort to learning. We want the weighted one-step lookahead objective to encourage good long-term behavior, so we want this objective to correspond to the state-action value function Q. So we propose a method for learning the weights based on fitted Q-iteration."}, {"section_index": "7", "section_name": "5.1 SERVOING WITH WEIGHTED MULTISCALE FEATURES", "section_text": "Instead of attempting to build an accurate predictive model for multi-step planning, we use the simple greedy servoing method in Equation (1), where we minimize the error between the target and predicted features for all the scales. Typically, only a few objects in the scene are relevant, so the errors of some channels should be penalized more than others. Similarly, features at different scales might need to be weighted differently. Thus, we use a weighting w^(l)_c ≥ 0 per channel c and scale l:

π(x_t, x_*) = arg min_u Σ_{l=0}^{L} Σ_c (w^(l)_c / |y^(l)_c|) ‖y^(l)_{*,c} − f^(l)_c(y^(l)_t, u)‖²_2 + Σ_j λ_j u_j²     (3)

where |·| denotes the cardinality operator and the constant 1/|y^(l)_c| normalizes the feature errors by their spatial resolution. We also use a separate weight λ_j for each control coordinate j. This optimization can be solved efficiently since the dynamics is linear in the controls (see Appendix A); a sketch of this solve appears below.
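Because the dynamics are linear in the controls, the minimization in Equation (3) reduces to a regularized linear least-squares problem. The sketch below assumes the per-element feature weights and the Jacobian of the dynamics with respect to u have already been stacked into arrays; the names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def greedy_control(y_target, f0, J, w_per_element, lam):
    """Solve the weighted one-step objective of Equation (3) in closed form.
    Since f is linear in u, f(y_t, u) = f0 + J @ u, where f0 = f(y_t, 0) and
    J is the Jacobian df/du with all feature elements stacked into a vector.
    w_per_element holds the w_c^(l)/|y_c^(l)| weight of every feature
    element; lam is the vector of control weights lambda_j."""
    err = y_target - f0                # desired feature change
    WJ = w_per_element[:, None] * J    # apply the feature weighting
    A = J.T @ WJ + np.diag(lam)        # normal equations (regularized)
    b = WJ.T @ err
    return np.linalg.solve(A, b)
```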
We denote the state of the MDP as s_t = (x_t, x_*) and add a bias b to the Q-function, which is linear in its parameters θ:

Q_{θ,b}(s_t, u) = φ(s_t, u)^T θ + b

The servoing policy is then simply π_θ(s_t) = arg min_u Q_{θ,b}(s_t, u). For reinforcement learning, we optimized for the weights θ but kept the feature representation and its dynamics fixed.

Reinforcement learning methods that learn a Q-function do so by minimizing the Bellman error:

Σ_t (Q_θ(s_t, u_t) − (c_t + γ min_u Q_θ(s_{t+1}, u)))²     (4)

It is typically hard or unstable to optimize for both Q-functions that appear in the Bellman error of Equation (4), so it is usually optimized by iteratively optimizing the current Q-function while keeping the target Q-function constant. However, we notice that for a given state, the action that minimizes its Q-values is the same for any non-negative scaling α of θ and for any bias b. Thus, to speed up the optimization of the Q-function, we first set α^(k−1/2) and b^(k−1/2) by jointly solving for the α and b of both the current and target Q-function:

min_{α ≥ 0, b} Σ_{i=1}^{N} (Q_{αθ^(k−1),b}(s_t^(i), u_t^(i)) − (c_t^(i) + γ min_u Q_{αθ^(k−1),b}(s_{t+1}^(i), u)))²     (5)

This is similar to how, in policy evaluation, state values can be computed by solving a linear system. We regularize the parameters with an ℓ2 penalty, weighted by ν > 0. We use the term FQI iteration to refer to each iteration k of optimizing the Bellman error, and we use the notation (k−1/2) to denote an intermediate step between iterations (k−1) and (k). The parameters θ can then be updated with θ^(k−1/2) = α^(k−1/2) θ^(k−1). Then, we update θ^(k) and b^(k) by optimizing for the θ and b of the current Q-function while keeping the parameters of the target Q-function fixed:

min_{θ ≥ 0, b} Σ_{i=1}^{N} (Q_{θ,b}(s_t^(i), u_t^(i)) − (c_t^(i) + γ min_u Q_{θ^(k−1/2),b^(k−1/2)}(s_{t+1}^(i), u)))²     (6)

In fitted Q-iteration, the agent iteratively gathers a dataset of N samples {(s_t^(i), u_t^(i), c_t^(i), s_{t+1}^(i))}_{i=1}^{N} according to an exploration policy, and then minimizes the Bellman error using this dataset. We use the term sampling iteration to refer to each iteration j of this procedure. At the beginning of each sampling iteration, the current policy with added Gaussian noise is used as the exploration policy.

Algorithm 1 FQI with initialization of policy-independent parameters
1: initialize θ^(0)
2: for s = 1, ..., S do                          ▷ sampling iterations
3:     gather dataset of N samples using the exploration policy
4:     for k = 1, ..., K do                      ▷ FQI iterations
5:         fit α^(k−1/2) and b^(k−1/2) using (5)
6:         θ^(k−1/2) ← α^(k−1/2) θ^(k−1)
7:         fit θ^(k) and b^(k) using (6)
8:     θ^(0) ← θ^(K)

Figure 3: Cars used to learn the dynamics and the feature weights. They were also used in some of the test experiments.

Figure 5: Costs of test executions using various feature dynamics models, where the feature weights are optimized with FQI. We test on cars that were used during learning (left plot) and on novel cars that were only used at test time (right plot). The reported values are the mean and standard error across 100 trajectories, of up to 100 time steps each. The policies based on pixel intensities use either fully connected or locally connected dynamics, whereas all the policies based on VGG features use locally connected dynamics. The policies based on deeper VGG features generally achieve better performance, except for the deepest feature representation, VGG conv5_3, which is not as suitable for approximating Q-values. The policies based on pixel intensities and VGG conv5_3 features perform worse on the novel cars. However, VGG features conv1_2 through conv4_3 achieve some degree of generalization on the novel cars.
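Since the Q-function is linear in θ and b, each fit in Algorithm 1 reduces to a small constrained least-squares problem. Before turning to the experiments, here is a toy sketch of the fit in Equation (6) under that assumption (the fit in Equation (5) is the analogous two-parameter problem); the data arrays, the solver choice, and all names are ours, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fqi_fit_step(phi, costs, q_min_next, gamma=0.9, nu=1e-3):
    """Fit theta >= 0 and a free bias b to the Bellman targets
    c_t + gamma * min_u Q_target(s_{t+1}, u), as in Equation (6), for
    Q_{theta,b}(s_t, u_t) = phi(s_t, u_t)^T theta + b.
    phi: (N, P) features of the taken actions; q_min_next: (N,) values of
    the target Q-function minimized over actions (assumed precomputed)."""
    n, p = phi.shape
    targets = costs + gamma * q_min_next
    # design matrix with a bias column, plus sqrt(nu) * I rows implementing
    # the l2 penalty on theta (the bias is left unregularized)
    A = np.vstack([np.hstack([phi, np.ones((n, 1))]),
                   np.hstack([np.sqrt(nu) * np.eye(p), np.zeros((p, 1))])])
    y = np.concatenate([targets, np.zeros(p)])
    lb = np.concatenate([np.zeros(p), [-np.inf]])   # theta >= 0, b free
    ub = np.full(p + 1, np.inf)
    sol = lsq_linear(A, y, bounds=(lb, ub))
    return sol.x[:p], sol.x[p]                      # theta, b
```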
"}, {"section_index": "8", "section_name": "6 EXPERIMENTS", "section_text": "We evaluate the performance of the model for visual servoing in a simulated environment. The simulated quadcopter is governed by rigid body dynamics. The robot has 4 degrees of freedom, corresponding to translation along three axes and yaw angle. This simulation is inspired by tasks in which an autonomous quadcopter flies above a city, with the goal of following some target object (e.g., a car).

The dynamics for each of the features were trained using a dataset of 10000 samples (corresponding to 100 trajectories) with ADAM (Kingma & Ba 2014). A single dynamics model was learned for each feature representation for all the training cars (Figure 3). This training set was generated by executing a hand-coded policy that navigates the quadcopter around a car for 100 time steps per trajectory, while the car moves around the city.

We used the proposed FQI algorithm to learn the weightings of the features and control regularizer. At every sampling iteration, the current policy was executed with Gaussian noise to gather data from 10 trajectories. All the trajectories in our experiments were up to 100 time steps long. The immediate cost received by the agent encodes the error of the target in image coordinates (details in Appendix B). Then, the parameters were iteratively updated by running K = 10 iterations of FQI. We ran the overall algorithm for only S = 2 sampling iterations and chose the parameters that achieved the best performance on 10 validation trajectories. These validation trajectories were obtained by randomly choosing 10 cars from the set of training cars and randomly sampling initial states, and executing the policy with the parameters of the current iteration. All the experiments share the same set of validation trajectories.

Figure 4: Novel cars used only in the test experiments. They were never seen during training or validation.

Table 1: Sample observations from test executions in our experiments with the novel cars, and the costs for each trajectory, for different feature dynamics. We use the weights learned by our FQI algorithm. In each row, we show the observations of every 10 steps and the last one. The first observation of each trajectory is used as the target observation. The trajectories shown here were chosen to reflect different types of behaviors. The servoing policy based on pixel feature dynamics can generally follow cars that can be discriminated based on RGB pixel intensities (e.g., a yellow car with a relatively uniform background). However, it performs poorly when distractor objects appear throughout the execution (e.g., a lamp) or when they appear in the target image (e.g., the crosswalk markings on the road). On the other hand, VGG conv4_3 features are able to discriminate the car from distractor objects and the background, and the feature weights learned by the FQI algorithm are able to leverage this. Additional sample executions with other feature dynamics can be found in Table 3 in the Appendix.

We compare the servoing performance for various feature dynamics models, where the weights are optimized with FQI. We execute the learned policies on 100 test trajectories and report the average cost of the trajectory rollouts in Figure 5. The cost of a single trajectory is the (undiscounted) sum of costs c_t. We test the policies with cars that were seen during training as well as with a set of novel cars (Figure 4), to evaluate the generalization of the learned dynamics and optimized policies.

From these results, we notice that policies based on deeper VGG features, up to VGG conv4_3, generally achieve better performance.
However, the deepest feature representation, VGG conv5_3, is not as suitable for approximating Q-values. We hypothesize that this feature might be too spatially invariant and that it might lack the necessary spatial information to differentiate among different car positions. The policies based on pixel intensities and VGG conv5_3 features perform worse on the novel cars. However, VGG features conv1_2 through conv4_3 achieve some degree of generalization on the novel cars.

We show sample trajectories in Table 1. The policy based on pixel intensities is susceptible to occlusions and distractor objects that appear in the target image or during executions. This is because distinguishing these occlusions and distractors from the cars cannot be done using just RGB features.

[Table 1 (body): the sample observations are image sequences and are omitted here. The per-trajectory costs were 0.95, 6.26, and 14.49 for pixel dynamics (locally connected), and 0.38, 0.48, and 1.02 for VGG conv4_3 dynamics.]

The test trajectories were obtained by randomly sampling 100 cars (with replacement) from one of the two sets of cars, and randomly sampling initial states (which are different from the ones used for validation). For consistency and reproducibility, the same sampled cars and initial states were used across all the test experiments, and the same initial states were used for both sets of cars. These test trajectories were never used during the development of the algorithm or for choosing hyperparameters.

Figure 6: Comparison of costs on test executions of prior methods against our method based on VGG conv4_3 feature dynamics. These costs are from executions with the training cars; the costs are comparable when testing with the novel cars (Table 2). The first two methods use classical image-based visual servoing (IBVS), with feature points from an off-the-shelf keypoint detector and descriptor extractor (ORB features), and with feature points extracted from bounding boxes predicted by a state-of-the-art visual tracker (C-COT tracker), respectively. The third method trains a convolutional neural network (CNN) policy end-to-end with Trust Region Policy Optimization (TRPO). The other methods use the servoing policy based on VGG conv4_3 feature dynamics, either with unweighted features or weights trained with TRPO for either 2 or 50 iterations. In the case of unweighted features, we learned the control weights λ and a single feature weight w with the cross-entropy method (CEM). We report the number of training trajectories in parentheses for the methods that require learning. For TRPO, we use a fixed number of training samples per iteration, whereas for CEM and FQI, we use a fixed number of training trajectories per iteration. We use a batch size of 4000 samples for TRPO, which means that at least 40 trajectories were used per iteration (since trajectories can terminate early, i.e. in less than 100 time steps)."}, {"section_index": "9", "section_name": "6.3 COMPARISON OF WEIGHTINGS FROM OTHER OPTIMIZATION METHODS", "section_text": "We compare our policy using conv4_3 feature dynamics, with weights optimized by FQI, against policies that use these dynamics but with either no feature weighting or weights optimized by other algorithms.

For the case of no weighting, we use a single feature weight w but optimize the relative weighting of the controls with the cross-entropy method (CEM) (De Boer et al. 2005). For the other cases, we learn the weights with Trust Region Policy Optimization (TRPO) (Schulman et al. 2015). Since the servoing policy is the minimizer of a quadratic objective (Equation (3)), we represent the policy
as a neural network that has a matrix inverse operation at the output. We train this network for 2 and 50 sampling iterations, and use a batch size of 4000 samples per iteration. All of these methods use the same feature representation as ours, the only difference being how the weights w and λ are chosen.

We report the average costs of these methods on the right of Figure 6. In 2 sampling iterations, the policy learned with TRPO does not improve by much, whereas our policy learned with FQI significantly outperforms the other policies. The policy learned with TRPO improves further in 50 iterations; however, the cost incurred by this policy is still about one and a half times the cost of our policy, despite using more than 100 times as many trajectories."}, {"section_index": "10", "section_name": "6.4 COMPARISON TO PRIOR METHODS", "section_text": "For one of the prior methods, we train a convolutional neural network (CNN) policy end-to-end with TRPO. The policy is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU activations except for the output layer; the convolutional layers use 16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. The policy takes in raw pixel intensities and outputs controls.

This policy achieves a modest performance (although still worse than the policies based on conv4_3 feature dynamics), but it requires significantly more training samples than any of the other learning-based methods. We also trained CNN policies that take in extracted VGG features (without any dynamics) as inputs, but they perform worse (see Table 4 in the Appendix). This suggests that given a policy parametrization that is expressive enough and given a large number of training samples, it is better to directly provide the raw pixel-intensity images to the policy instead of extracted VGG features. This is because VGG features are not optimized for this task and their representation loses some information that is useful for servoing.

The other two prior methods use classical image-based visual servoing (IBVS) (Chaumette & Hutchinson 2006) with respect to Oriented FAST and Rotated BRIEF (ORB) feature points (Rublee et al. 2011), or feature points extracted from a visual tracker. For the former, the target features consist of only the ORB feature points that belong to the car, and this specifies that the car is relevant for the task. For the tracker-based method, we use the Continuous Convolution Operator Tracker (C-COT) (Danelljan et al. 2016) (the current state-of-the-art visual tracker) to get bounding boxes around the car and use the four corners of the box as the feature points for servoing. We provide the ground truth car's bounding box of the first frame as an input to the C-COT tracker. For all of the IBVS methods, we provide the ground truth depth values of the feature points, which are used in the algorithm's interaction matrix.

The first method performs poorly, in part because ORB features are not discriminative enough for
some of the cars, and the target feature points are sometimes matched to feature points that ar not on the car. The tracker-based method achieves a relatively good performance. The gap i1. performance with respect to our method is in part due to the lack of car dynamics information i the IBVS model, whereas our method implicitly incorporates that in the learned feature dynamics. It is also worth noting that the tracker-based policy runs significantly slower than our method. Th. open-source implementation of the C-COT tracker|runs at about 1Hz whereas our policy base. on conv4_3 features runs at about 16Hz. Most of the computation time of our method is sper computing features from the VGG network, so there is room for speedups if we use a network tha. is less computationally demanding."}, {"section_index": "11", "section_name": "7 DISCUSSION", "section_text": "Manual design of visual features and dynamics models can limit the applicability of visual ser voing approaches. We described an approach that combines learned visual features with learning predictive dynamics models and reinforcement learning to learn visual servoing mechanisms. Our experiments demonstrate that standard deep features, in our case taken from a model trained foi object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlu sions. For control we propose to learn Q-values, building on fitted Q-iteration, which at execution time allows for one-step lookahead calculations that optimize long term objectives. Our method can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvemeni over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms."}, {"section_index": "12", "section_name": "ACKNOWLEDGEMENTS", "section_text": "This research was funded in part by the Army Research Office through the MAST program and the Berkeley DeepDrive consortium. Alex Lee was also supported by the NSF GRFP."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Andrea Censi and Richard M Murray. Bootstrapping bilinear models of simple vehicles. The Inter national Journal of Robotics Research, 34(8):1087-1113, 2015, 2015\nFrancois Chaumette and Seth Hutchinson. Visual servo control. I. Basic approaches. IEEE Robotic. & Automation Magazine, 13(4):82-90, 2006, 2006.\nJian Chen, Warren E Dixon, M Dawson, and Michael McIntyre. Homography-based visual servo. tracking control of a wheeled mobile robot. IEEE Transactions on Robotics, 22(2):406-415 2006, 2006.\nPeter I Corke. Visual control of robot manipulators - A review. Visual servoing, 7:1-31, 1993, 1993\nBernard Espiau, Francois Chaumette, and Patrick Rives. A new approach to visual servoing in OhotiesIEEF\nAmir Massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvari, and Shie Mannor. Reg ularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In American Control Conference, 2009. ACC'09., pp. 725-730. IEEE, 2009, 2009.\nKoichi Hashimoto. Visual servoing, volume 7. World scientific, 1993, 1993\nMartin Danelljan, Andreas Robinson, Fahad Shahbaz Khan, and Michael Felsberg. Beyond correla. 
tion filters: Learning continuous convolution operators for visual tracking. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 472-488. Springer, 2016, 2016..\nPieter-Tjerk De Boer, Dirk P Kroese, Shie Mannor, and Reuven Y Rubinstein. A tutorial on the\nSeth Hutchinson, Gregory D Hager, and Peter I Corke. A tutorial on visual servo control. IEEL transactions on robotics and automation, 12(5):651-670, 1996, 1996.\nMartin Jagersand, Olac Fuentes, and Randal Nelson. Experimental evaluation of uncalibrated visual servoing for precision manipulation. In Proceedings of the IEEE International Conference on. Robotics and Automation (ICRA), volume 4, pp. 2874-2880. IEEE. 1997. 1997.\nXu Jia. Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. Dynamic filter networks. In Advances in Neural Information Processing Systems (NIPS). pp. 667-675. 2016. 2016\nDanica Kragic and Henrik I Christensen. Survey on visual servoing for manipulation. Computa tional Vision and Active Perception Laboratory, Fiskartorpsy, 15, 2002, 2002..\nThomas Lampe and Martin Riedmiller. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2013, 2013.\nSascha Lange, Martin Riedmiller, and Arne Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2012, 2012.\nWilliam Lotter, Gabriel Kreiman, and David Cox. Deep predictive coding networks for video pre diction and unsupervised learning. CoRR, abs/1605.08104, 2016, 2016.\nMichael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyon mean square error. CoRR, abs/1511.05440. 2015. 2015\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daa. Wierstra, and Martin A. Riedmiller. Playing Atari with deep reinforcement learning. CoRR abs/1312.5602, 2013, 2013.\nJunhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Pro- cessing Systems (NIPS), pp. 2863-2871, 2015, 2015.\nKoh Hosoda and Minoru Asada. Versatile visual servoing without knowledge of true Jacobian. In Intelligent Robots and Systems' 94.'Advanced Robotic Systems and the Real World', IROs'94 Proceedings of the IEEE/RsJ/GI International Conference on, volume 1, pp. 186-193. IEEE 1994, 1994.\nSergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuo motor policies. Journal of Machine Learning Research, 17(39):1-40, 2016, 2016.\nDavid G Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision. 60(2):91-110. 2004. 2004\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale imag recognition. CoRR, abs/1409.1556. 2014. 2014.\nJacob Walker, Carl Doersch, Abhinav Gupta, and Martial Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 835-851. Springer, 2016, 2016\nFisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. 
CoRR, abs/1511.07122, 2015.

John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the International Conference on Machine Learning (ICML), pp. 1889-1897, 2015.

Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems (NIPS), pp. 2746-2754, 2015.

Lee E Weiss, Arthur C Sanderson, and Charles P Neuman. Dynamic sensor-based control of robots with visual feedback. IEEE Journal on Robotics and Automation, 1987.

Since the dynamics of each scale are linear in the controls, the one-step prediction can be written, for any nominal control ū, as

f^(l)(y^(l)_t, u) = f^(l)(y^(l)_t, ū) + J^(l)_t (u − ū),   where   J^(l)_t = ∂f^(l)(y^(l)_t, u)/∂u

Furthermore, the bilinear dynamics allows the Jacobian matrix to be computed efficiently by simply doing a forward pass through the model. For the locally connected bilinear dynamics of Equation (2), the j-th column of the Jacobian matrix is given by

∂f^(l)_c(y^(l)_t, u)/∂u_j = W^(l)_{c,j} * y^(l)_{t,c} + B^(l)_{c,j}
"}, {"section_index": "14", "section_name": "B SERVOING COST FUNCTION FOR REINFORCEMENT LEARNING", "section_text": "The goal of reinforcement learning is to find a policy that maximizes the expected sum of rewards, or equivalently, a policy that minimizes the expected sum of costs. The cost should be one that quantifies progress towards the goal. We define the cost function in terms of the position of the target object (in the camera's local frame) after the action has been taken:

c(s_t, u_t, s_{t+1}) = ‖p_{t+1} − p_*‖²_2   if ‖p_{t+1}‖_2 ≥ τ and the car is in the FOV,
c(s_t, u_t, s_{t+1}) = (T − t + 1) c(s_t)   otherwise,

where T is the maximum trajectory length. The episode terminates early if the camera is too close to the car (less than a distance τ) or the car's origin is outside the camera's field of view (FOV). The car's position at time t is p_t = (p^x_t, p^y_t, p^z_t) and the car's target position is p_* = (0, 0, p^z_*), both in the camera's local frame (the z-direction is forward). Our experiments use T = 100 and τ = 4 m.

The camera is attached to the vehicle slightly in front of the robot's origin and facing down at an angle of π/6 rad, similar to a commercial quadcopter drone. The robot has 4 degrees of freedom, corresponding to translation and yaw angle. Pitch and roll are held fixed.

In our simulations, the quadcopter follows a car that drives at 1 m s⁻¹ along city roads during training and testing. The quadcopter's speed is limited to within 10 m s⁻¹ for each translational degree of freedom, and its angular speed is limited to within π/2 rad s⁻¹. The simulator runs at 10 Hz. For each trajectory, a car is chosen randomly from a set of cars, and placed randomly on one of the roads. The quadcopter is initialized right behind the car, in the desired relative position for following. The image observed at the beginning of the trajectory is used as the goal observation.

The dynamics of all the features were trained using a dataset of 10000 triplets x_t, u_t, x_{t+1}. The observations are 128 × 128 RGB images and the actions are 4-dimensional vectors of real numbers encoding the linear and angular (yaw) velocities. The actions are normalized to between −1 and 1.

The training set was generated from 100 trajectories of a quadcopter following a car around the city with some randomness. Each trajectory was 100 steps long. Only 5 training cars were shown during learning. The generation process of each trajectory is as follows: First, a car is chosen at random from the set of available cars and it is randomly placed on one of the roads. Then, the quadcopter is placed at some random position relative to the car's horizontal pose, which is the car's pose that has been rotated so that its vertical axis matches that of the world. This quadcopter position is uniformly sampled in cylindrical coordinates relative to the car's horizontal pose, with heights in the interval 12 m to 18 m, and azimuthal angles in the interval −π/2 rad to π/2 rad (where the origin of the azimuthal angle is the back of the car). The radii and yaw angles are initialized so that the car is in the middle of the image. At every time step, the robot takes an action that moves it towards a target pose, with some additive Gaussian noise (σ = 0.2). The target pose is sampled according to the same procedure as the initial pose, and it is sampled once at the beginning of each trajectory.
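As a rough illustration, the cost of Appendix B can be computed as in the sketch below. This is written against our reconstruction of the equation above, so the exact per-step error term and termination handling in the original implementation may differ; all names are illustrative.

```python
import numpy as np

def servoing_cost(p_next, p_target, t, last_cost, T=100, tau=4.0,
                  car_in_fov=True):
    """Sketch of the cost in Appendix B. p_next is the car's position in the
    camera frame after the action, p_target the desired relative position
    (0, 0, p_z*). If the episode terminates early (camera within tau of the
    car, or car out of the field of view), the remaining T - t + 1 steps
    are charged at the last cost."""
    if np.linalg.norm(p_next) >= tau and car_in_fov:
        return float(((p_next - p_target) ** 2).sum())
    return (T - t + 1) * last_cost
```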
The camera is attached to the vehicle slightly in front of the robot's origin and facing down at an angle of π/6 rad, similar to a commercial quadcopter drone. The robot has 4 degrees of freedom, corresponding to translation and yaw angle. Pitch and roll are held fixed.

In our simulations, the quadcopter follows a car that drives at 1 m s⁻¹ along city roads during training and testing. The quadcopter's speed is limited to within 10 m s⁻¹ for each translational degree of freedom, and its angular speed is limited to within π/2 rad s⁻¹. The simulator runs at 10 Hz. For each trajectory, a car is chosen randomly from a set of cars, and placed randomly on one of the roads. The quadcopter is initialized right behind the car, in the desired relative position for following. The image observed at the beginning of the trajectory is used as the goal observation.

The dynamics of all the features were trained using a dataset of 10000 triplets (x_t, u_t, x_{t+1}). The observations are 128 × 128 RGB images and the actions are 4-dimensional vectors of real numbers encoding the linear and angular (yaw) velocities. The actions are normalized to between −1 and 1.

The training set was generated from 100 trajectories of a quadcopter following a car around the city with some randomness. Each trajectory was 100 steps long. Only 5 training cars were shown during learning. The generation process of each trajectory is as follows: First, a car is chosen at random from the set of available cars and is randomly placed on one of the roads. Then, the quadcopter is placed at some random position relative to the car's horizontal pose, which is the car's pose rotated so that its vertical axis matches that of the world. This quadcopter position is uniformly sampled in cylindrical coordinates relative to the car's horizontal pose, with heights in the interval 12 m to 18 m, and azimuthal angles in the interval −π/2 rad to π/2 rad (where the origin of the azimuthal angle is the back of the car). The radii and yaw angles are initialized so that the car is in the middle of the image. At every time step, the robot takes an action that moves it towards a target pose, with some additive Gaussian noise (σ = 0.2). The target pose is sampled according to the same procedure as the initial pose, and it is sampled once at the beginning of each trajectory.

We try the fully and locally connected dynamics for pixel intensities to better understand the performance trade-offs when assuming locally connected dynamics. We do not use the latter for the semantic features since they are too high-dimensional for the dynamics model to fit in memory. The dynamics models were trained with ADAM using 10000 iterations, a batch size of 32, a learning rate of 0.001, momentums of 0.9 and 0.999, and a weight decay of 0.0005.

Policy Optimization Algorithm

(a) Costs when using the set of cars seen during learning

Feature Dynamics | unweighted feature dynamics + CEM (1500) | feature dynamics + CEM (3250) | feature dynamics + TRPO (≥ 80) | feature dynamics + TRPO (≥ 2000) | ours, feature dynamics + FQI (20)
pixel, FC | 8.84 ± 0.68 | 8.66 ± 0.70 | 10.01 ± 0.62 | 8.75 ± 0.67 | 9.00 ± 0.70
pixel, LC | 8.37 ± 0.75 | 7.17 ± 0.75 | 11.29 ± 0.57 | 8.25 ± 0.71 | 8.36 ± 0.79
VGG conv1_2 | 2.03 ± 0.43 | — | 1.79 ± 0.36 | 1.42 ± 0.33 | 1.78 ± 0.37
VGG conv2_2 | 2.01 ± 0.44 | — | 2.00 ± 0.45 | 1.26 ± 0.30 | 1.28 ± 0.30
VGG conv3_3 | 2.03 ± 0.47 | — | 2.08 ± 0.47 | 1.46 ± 0.37 | 1.04 ± 0.31
VGG conv4_3 | 2.40 ± 0.50 | — | 2.57 ± 0.53 | 1.48 ± 0.36 | 0.90 ± 0.26
VGG conv5_3 | 3.31 ± 0.45 | — | 3.55 ± 0.50 | 2.76 ± 0.42 | 2.56 ± 0.41

(b) Costs when using novel cars, none of which were seen during learning

Feature Dynamics | unweighted feature dynamics + CEM (1500) | feature dynamics + CEM (3250) | feature dynamics + TRPO (≥ 80) | feature dynamics + TRPO (≥ 2000) | ours, feature dynamics + FQI (20)
pixel, FC | 8.20 ± 0.66 | 7.77 ± 0.66 | 9.56 ± 0.62 | 8.03 ± 0.66 | 7.92 ± 0.67
pixel, LC | 8.07 ± 0.74 | 7.13 ± 0.74 | 10.11 ± 0.60 | 7.97 ± 0.72 | 7.98 ± 0.77
VGG conv1_2 | 2.22 ± 0.38 | — | 2.06 ± 0.35 | 1.66 ± 0.31 | 1.89 ± 0.32
VGG conv2_2 | 2.40 ± 0.47 | — | 2.42 ± 0.47 | 1.89 ± 0.40 | 1.40 ± 0.29
VGG conv3_3 | 2.91 ± 0.52 | — | 2.87 ± 0.53 | 1.59 ± 0.42 | 1.56 ± 0.40
VGG conv4_3 | 2.70 ± 0.52 | — | 2.57 ± 0.49 | 1.69 ± 0.41 | 1.11 ± 0.29
VGG conv5_3 | 3.68 ± 0.47 | — | 3.69 ± 0.48 | 3.16 ± 0.48 | 2.49 ± 0.35

Table 2: Costs on test executions of the dynamics-based servoing policies for different feature dynamics and weightings of the features. The reported numbers are the mean and standard error across 100 test trajectories of up to 100 time steps each. We test on executions with the training cars and the novel cars; for consistency, the novel cars follow the same route as the training cars. We compare the performance of policies with unweighted features or weights learned by other methods. For the case of unweighted feature dynamics, we use the cross-entropy method (CEM) to learn the relative weights of the control and the single feature weight w. For the other cases, we learn the weights with CEM, Trust Region Policy Optimization (TRPO) for either 2 or 50 iterations, and our proposed FQI algorithm. CEM searches over the full space of policy parameters w and λ, but it was only run for pixel features since it does not scale for high-dimensional problems. We report the number of training trajectories in parentheses. For TRPO, we use a fixed number of training samples per iteration, whereas for CEM and FQI, we use a fixed number of training trajectories per iteration. We use a batch size of 4000 samples for TRPO, which means that at least 40 trajectories were used per iteration, since trajectories can terminate early, i.e., in less than 100 time steps.
"}, {"section_index": "15", "section_name": "C.3 LEARNING WEIGHTING OF FEATURE DYNAMICS WITH REINFORCEMENT LEARNING", "section_text": "We use CEM, TRPO and FQI to learn the feature weighting and report the performance of the learned policies in Table 2. We use the cost function described in Appendix B, a discount factor of γ = 0.9, and trajectories of up to 100 steps. All the algorithms used initial weights of w = 1 and λ = 1, and a Gaussian exploration policy with the current policy as the mean and a fixed standard deviation σ_exploration = 0.2.

For the case of unweighted features, we use CEM to optimize for a single weight w and for the weights λ. For the case of weighted features, we use CEM to optimize over the full space of parameters, but we only do that for the pixel feature dynamics since CEM does not scale to high-dimensional problems, which is the case for all the VGG features. Each iteration of CEM performs a certain number of noisy evaluations and selects the top 20% for the elite set. The number of noisy evaluations per iteration was 3 times the number of parameters being optimized. Each noisy evaluation used the average sum of costs of 10 trajectory rollouts as its evaluation metric. The parameters of the last iteration were used for the final policy. The policies with unweighted feature dynamics and the policies with pixel feature dynamics were trained for 10 and 25 iterations, respectively.
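For reference, a minimal sketch of the CEM loop just described follows. `evaluate_policy` is a hypothetical stand-in for running 10 rollouts with the candidate parameters and averaging the summed costs; the non-negativity clipping and the diagonal-Gaussian refit are assumptions consistent with the non-negative weight constraint described elsewhere in this appendix.

```python
import numpy as np

def cem(evaluate_policy, dim, iters=10, elite_frac=0.2, mu0=1.0, sigma0=0.5):
    """Cross-entropy method over non-negative policy parameters (w, lambda).
    Each iteration draws 3 * dim noisy candidates, keeps the best 20% as the
    elite set, and refits a diagonal Gaussian to the elites."""
    mu, sigma = np.full(dim, mu0), np.full(dim, sigma0)
    n_samples = 3 * dim
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        cand = np.maximum(mu + sigma * np.random.randn(n_samples, dim), 0.0)
        costs = np.array([evaluate_policy(c) for c in cand])  # mean cost over 10 rollouts
        elite = cand[np.argsort(costs)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu
```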
We use our proposed FQI algorithm to optimize for the weights w, λ, and surpass the other methods in terms of performance on test executions, sample efficiency, and overall computation efficiency¹. The updates of the inner iteration of our algorithm are computationally efficient; since the data is fixed for a given sampling iteration, we can precompute φ(s_t, u_t) and certain terms of φ(s_{t+1}, ·). The parameters that achieved the best performance on 10 validation trajectories were used for the final policy. The policies are trained with FQI for S = 2 sampling iterations, a batch size of 10 trajectories per sampling iteration, K = 10 inner iterations per sampling iteration, and a regularization coefficient of ν = 0.1. We found that regularization of the parameters was important for the algorithm to converge. We show sample trajectories of the resulting policies in Table 3.

The FQI algorithm often achieved most of its performance gain after the first iteration. We ran additional sampling iterations of FQI to see if the policies improved further. For each iteration, we evaluated the performance of the policies on 10 validation trajectories. We did the same for the policies trained with TRPO, and we compare the learning curves of both methods in Figure 7.

¹Our policy based on conv4_3 features takes around 650 s to run K = 10 iterations of FQI for a given batch size of 10 training trajectories.

Figure 7: Costs of validation executions using various feature dynamics models (pixel intensities with fully or locally connected dynamics, and VGG conv1_2 through conv5_3), where the feature weights are optimized with FQI (left plot, sampling iterations 0-10, up to 10000 training samples) or TRPO (right plot, sampling iterations 0-50, up to 200000 training samples). The reported values are the mean and standard error across 10 validation trajectories, of up to 100 time steps each.

Feature Dynamics | Costs of two sample test trajectories
pixel, fully connected | 24.74, 16.69
pixel, locally connected | 24.92, 16.47
VGG conv1_2 | 15.91, 1.57
VGG conv2_2 | 7.53, 2.56
VGG conv3_3 | 6.01, 3.76
VGG conv4_3 | 5.94, 4.31
VGG conv5_3 | 15.51, 17.39

Table 3: Sample observations from test executions in our experiments, and the costs for each trajectory, for different feature dynamics. We use the weights learned by our FQI algorithm. This table follows the same format as Table 1. Some of the trajectories were shorter than 100 steps because of the termination condition (e.g., the car is no longer in the image). The first observation of each trajectory is used as the target observation. The trajectories shown here were chosen to reflect different types of behaviors. In the first trajectory, the blue car turns abruptly to the right, making the view significantly different from the target observation. In the second trajectory, a distractor object (i.e., the lamp) shows up in the target image and an occluder object (i.e., the traffic light) appears through the execution. The policies based on deeper VGG features, up to VGG conv4_3, are generally more robust to the appearance changes between the observations and the target observation, which are typically caused by movements of the car, distractor objects, and occlusions.
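Returning to the FQI inner iteration described above, the sketch below illustrates how cheap the per-iteration update is once the batch features are precomputed. The Q-function is assumed linear in the parameters, and `phi_next_best` (the features of the greedy next action) is assumed precomputed for the fixed batch; regularizing toward the initial parameters is an assumption made for illustration.

```python
import numpy as np

def fqi_inner(phi, phi_next_best, costs, theta0, K=10, nu=0.1, gamma=0.9):
    """Fitted Q-iteration inner loop for Q_theta(s, u) = phi(s, u) . theta.
    Each iteration recomputes Bellman targets with the current theta and
    refits theta by regularized least squares on the fixed batch."""
    theta = theta0.copy()
    A = phi.T @ phi + nu * np.eye(len(theta))   # constant for the whole loop
    for _ in range(K):
        targets = costs + gamma * phi_next_best @ theta   # Bellman targets
        theta = np.linalg.solve(A, phi.T @ targets + nu * theta0)
    return theta
```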
We use TRPO to optimize for the full space of parameters for each of the feature dynamics we consider in this work. We use a Gaussian policy, where the mean is the servoing policy of Equation (3) and the standard deviation is fixed to σ_exploration = 0.2 (i.e., we do not learn the standard deviation). Since the parameters are constrained to be non-negative, we parametrize the TRPO policies with √w and √λ. We use a Gaussian baseline, where the mean is a 5-layer CNN, consisting of 2 convolutional and 3 fully connected layers, and a standard deviation that is initialized to 1. The convolutional layers use 16 filters (4 × 4, stride 2) each, the first 2 fully-connected layers use 32 hidden units each, and all the layers except for the last one use ReLU activations. The input of the baseline network is the features (either pixel intensities or VGG features) corresponding to the feature dynamics being used. The parameters of the last iteration were used for the final policy. The policies are trained with TRPO for 50 iterations, a batch size of 4000 samples per iteration, and a step size of 0.01.

We use TRPO to train end-to-end servoing policies for various observation modalities and report the performance of the learned policies in Table 4. The policies are trained with the set of training cars, and tested on both this set and on the set of novel cars. The observation modalities that we consider are ground truth car positions (relative to the quadcopter), images of pixel intensities from the quadcopter's camera, and VGG features extracted from those images. Unlike our method and the other experiments, no feature dynamics are explicitly learned for these experiments.

We use a Gaussian policy, where the mean is either a multi-layer perceptron (MLP) or a convolutional neural net (CNN), and the standard deviation is initialized to 1. We also use a Gaussian baseline, which is parametrized just as the corresponding Gaussian policy (but no parameters are shared between the policy and the baseline). For the policy that takes in car positions, the mean is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each.

The CNN policies would often not converge for several randomly initialized parameters. Thus, at the beginning of training, we tried multiple random seeds until we got a policy that achieved a relatively low cost on validation trajectories, and used the best initialization for training. The MLP policy did not have this problem, so we did not have to try multiple random initializations for it. All the policies are trained with a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The parameters of the last iteration were used for the final policy.

(a) Costs when using the set of cars seen during learning

Observation Modality | Cost
ground truth car position | 0.59 ± 0.24
raw pixel-intensity images | 5.20 ± 0.40
VGG conv1_2 features | 8.35 ± 0.44
VGG conv2_2 features | 14.01 ± 0.47
VGG conv3_3 features | 10.51 ± 0.65

(b) Costs when using a new set of cars, none of which were seen during learning

Table 4: Costs on test executions of servoing policies that were trained end-to-end with TRPO. These policies take in different observation modalities: ground truth car position or image-based observations. This table follows the same format as Table 2. The mean of the first policy is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. All the policies are trained with TRPO, a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The car position observations are not affected by the appearance of the cars, so the test performance for that modality is the same regardless of which set of cars is used.
Traditional visual servoing techniques (Feddema & Mitchell, 1989; Weiss et al., 1987) use the image-plane coordinates of a set of points for control. For comparison to our method, we evaluate the servoing performance of feature points derived from bounding boxes and keypoints derived from hand-engineered features, and report the costs of test executions in Table 5.

Feature Points (gain) | Cost
corners of bounding box from C-COT tracker (0.75) | 1.70 ± 0.30
corners of ground truth bounding box (0.75) | 0.86 ± 0.25
corners of next frame's bounding box from C-COT tracker (0.65) | 1.46 ± 0.22
corners of next frame's ground truth bounding box (0.65) | 0.53 ± 0.05
SIFT feature points (0.30) | 14.47 ± 0.75
SURF feature points (0.60) | 16.37 ± 0.78
ORB feature points (0.30) | 4.41 ± 0.60

Table 5: Costs on test executions when using classical image-based visual servoing (IBVS) with respect to feature points derived from bounding boxes and keypoints derived from hand-engineered features. Since there is no learning involved in this method, we only test with one set of cars: the cars that were used for training in the other methods. This table follows the same format as Table 2. This method has one hyperparameter, which is the gain for the control law. For each feature type, we select the best hyperparameter (shown in parentheses) by validating the policy on 10 validation trajectories for gains between 0.05 and 2, in increments of 0.05. The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods.

We use bounding boxes from the C-COT tracker (Danelljan et al., 2016) (the current state-of-the-art visual tracker) and ground truth bounding boxes from the simulator. The latter is defined as the box that tightly fits around the visible portions of the car. We provide the ground truth bounding box of the first frame to the C-COT tracker to indicate that we want to track the car. We use the four corners of the box as the feature points for servoing to take into account the position and scale of the car in image coordinates.

We provide the ground truth depth values of the feature points for the interaction matrices. In classical image-based visual servoing, the control law involves the interaction matrix (also known as the feature Jacobian), which is the Jacobian of the points in image space with respect to the camera's control (see Chaumette & Hutchinson (2006) for details). The analytical feature Jacobian used in IBVS assumes that the target points are static in the world frame. This is not true for a moving car, so we consider a variant where the feature Jacobian incorporates the ground truth dynamics of the car. This amounts to adding a non-constant translation bias to the output of the dynamics function, where the translation is the displacement, due to the car's movement, of the 3-dimensional point in the camera's reference frame. Note that this is still not exactly equivalent to having the car be static, since the roads have different slopes but the pitch and roll of the quadcopter are constrained to be fixed.

For the hand-crafted features, we consider SIFT (Lowe, 2004), SURF (Bay et al., 2006) and ORB (Rublee et al., 2011) keypoints. We filter out the keypoints of the first frame that do not belong to the car and use the remaining ones as the target keypoints. However, we use all the keypoints for the subsequent observations.

The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods. This is, in part, because the feature extraction and matching process introduces compounding errors. Similar results were found by Collewet & Marchand (2011), who proposed photometric visual servoing (i.e., servoing with respect to pixel intensities) and showed that it outperforms, by an order of magnitude, classical visual servoing that uses SURF features.
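To make the IBVS baseline concrete, the following is a minimal sketch of the textbook point-feature control law from Chaumette & Hutchinson (2006), using normalized image coordinates. It is an illustration of the classical method being compared against, not of this paper's implementation; the depths are assumed given, matching the ground-truth-depth setup described above.

```python
import numpy as np

def ibvs_control(points, depths, target_points, lam=0.75):
    """Classical IBVS law u = -lam * L^+ (s - s*) for point features.
    `points` are current normalized image coordinates (x, y), `depths`
    their depths Z, and `lam` is the servoing gain (the single
    hyperparameter referred to in Table 5)."""
    rows = []
    for (x, y), Z in zip(points, depths):
        # Interaction matrix of one point for a 6-DOF camera twist
        # (vx, vy, vz, wx, wy, wz), in normalized coordinates.
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    L = np.array(rows)
    error = (np.asarray(points) - np.asarray(target_points)).ravel()
    return -lam * np.linalg.pinv(L) @ error   # commanded camera velocity
```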
Observation Modality (Pose) | Policy Variant: Use Rotation | Policy Variant: Ignore Rotation
car pose | (1.55) 0.58 ± 0.25 | (1.90) 0.51 ± 0.25
next frame's car pose | (1.00) 0.0059 ± 0.0020 | (1.00) 0.0025 ± 0.0017

Table 6: Costs on test executions when using classical position-based visual servoing (PBVS). Since there is no learning involved in this method, we only test with one set of cars: the cars that were used for training in the other methods. This table follows the same format as Table 2. This method has one hyperparameter, which is the gain for the control law. For each condition, we select the best hyperparameter (shown in parentheses) by validating the policy on 10 validation trajectories for gains between 0.05 and 2, in increments of 0.05. These servoing policies, which use ground truth car poses, outperform all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used.

"}, {"section_index": "16", "section_name": "C.6 CLASSICAL POSITION-BASED VISUAL SERVOING", "section_text": "Similar to our IBVS experiments, we consider a variant that uses the car pose of the next time step as a way to incorporate the ground truth car dynamics into the interaction matrix. Since the cost function is invariant to the orientation of the car, we also consider a variant where the policy only minimizes the translational part of the pose error.

These servoing policies, which use ground truth car poses, outperform all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used."}]
BkmM8Dceg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "A crucial aspect of current deep learning architectures is the encoding of invariances. This fac. is epitomized in the success of convolutional neural networks (CNN), where equivariance to image. translation is key: translating the input results in a translated output. When invariances are present ir the data, encoding them explicitly in an architecture provides an important source of regularization. which allows to reduce the amount of training data required for learning. Invariances may also be. used to improve the efficiency of implementations; for instance, a convolutional layer requires orders. of magnitude less memory and also less computation compared to an equivalent fully-connectec layer.\nThe success of CNNs indicates that translation invariance is an important property of images. How. ever, this does not explain why translation equivariant operators work well for image understanding. The common interpretation is that such operators are matched to the statistics of natural images. which are well known to be translation invariant (Hyvarinen et al., 2o09). However, natural imag. statistics are also (largely) invariant to other transformations such as isotropic scaling and rotatior which suggests that alternative neural network designs may also work well with images. Further. more, in specific applications, invariances other than translation may be more appropriate..\nTherefore, it is natural to consider generalizing convolutional architectures to other image transfor. mations, and this has been the subject of extensive study (Kanazawa et al., 2014; Bruna et al., 2013;. Cohen & Welling, 2016). Unfortunately these approaches do not possess the same memory and. speed benefits that CNNs enjoy. The reason is that, ultimately, they have to transform (warp) an image or filter several times (Kanazawa et al., 2014; Marcos et al., 2016; Dieleman et al., 2015), in. curring a high computational burden. Another approach is to consider a basis of filters (analogous to. eigen-images) encoding the desired invariance (Cohen & Welling, 2014; Bruna et al., 2013; Cohen. & Welling, 2016), which requires more storage than a convolutional filter.."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Although they are able to handle transformations with many pose parameters, in practice most recent proposals are limited to very coarsely discretized transformations, such as horizontal/vertical flips and 90 rotations (Dieleman et al., 2015; Cohen & Welling, 2014)..\nIn this work we propose a generalization of CNNs that overcomes these disadvantages. Our mair. result shows that a linear layer with equivariance w.r.t. a large class of 2-parameters transformations can always be implemented efficiently, using a standard convolution in a warped image space. The. image warp can be implemented using bilinear resampling, a simple and fast operation that has beer. popularized by spatial transformer networks (Jaderberg et al., 2015), and is part of most deep learn. ing toolboxes. Unlike previous proposals, the proposed warped convolutions can handle continuous. transformations, such as fine rotation and scaling.\nThis makes generalized convolution easily implementable in neural networks, including using fast convolution algorithms on GPU hardware, such as Winograd (Lavin, 2015) or the Fast Fourier Trans form (Lyons, 2010). 
We present these notions in the simplest possible way (sections 2 to 4), but we note that they can be derived in broader generality from well-known concepts of group theory (section 4.2).

We start by looking at the basic building block of CNNs, i.e. the convolution operator. This operator computes the inner product of an image I ∈ R^{m×n} with a translated version of the filter F ∈ R^{r×s}, producing a new image as output:

H_j = Σ_k I_k F_{k+j},    (1)

where k, j ∈ Z² are two-dimensional vectors of indexes, and the summation ranges inside the extents of both arrays.¹ To handle continuous deformations of the input, it is more natural to express eq. 1 as an integral over continuous rather than discrete inputs:

H(u; I) = ∫ I(x) F(x + u) dx,    (2)

where I(x) and F(x) are continuous functions over a bounded 2D region Ω ⊂ R², that is, I, F : Ω → R. The real-valued 2D vectors x ∈ Ω now play the role of the indexes k ∈ Z². Equation 2 reduces to the discrete case of eq. 1 if we define I(x) and F(x) as the sum of delta functions on grids. Intermediate values can be obtained by interpolation, such as bilinear (which amounts to convolution of the delta functions with a triangle filter (Jaderberg et al., 2015)). Importantly, such continuous images can be deformed by very rich continuous transformations of the input coordinates, whereas strictly discrete operations would be more limiting.

¹Note that eq. 1 defines cross-correlation in the signal processing literature, but here we follow the convention used in machine learning and call it convolution. We also ignore the possibility that the input image has more than one channel, and that convolution layers in CNNs involve banks of filters instead of single ones. All such details are immaterial to our discussion.

Over the next sections it will be more convenient to translate the image I instead of the filter F. This alternative form of eq. 2 is obtained by replacing x + u → x:

H(u; I) = ∫ I(x − u) F(x) dx.    (3)

The standard convolution operator of eq. 3 can be interpreted as applying the filter to translated versions of the image. Translations can be replaced by other transformations as follows (Henriques et al., 2014):

H(t; I) = ∫ I(t(x)) F(x) dx,  t ∈ G,    (4)

where G is a set of transformation functions t : Ω → Ω (assumed to be invertible). Intuitively, this generalized convolution performs an exhaustive search for a pattern, at many different poses (Henriques et al., 2014; Kanazawa et al., 2014). The interest in this definition lies in the fact that it makes convolution equivariant (Lenc & Vedaldi, 2015):

Lemma 1 (Equivariance). Consider the generalized convolution operator H(t; I) of eq. 4. Generalized convolution "commutes" with any transformation q ∈ G of the image:

H(t; I ∘ q) = H(q ∘ t; I).    (5)

Proof. One has immediately H(t; I ∘ q) = ∫ I(q(t(x))) F(x) dx = H(q ∘ t; I).

A notable case is when transformations have an additive parametrization t : R² × Ω → Ω, with (x, u) ↦ t_u(x) and t_u ∘ t_v = t_{u+v}. In this case, the equivariance relation can be written as

H(u; I ∘ t_v) = H(v + u; I).    (6)

In particular, standard convolution is obtained when t_u(x) = x − u is the translation operator. In this case, the lemma above simply states that any translation of the input of the convolution results in a corresponding translation of the output.
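The translation case of Lemma 1 can be verified numerically. The snippet below is a small sanity check, not part of the paper's method; it uses circular (wrap-around) boundaries so that the equivariance holds exactly rather than up to border effects.

```python
import numpy as np
from scipy.signal import correlate2d

# Check H(t; I o q) = H(q o t; I) for a translation q: correlating a
# circularly shifted image equals circularly shifting the correlation.
# Cross-correlation is used, matching the convention of eq. (1).
rng = np.random.default_rng(0)
I = rng.standard_normal((32, 32))
F = rng.standard_normal((5, 5))
shifted = np.roll(I, shift=(3, 0), axis=(0, 1))            # I o q
H1 = correlate2d(shifted, F, mode="same", boundary="wrap")
H2 = np.roll(correlate2d(I, F, mode="same", boundary="wrap"),
             shift=(3, 0), axis=(0, 1))                    # q applied to H(I)
assert np.allclose(H1, H2)
```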
In section 5, we will look in more detail at a few concrete examples of transformations other than translations. Although we will not do so explicitly, in this construction it is also possible to let one or more dimensions of the parameter space R² be given modulo a period Q, in the sense of replacing R with R/Z(Q); the latter is required to parameterize transformations such as rotation.

Unfortunately, what eq. 4 gains us in generality, it loses in both performance and ease of implementation. Most works in computer vision that looked at filtering under generalized transformations (e.g. scale pyramids (Kanazawa et al., 2014) or rotated filter banks (Marcos et al., 2016; Cohen & Welling, 2014; 2016; Henriques et al., 2014)) compute eq. 4 directly by evaluating a large number of transformations t ∈ G. This entails warping (transforming) either the image or the filter once per transformation t, which can be expensive.

Opting to transform the filter instead of the image can be advantageous, since it is smaller in size. On the other hand, the filter and its domain then become spatially-varying, which foregoes the benefit of the regular, predictable, and local pattern of computations in standard convolution. It precludes the use of fast convolution routines such as Winograd's algorithm (Lavin, 2015), or the Fast Fourier Transform (Lyons, 2010), which has lower computational complexity than exhaustive search (eq. 3).

In practice, most recent works focus on very coarse transformations that do not change the filter support and can be implemented strictly via permutations, like horizontal/vertical flips and 90° rotations (Dieleman et al., 2015; Cohen & Welling, 2014). Such difficulties explain why generalized convolutions are not as widespread as CNNs.

In section 4 we will show that, for an important class of transformations, including the ones considered in previous works (such as Kanazawa et al. (2014); Cohen & Welling (2014); Marcos et al. (2016)), it is possible to perform generalized convolution by composing a single warp with a standard convolution, instead of several warps. Thus, we are able to take full advantage of modern convolution implementations (Lavin, 2015; Lyons, 2010), including those with lower computational complexity."}, {"section_index": "2", "section_name": "4 MAIN RESULT", "section_text": "Our main contribution is to show that the generalized convolution operator of eq. 4 can be implemented efficiently by a standard convolution, by pre-warping the input image and filter appropriately. The warp is the same for any image, depending solely on the nature of the relevant transformations, and can be written in closed form. This result, given in theorem 1, allows us to implement very efficient generalized convolutions using simple computational blocks, as shown in section 4.1. We name this method warped convolution.

The strongest assumption is that transformations must have an additive parametrization. By this, we mean that there exists a bijection t_u : Ω → Ω such that, for any u, v ∈ R², parameters compose additively, t_u ∘ t_v = t_{u+v}. The second assumption is that there exists a pivot point x_0 ∈ Ω such that u ↦ t_u(x_0) defines a bijection R² → Ω from the parameter space to the real plane. The latter requirement means that any point x ∈ Ω can be "reached" by transforming x_0 under a suitable t_u.

Theorem 1. Consider the generalized convolution of eq. 4. Assume that the transformation is additive (t_u ∘ t_v = t_{u+v}). Assume also that, for a fixed pivot point x_0, the function u ↦ t_u(x_0) is bijective. Then we can rewrite generalized convolution (eq. 4) as the standard convolution

H(u; I) = ∫ Î(u + v) F̂(v) dv,    (7)

where the warped image Î and warped filter F̂ are given by

Î(u) = I(t_u(x_0)),    F̂(u) = F(t_u(x_0)) |det (dt_u(x_0)/du)|.

We then have that

H(u; I) = ∫ I(t_u(x)) F(x) dx
        = ∫ I(t_u(t_v(x_0))) F(t_v(x_0)) |det (dt_v(x_0)/dv)| dv
        = ∫ I(t_{u+v}(x_0)) F̂(v) dv
        = ∫ Î(u + v) F̂(v) dv,

where the second equality uses the change of variables x = t_v(x_0).

The last factor in eq. 7 is the determinant of the Jacobian of the image transformation t_v. It rescales the image values to account for the stretching and shrinking of space due to non-linear warps. It can also be computed offline, and its application amounts to an element-wise product by a constant array. A generalization using group theory is discussed in section 4.2.

The warp that is applied to both inputs in eq. 7 can be interpreted as follows.
We start with an arbitrary pivot point x_0 in the image and then sample other points by repeatedly applying the transformation t_u(x_0) to the pivot (by varying u). When discretized, this sampling is performed over a 2D grid of parameters u. Finally, sampling the input at these points (for example, by bilinear interpolation) yields the warped input.

An illustration is given in fig. 1, for various transformations (each one is discussed in more detail in section 5). The red dot shows the pivot point x_0, and the two arrows pointing away from it show the two directions of increasing u values (recall that transformation parameters are two-dimensional). The grids were generated by sampling u at regular intervals. Note that the warp grids are independent of the image contents — they can be computed once offline and then applied to any image.

Figure 1: First row: Sampling grids that define the warps associated with different spatial transformations: (a) translation, (b) scale/aspect ratio, (c) scale/rotation, (d) 3D rotation (yaw/pitch). Second row: An example image (a) after warping with each grid (b-d). Third row: A small translation is applied to each warped image, which is then mapped back to the original space (by an inverse warp). Translation in one axis of the appropriate warped space is equivalent to (b) horizontal scaling; (c) planar rotation; (d) 3D rotation around the vertical axis."}, {"section_index": "3", "section_name": "4.1 PRACTICAL CONSIDERATIONS", "section_text": "There are a few interesting aspects that simplify the use of theorem 1 in practice.

First, since in most applications the filter F is learned, we are free to ignore the constant warp and Jacobian in eq. 7 (which amounts to a simple reparametrization), and learn F̂ directly. In practice, this means that we warp only the input image I to obtain Î, and then perform a standard convolution with a filter F̂. The learned warped filter F̂ has a one-to-one correspondence to an image-space filter F by means of eq. 7, although there is no real need to build the latter explicitly.

Second, we can choose either one or two spatial transformations for the generalized convolution (e.g. scale and rotation, simultaneously). The reason is that the input image is 2D, so the parameter space after warping is also 2D. The choice is not arbitrary though: the two transformations must commute, in order to respect additivity. This will be the case of the pairs we study in section 5.

By theorem 1, these steps are equivalent to a generalized convolution, which performs an exhaustive search across the pose-space of transformation t, but at a much lower computational cost.
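The two steps described in this section — warp once, then convolve normally — can be summarized in a few lines. The sketch below is a minimal illustration under the stated assumptions; the user-supplied function `t(u, x0)` is a stand-in for any additive transformation, and a practical implementation would precompute the grid offline and use GPU bilinear sampling instead of `map_coordinates`.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.signal import correlate2d

def warp_grid(t, x0, us, vs):
    """Sample the plane by applying t_u to a pivot x0 over a regular 2D grid
    of parameters (u, v); returns row/column coordinate maps."""
    rows = np.empty((len(us), len(vs)))
    cols = np.empty_like(rows)
    for i, u in enumerate(us):
        for j, v in enumerate(vs):
            rows[i, j], cols[i, j] = t(np.array([u, v]), x0)
    return rows, cols

def warped_convolution(I, F_hat, t, x0, us, vs):
    """Generalized convolution via theorem 1: warp the image once with
    bilinear resampling, then apply a single standard convolution.  F_hat is
    the filter learned directly in the warped space (section 4.1), so the
    Jacobian factor is absorbed into it."""
    rows, cols = warp_grid(t, x0, us, vs)
    I_warped = map_coordinates(I, [rows, cols], order=1, mode="nearest")
    return correlate2d(I_warped, F_hat, mode="same")
```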
"}, {"section_index": "4", "section_name": "4.2 RELATIONSHIP TO GROUP THEORY", "section_text": "This section relates our results, which have been presented using a simple formalism and in a restricted setting, to a more general approach based on group theory (Folland, 1995).

To this end, let G be a group of transformations. Under very mild conditions (the group has to be locally compact and Hausdorff), there exists a unique measure on the group, the Haar measure, which is invariant to the group action, in the sense that, given a measurable function Ĩ : G → R, then ∫_G Ĩ(g′g) dg = ∫_G Ĩ(g) dg. Using this measure, one can define generalized convolution as (Ĩ ∗ F̃)(t) = ∫_G Ĩ(tg) F̃(g⁻¹) dg. This resembles our definition (4), although image and filter are defined on the group G instead of the spatial domain R². Lemma 1 translates immediately to this case (Folland, 1995).

In order to extend Theorem 1, we need to make this general but abstract construction concrete. Here one assumes that the group acts transitively on a subset X ⊂ R² (which means that any point x ∈ X can be written as x = g x_0, for a fixed point x_0 ∈ X and a suitable transformation g ∈ G). Then one can define the image as Ĩ(g) = I(g(x_0)), where I is a function of the spatial domain X instead of the group G, and likewise for the filter. Next, it is necessary to explicitly calculate the integral over G. If the group is an Abelian (commutative) Lie group, then one can show that there exists a map exp : V → G, the exponential map, defined on a vector space V. Under commutativity, this map is also additive, in the sense that exp(u) exp(v) = exp(u + v). The structure of V depends on the specific group, but under such restrictive conditions, it is a torus, which allows the calculation of the integral over V instead of over G.

Finally, in order to swap integration over the group parameters with integration over space, one assumes that x = exp(u) x_0 defines a smooth bijection V → X, so that it is possible to use the change of variable u → u(x) where exp(u(x)) x_0 = x. This allows writing the integral as ∫ Ĩ(exp(u) x_0) du = ∫ I(x) |du/dx| dx. Note that this Jacobian is the inverse of the one found in (7), due to the fact that we started by defining our convolution using Ĩ instead of I.

We now give some concrete examples of pairs of spatial transformations that obey the conditions of theorem 1, and can be useful in practice.

Detection tasks require predicting the extent of an object as a bounding box. While the location can be found accurately by a standard CNN, which is equivariant to translation, the size prediction could similarly benefit from equivariance to horizontal and vertical scale (equivalently, scale and aspect ratio).

Such a spatial transformation, from which a warp can be constructed, is given by

t_u(x) = ( s^{u₁} x₁, s^{u₂} x₂ ).    (8)

The s constant controls the total degree of scaling applied. Notice that the output must be exponential in the scale parameters u; this ensures the additive structure required by theorem 1: t_u(t_v(x)) = t_{u+v}(x). The resulting warp grid can be visualized in fig. 1-b. In this case, the domain of the image must be one quadrant (Ω = R²₊), since a pivot x_0 in one quadrant cannot reach another quadrant by any amount of (positive) scaling.
Planar scale and rotation are perhaps the most obvious spatial transformations in images, and are a natural test case for works on spatial transformations (Kanazawa et al., 2014; Marcos et al., 2016). Rotating a point x by u₁ radians and scaling it by u₂, around the origin, can be performed with

t_u(x) = ( s^{u₂} ‖x‖ cos(atan2(x₂, x₁) + u₁), s^{u₂} ‖x‖ sin(atan2(x₂, x₁) + u₁) ),    (9)

where atan2 is the standard 4-quadrant inverse tangent function. The domain in this case must exclude the origin (Ω = R² \ {0}), since a pivot x_0 = 0 cannot reach any other points in the image by rotation or scaling.

The resulting warp grid can be visualized in fig. 1-c. It is interesting to observe that it corresponds exactly to the log-polar domain, which is used in the signal processing literature to perform correlation across scale and rotation (Tzimiropoulos et al., 2010; Reddy & Chatterji, 1996). In fact, it was the source of inspiration for this work, which can be seen as a generalization of the log-polar domain to other spatial transformations."}, {"section_index": "5", "section_name": "5.3 3D SPHERE ROTATION UNDER PERSPECTIVE", "section_text": "We will now tackle a more difficult spatial transformation, in an attempt to demonstrate the generality of theorem 1. The transformations we will consider are yaw and pitch rotations in 3D space, as seen by a perspective camera. In the experiments (section 6) we will show how to apply it to face pose estimation.

In order to maintain additivity, the rotated 3D points must remain on the surface of a sphere. We consider a simplified camera and world model, whose only hyperparameters are a focal length f, the radius of a sphere r, and its distance from the camera center d. The equations for the spatial transformation corresponding to yaw and pitch rotation under this model are in appendix A.

The corresponding warp grid can be seen in fig. 1-d. It can be observed that the grid corresponds to what we would expect of a 3D rendering of a sphere with a discrete mesh. An intuitive picture of the effect of the warp grid in such cases is that it wraps the 2D image around the surface of the 3D object, so that translation in the warped space corresponds to moving between vertexes of the 3D geometry.
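For concreteness, the grids of sections 5.1 and 5.2 can be generated in a few lines each. This is an illustrative sketch (the grid resolutions, base scale `s`, and pivots are arbitrary choices, not values from the paper); the resulting coordinate maps can be passed to any bilinear sampler, such as the one sketched after section 4.1.

```python
import numpy as np

def scale_aspect_grid(h, w, s=1.02, x0=(1.0, 1.0)):
    """Warp grid for eq. (8): t_u(x) = (s**u1 * x1, s**u2 * x2)."""
    u1, u2 = np.meshgrid(np.arange(h) - h / 2, np.arange(w) - w / 2,
                         indexing="ij")
    return x0[0] * s ** u1, x0[1] * s ** u2

def log_polar_grid(h, w, s=1.02, x0=(1.0, 0.0)):
    """Warp grid for eq. (9): rotation by u1 and scaling by s**u2 about the
    origin, i.e. the log-polar sampling pattern of fig. 1-c."""
    u1, u2 = np.meshgrid(np.linspace(0, 2 * np.pi, h, endpoint=False),
                         np.arange(w) - w / 2, indexing="ij")
    r = np.hypot(x0[0], x0[1])
    phi = np.arctan2(x0[1], x0[0])
    return s ** u2 * r * np.cos(phi + u1), s ** u2 * r * np.sin(phi + u1)
```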
As mentioned in section 2.2, generalized convolution performs an exhaustive search for patterns across spatial transformations, by varying pose parameters. For tasks where invariance to that transformation is important, it is usual to pool the detection responses across all poses (Marcos et al., 2016; Kanazawa et al., 2014).

In the experiments, however, we will test the framework in pose prediction tasks. As such, we do not want to pool the detection responses (e.g. with a max operation) but rather find the pose with the strongest response (i.e., an argmax operation). To perform this operation in a differentiable manner, we implement a soft argmax operation, defined as follows:

s₁(a) = Σᵢⱼ (i/m) σᵢⱼ(a),    s₂(a) = Σᵢⱼ (j/n) σᵢⱼ(a),    (10)

where σ(a) ∈ R^{m×n} is the softmax over all spatial locations, and σᵢⱼ(a) indexes the element at (i, j). The outputs are the two spatial coordinates of the maximum value, s(a) ∈ R².

Our base architecture then consists of the following blocks, outlined in fig. 2. First, the input image is warped with a pre-generated grid, according to section 4. The warped image is then processed by a standard CNN, which is now equivariant to the spatial transformation that was used to generate the warp grid. A soft argmax (eq. 10) then finds the maximum over pose-space. To ensure the pose prediction is well registered to the reference coordinate system, a learnable scale and bias are applied to the outputs. Training proceeds by minimizing the L¹ loss between the predicted pose and the ground truth pose.

Figure 2: Equivariant pose estimation strategy used in the experiments (section 6): warp, CNN, soft argmax, then a learnable scale and bias. With an appropriate warp and a standard CNN, the shaded block becomes equivalent to a generalized CNN (by theorem 1), which performs exhaustive searches across pose-space instead of image-space.
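The soft argmax of eq. 10 is a one-liner in practice. The sketch below is a plain numpy illustration of the operation (the max-subtraction is a standard numerical-stability trick, not part of the definition):

```python
import numpy as np

def soft_argmax(a):
    """Differentiable argmax over a 2D response map (eq. 10): a spatial
    softmax followed by the expected row/column coordinates, normalized
    by the map size."""
    p = np.exp(a - a.max())
    p /= p.sum()                       # softmax over all spatial locations
    m, n = a.shape
    i, j = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    return np.array([(i / m * p).sum(), (j / n * p).sum()])
```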
"}, {"section_index": "6", "section_name": "6.2 GOOGLE EARTH", "section_text": "For the first task in our experiments, we will consider aerial photos of vehicles, which have been used in several works that deal with rotation invariance (Liu et al., 2014; Schmidt & Roth, 2012; Henriques et al., 2014).

Dataset. The Google Earth dataset (Heitz & Koller, 2008) contains bounding box annotations, supplemented with angle annotations from (Henriques et al., 2014), for 697 vehicles in 15 large images. We use the first 10 for training and the rest for validation. Going beyond these previous works, we focus on the estimation of both rotation and scale parameters. The object scale is taken to be the diagonal length of the bounding box.

Implementation. A 48 × 48 image around each vehicle is cropped and downscaled by 50%, and then fed to a network for pose prediction. The proposed method, Warped CNN, follows the architecture of section 6.1 (visualized in fig. 2). The CNN block contains 3 convolutional layers with 5 × 5 filters, with 20, 50 and 1 output channels respectively. Recall that the output of the CNN block is a single-channel response map over 2D pose-space, which in this case consists of rotation and scale. Between the convolutional layers there are 3 × 3 max-pooling operators, with a stride of 2, and a ReLU before the last layer. All networks are trained for 20 epochs with SGD, using hyperparameters chosen by cross-validation.

Baselines and results. The results of the experiments are presented in table 1, which shows angular and scale error on the validation set. Qualitative results are shown in fig. 3. To verify whether the proposed warped convolution is indeed responsible for a boost in performance, rather than other architectural details, we compare it against a number of baselines with different components removed. The first baseline, CNN+softargmax, consists of the same architecture but without the warp (section 5.2). This is a standard CNN, with the soft argmax at the end. Since CNNs are equivariant to translation, rather than scale and rotation, we observe a drop in performance. For the second baseline, CNN+FC, we replace the soft argmax with a fully-connected layer, to allow a prediction that is not equivariant with translation. The FC layer improves the angular error, but not the scale error. The proposed Warped CNN has a similar (slightly lower) capacity to the CNN+FC baseline, but we see it achieve better performance, since its architectural equivariance seems to be better matched to the data distribution.

 | CNN+FC | CNN+softargmax | Warped CNN
Rotation error (degrees) | 28.87 | 30.6 | 26.44
Scale error (px) | 17.51 | 5.783 | 5.4

Table 1: Results of scale and rotation pose estimation of vehicles in the Google Earth dataset.

Figure 3: Example pose estimates (rotation and scale) on the Google Earth dataset (Section 6.2).

 | CNN+FC | STN+FC | STN+softargmax | Warped CNN
Yaw err. (deg.) | 13.87 | 16.92 | 15.01 | 10.65
Pitch err. (deg.) | 7.23 | 10.17 | 6.88 | 6.351

Table 2: Results of yaw and pitch pose estimation of faces on the AFLW dataset.

Dataset. For this task we use the Annotated Facial Landmarks in the Wild (AFLW) dataset (Koestinger et al., 2011). It contains about 25K faces found in Flickr photos, and includes yaw (left-right) and pitch (up-down) annotations. We removed 933 faces with yaw larger than 90 degrees (i.e., facing away from the camera), resulting in a set of 24,384 samples. 20% of the faces were set aside for validation.

Implementation. The region in each face's bounding box is resized to a 64 × 64 image, which is then processed by the network. Recall that our simplified 3D model of yaw and pitch rotation (section 5.3) assumes a spherical geometry. Although a person's head roughly follows a spherical shape, the sample images are centered around the face, not the head. As such, we use an affine Spatial Transformer Network (STN) (Jaderberg et al., 2015) as a first step, to center the image correctly. Similarly, because the optimal camera parameters (f, r and d) are difficult to set by hand, we let the network learn them, by computing their derivatives numerically (which has a low overhead, since they are scalars). The rest of the network follows the same diagram as before (fig. 2). The main CNN has 4 convolutional layers, the first two with 5 × 5 filters, the others being 9 × 9. The numbers of output channels are 20, 50, 20 and 1, respectively. A 3 × 3 max-pooling with a stride of 2 is performed after the first layer, and there are ReLU non-linearities between the others. As for the STN, it has 3 convolutional layers (5 × 5), with 20, 50 and 6 output channels respectively, and 3 × 3 max-pooling (stride 2) between them.

Baselines and results. The angular error of the proposed equivariant pose estimation, Warped CNN, is shown in table 2, along with a number of baselines. Qualitative results are shown in fig. 4. The goal of these experiments is to demonstrate that it is possible to achieve equivariance to complex 3D rotations. We also wish to disentangle the performance benefits of the warped convolution from the other architectural aspects. The first baseline, STN+softargmax, is the same as the proposed method, but without the warp. The large performance drop indicates that the spherical model incorporates important domain knowledge, which is ignored by a translation-equivariant STN. To allow non-equivariant models, we also test two other baselines where the softargmax is replaced with a fully-connected (FC) layer. The STN+FC includes an affine Spatial Transformer, while the CNN+FC does not, corresponding to a standard CNN of equivalent capacity. We observe that neither the FC nor the STN components can account for the performance of the warped convolution, which better exploits the natural 3D rotation equivariance of the data."}, {"section_index": "7", "section_name": "7 CONCLUSIONS", "section_text": "In this work we show that it is possible to reuse highly optimized convolutional blocks, which are equivariant to image translation, and coax them to exhibit equivariance to other operators, including 3D transformations. This is achieved by a simple warp of the input image, implemented with off-the-shelf components of deep networks, and can be used for image recognition tasks involving a large range of image transformations. Compared to other works, warped convolutions are simpler, relying on highly optimized convolution routines, and can flexibly handle many types of continuous transformations. Studying generalizations that support more than two parameters seems like a fruitful direction for future work. In addition to the practical aspects, our analysis offers some insights into the fundamental relationships between arbitrary image transformations and convolutional architectures.
Figure 4: Example pose estimates (yaw and pitch) on the AFLW dataset (Section 6.3)."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Joan Bruna, Arthur Szlam, and Yann LeCun. Learning stable group invariant representations with convolutional networks. arXiv preprint arXiv:1301.3537, 2013.

Taco Cohen and Max Welling. Group equivariant convolutional networks. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), 2016.

Sander Dieleman, Kyle W Willett, and Joni Dambre. Rotation-invariant convolutional neural networks for galaxy morphology prediction. Monthly Notices of the Royal Astronomical Society, 450(2):1441-1459, 2015.

Gerald B Folland. A course in abstract harmonic analysis. 1995.

Aapo Hyvarinen, Jarmo Hurri, and Patrick O Hoyer. Natural Image Statistics: A Probabilistic Approach to Early Computational Vision. Springer Science & Business Media, 2009.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.

Angjoo Kanazawa, Abhishek Sharma, and David Jacobs. Locally scale-invariant convolutional neural networks. arXiv preprint arXiv:1412.5104, 2014.

Martin Koestinger, Paul Wohlhart, Peter M. Roth, and Horst Bischof. Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization. In First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies, 2011.

Andrew Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015.

Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 991-999, 2015.

B. Srinivasa Reddy and Biswanath N. Chatterji. An FFT-based technique for translation, rotation, and scale-invariant image registration. IEEE Transactions on Image Processing, 5(8):1266-1271, 1996.

Uwe Schmidt and Stefan Roth. Learning rotation-aware features: From invariant priors to equivariant descriptors. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2050-2057, 2012.

Georgios Tzimiropoulos, Vasileios Argyriou, Stefanos Zafeiriou, and Tania Stathaki. Robust FFT-based scale-invariant image registration with image gradients. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10):1899-1906, 2010.

Our simplified model consists of a perspective camera with focal length f and all other camera parameters equal to identity, at a distance d from a centered sphere of radius r (see fig. 1-d).

A 2D point x in image-space corresponds to the 3D point

p = (x₁, x₂, f).

Raycasting it along the z axis, it will intersect the sphere surface at the 3D point

q = ( (f d − √(f² d² − ‖p‖² (d² − r²))) / ‖p‖² ) p.

Then, the yaw and pitch coordinates of the point q on the surface of the sphere are

φ₁ = atan( q₁ / (d − q₃) ),    φ₂ = cos⁻¹( q₂ / r ).

These polar coordinates are now rotated by the spatial transformation parameters:

φ′₁ = φ₁ + u₁,    φ′₂ = φ₂ + u₂.

Converting the polar coordinates back to a 3D point q′,

q′ = ( r sin φ′₂ sin φ′₁, r cos φ′₂, d − r sin φ′₂ cos φ′₁ ).

Finally, projection of q′ into image-space yields

t_u(x) = (f / q′₃) (q′₁, q′₂).

If the argument of the square-root is negative, the ray does not intersect the sphere and so the point transformation is undefined. This means that the domain of the image should be restricted to the sphere region. In practice, in such cases we simply leave the point unmodified."}]
Sy7m72Ogg
[{"section_index": "0", "section_name": "AN ACTOR-CRITIC ALGORITHM FOR LEARNING RATE LEARNING", "section_text": "Chang Xu\nNankai University\nchangxu@nbjl.nankai.edu.cn\nNankai University\nwgzwp@nbjl.nankai.edu.cn\nStochastic gradient descent (SGD), which updates the model parameters by adding a local gradient times a learning rate at each step, is widely used in model training of machine learning algorithms such as neural networks. It is observed that the models trained by SGD are sensitive to learning rates and good learning rates are problem specific. To avoid manually searching of learning rates, which is tedious and inefficient, we propose an algorithm to automatically learn learning rates using actor-critic methods from reinforcement learning (RL). In particular, we train a policy network called actor to decide the learning rate at each step during training, and a value network called critic to give feedback about quality of the decision (e.g., the goodness of the learning rate outputted by the actor) that the actor made. Experiments show that our method leads to good convergence of SGD and can prevent overfitting to a certain extent, resulting in better performance than human-designed competitors."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "While facing large scale of training data, stochastic learning such as stochastic gradient descen. (SGD) is usually much faster than batch learning and often results in better models. An observatior. for SGD methods is that their performances are highly sensitive to the choice of learning rateLeCur. et al.[(2012). Clearly, setting a static learning rate for the whole training process is insufficient, since. intuitively the learning rate should decrease when the model becomes more and more close to a. (local) optimum as the training goes on over time Maclaurin et al.(2015). Although there are some. empirical suggestions to guide how to adjust the learning rate over time in training, it is still a difficul. task to find a good policy to adjust the learning rate, given that good policies are problem specific anc. depend on implementation details of a machine learning algorithm. One usually needs to try many. times and adjust the learning rate manually to accumulate knowledge about the problem. However. human involvement often needs domain knowledge about the target problems, which is inefficien. and difficult to scale up to different problems. Thus, a natural question arises: can we automatically. adjust the learning rate? This is exactly the focus of this work and we aim to automatically learn th learning rates for SGD based machine learning (ML) algorithms without human-designed rules o. hand-crafted features.\nBy examining the current practice of learning rate control/adjustment, we have two observations First, learning rate control is a sequential decision process. At the beginning, we set an initia learning rate. Then at each step, we decide whether to change the learning rate and how to change it, based on the current model and loss, training data at hand, and maybe history of the training process. As suggested in Orr & Muller(2003), one well-principled method for estimating the idea learning rate that is to decrease the learning rate when the weight vector oscillates, and increase i when the weight vector follows a relatively steady direction. 
Second, although at each step some immediate reward (e.g., the loss decrement) can be obtained by taking actions, we care more about the performance of the final model found by the ML algorithm. Consider two different learning rate control policies: the first one leads to fast loss decrease at the beginning but gets saturated and stuck in a local minimum quickly, while the second one starts with slower loss decrease but results in much smaller final loss. Obviously, the second policy is better. That is, we prefer long-term rewards over short-term rewards.

Combining the two observations, it is easy to see that the problem of finding a good policy to control/adjust the learning rate falls into the scope of reinforcement learning (RL) (Sutton & Barto, 1998), if one is familiar with RL. Inspired by the recent success of RL for sequential decision problems, in this work, we leverage RL techniques and try to learn the learning rate for SGD based methods.

We propose an algorithm to learn the learning rate within the actor-critic framework (Sutton, 1984; Sutton et al., 1999; Barto et al., 1983; Silver et al., 2014) from RL. In particular, an actor network is trained to take an action that decides the learning rate for the current step, and a critic network is trained to give feedback to the actor network about long-term performance and help the actor network adjust itself so as to perform better in future steps. The main contributions of this paper include:

- We propose an actor-critic algorithm to automatically learn the learning rate for ML algorithms. Long-term rewards are exploited by the critic network in our algorithm to choose a better learning rate at each step.
- We propose to feed different training examples to the actor network and the critic network, which improves the generalization performance of the learnt ML model.
- A series of experiments validate the effectiveness of our proposed algorithm for learning rate control."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Our focus is to improve gradient based ML algorithms through automatic learning of learning rates. Different approaches have been proposed to improve gradient methods, especially for deep neural networks.

Senior et al. (2013); Sutton (1992); Darken & Moody (1990) focus on predefining update rules to adjust learning rates during training. A limitation of these methods is that they have additional free parameters which need to be set manually. Another recent work (Daniel et al., 2016) studies how to automatically select step sizes, but it still requires hand-tuned features. Schaul et al. (2013) propose a method to choose good learning rates for SGD, which relies on the square norm of the expectation of the gradient, and the expectation of the square norm of the gradient. The method is much more constrained than ours, and several assumptions should be met.

Since SGD relies solely on a given example (or a mini-batch of examples) to compute the gradient, its model update at each step tends to be unstable and it takes many steps to converge. To solve this problem, momentum SGD (Jacobs, 1988) is proposed to accelerate SGD by using recent gradients. RMSprop (Tieleman & Hinton, 2012) utilizes the magnitude of recent gradients to normalize the gradients: it always keeps a moving average over the root mean squared gradients, by which it divides the current gradient. Adagrad (Duchi et al., 2011) adapts component-wise learning rates, and performs larger updates for infrequent and smaller updates for frequent parameters. Adadelta (Zeiler, 2012) extends Adagrad by reducing its aggressive, monotonically decreasing learning rate: instead of accumulating all past squared gradients, Adadelta restricts the window of accumulated past gradients to some fixed size. Adam (Kingma & Ba, 2014) computes component-wise learning rates using the estimates of the first and second moments of the gradients, which combines the advantages of AdaGrad and RMSprop.
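For reference, the two adaptive-rate update rules most relevant to this discussion can be written in a few lines each. This is a standard textbook sketch of RMSprop and Adam (hyperparameter defaults follow common practice), included only to make the contrast with our learned, scalar per-step rate concrete:

```python
import numpy as np

def rmsprop_step(w, grad, sq_avg, lr=1e-3, rho=0.9, eps=1e-8):
    """RMSprop: keep a moving average of squared gradients and divide
    the current gradient by its root."""
    sq_avg = rho * sq_avg + (1 - rho) * grad ** 2
    return w - lr * grad / (np.sqrt(sq_avg) + eps), sq_avg

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: bias-corrected estimates of the first and second moments
    of the gradient give component-wise learning rates."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```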
Since our proposed algorithm is based on RL techniques, here we give a very brief introduction to RL, which will ease the description of our algorithm in the next section.

Reinforcement learning (Sutton, 1988) is concerned with how an agent acts in a stochastic environment by sequentially choosing actions over a sequence of time steps, in order to maximize a cumulative reward. In RL, a state s_t encodes the agent's observation about the environment at time step t, and a policy function π(s_t) determines how the agent behaves (e.g., which action to take) at state s_t. An action-value function (or Q function) Q(s_t, a_t) is usually used to denote the cumulative reward of taking action a_t at state s_t and then following policy π afterwards.

Many RL algorithms have been proposed (Sutton & Barto, 1998; Watkins & Dayan, 1992), and many of them (Sutton, 1984; Sutton et al., 1999; Barto et al., 1983; Silver et al., 2014) can be described under the actor-critic framework. An actor-critic algorithm learns the policy function and the value function simultaneously and interactively. The policy structure is known as the actor and is used to select actions; the estimated value function is known as the critic, and it criticizes the actions made by the actor.

Recently, deep reinforcement learning, which uses deep neural networks to approximate/represent the policy function and/or the value function, has shown promise in various domains, including Atari games (Mnih et al., 2015), Go (Silver et al., 2016), machine translation (Bahdanau et al., 2016), image recognition (Xu et al., 2015), etc.

In this section, we present an actor-critic algorithm that can automate the learning rate control for SGD based machine learning algorithms. Figure 1 illustrates our automatic learning rate controller, which adopts the actor-critic framework in RL. The basic idea is that at each step, given the current model w_t and training sample x, an actor network is used to take an action (the learning rate a_t, which will be used to update the model w_t), and a critic network is used to estimate the goodness of the action. The actor network will be updated using the estimated goodness of a_t, and the critic network will be updated by minimizing the temporal difference (TD) (Sutton & Barto, 1990) error. We describe the details of our algorithm in the following subsections.

Figure 1: The framework of our proposed automatic learning rate controller. The actor network and the critic network each take the optimizee's state χ(w_t, x) as input; the actor's action (the learning rate) drives the model update Δw, which produces the reward used to train the critic.

Many machine learning tasks need to train a model with parameters w by minimizing a loss function f defined over a set X of training examples:

w* = arg min_w f_w(X).    (1)

At each step t, SGD updates the model by adding a local gradient scaled by a learning rate a_t:

w_{t+1} = w_t − a_t ∇_{w_t} f_{w_t}(x_t),    (2)

where x_t ∈ X is the training example (or mini-batch) sampled at step t.

It is observed that the performance of SGD based methods is quite sensitive to the choice of a_t for non-convex loss functions f. Unfortunately, f is usually non-convex with respect to the parameters w in many ML algorithms, especially for deep neural networks. We aim to learn a learning rate controller using RL techniques that can automatically control a_t.
model, training data, and possibly historical information gathered during the training process.

Note that w^t can be of huge dimensionality; e.g., one widely used image recognition model, VGGNet (Simonyan & Zisserman, 2014), has more than 140 million parameters. If the actor network took all of those parameters as inputs, its computational complexity would dominate the complexity of the primary algorithm, which is unaffordable. Therefore, we propose to use a function χ(·) to process w^t and yield a compact vector s^t as the input of the actor network. Following the practice in RL, we call χ(·) the state function; it takes w^t and the training data x as inputs:

s^t = χ(w^t, X).    (3)

Then the actor network π_θ(·), parameterized by θ, yields an action a^t:

a^t = π_θ(s^t),    (4)

where the action a^t ∈ ℝ is a continuous value. When a^t is determined, we update the model of the primary algorithm by Equation (2).

Note that the actor network has its own parameters, and we need to learn them so that it outputs good actions. To learn the actor network, we need to know how to evaluate the goodness of an actor network. The critic network plays exactly this role.

"}, {"section_index": "5", "section_name": "3.2 CRITIC NETWORK", "section_text": "Recall that our goal is to find a good policy for learning rate control to ensure that a good model can eventually be learnt by the primary ML algorithm. For this purpose, the actor network needs to output a good action a^t at state s^t so that a low training loss f(·) is finally achieved. In RL, the Q function Q(s, a) is often used to denote the long-term reward of the state-action pair (s, a) while following the policy π to take future actions. In our problem, Q(s^t, a^t) indicates the accumulative decrement of training loss starting from step t. We define the immediate reward at step t as the one-step loss decrement:

r^t = f^t(x) − f^{t+1}(x).    (5)

The accumulative value R^t of policy π at step t is the total discounted reward from step t:

R^t = Σ_{k=t}^{∞} γ^{k−t} r^k,    (6)

where γ ∈ (0, 1] is the discount factor.

Considering that both the states and the actions are uncountable in our problem, the critic network uses a parametric function Q_φ(s, a) with parameters φ to approximate the Q value function Q(s, a).

¹Here we have two learning algorithms: we call the one whose learning rate is being adjusted the primary ML algorithm, and the one which optimizes the learning rate of the primary one the secondary ML algorithm.
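As a concrete illustration of these two networks, the following is a minimal PyTorch sketch matching the specification used later in Section 4.1 (a two-layer, 20-unit LSTM actor with an absolute-value output, and a one-hidden-layer, 10-unit critic). All class and function names here are ours, the Tanh hidden activation in the critic is our assumption, and the state is reduced to the average mini-batch loss as described in the experiments.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    # pi_theta: a two-layer LSTM with 20 units per layer and an
    # absolute-value output so that the predicted learning rate is positive.
    def __init__(self, state_dim=1, hidden=20):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, num_layers=2)
        self.out = nn.Linear(hidden, 1)

    def forward(self, s, hc=None):
        # s has shape (seq_len, batch, state_dim).
        h, hc = self.lstm(s, hc)
        return self.out(h).abs(), hc

class Critic(nn.Module):
    # Q_phi(s, a): a single-hidden-layer network with 10 units.
    def __init__(self, state_dim=1, hidden=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def state_fn(batch_loss):
    # chi(w^t, X_t): the state is the average loss of the current model on
    # the input mini-batch, here encoded as a single scalar feature.
    return torch.tensor(float(batch_loss)).view(1, 1, 1)
```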
The critic network has its own parameters φ, which are updated at each step using TD learning. More precisely, the critic is trained by minimizing the squared error between the estimation Q_φ(s^t, a^t) and the target y^t:

y^t = r^t + γ Q_φ(s^{t+1}, π_θ(s^{t+1})).

The corresponding temporal difference error is

δ^t = r^t + γ Q_φ(s^{t+1}, π_θ(s^{t+1})) − Q_φ(s^t, a^t),    (7)

and φ is updated with the gradient

∇φ = δ^t ∇_φ Q_φ(s^t, a^t).    (8)

The policy parameters θ of the actor network are updated by ensuring that it outputs the action with the largest Q value at state s^t, i.e., a* = argmax_a Q(s^t, a). Mathematically,

∇θ = ∇_θ π_θ(s^{t+1}) · ∇_a Q_φ(s^{t+1}, a)|_{a=π_θ(s^{t+1})}.    (9)

Algorithm 1 Actor-Critic Algorithm for Learning Rate Learning
Require: training steps T; training set X; loss function f; state function χ; discount factor γ.
Ensure: model parameters w, policy parameters θ of the actor network, and value parameters φ of the critic network.
1: Initialize parameters w⁰, θ⁰, φ⁰.
2: for t = 0, ..., T do
3:   Sample x_i ∈ X, i ∈ 1, ..., N.
4:   Extract the state vector: s^t = χ(w^t, x_i).
5:   // The actor network selects an action.
6:   Compute the learning rate a^t = π_θ(s^t).
7:   // Update the model parameters w.
8:   Compute f^t(x_i).
9:   Update w: w^{t+1} = w^t − a^t ∇f^t(x_i).
10:  // Update the critic network by minimizing the squared TD error.
11:  r^t = f^t(x_i) − f^{t+1}(x_i).
12:  Compute Q_φ(s^{t+1}, π_θ(s^{t+1})) and Q_φ(s^t, a^t).
13:  Compute δ^t according to Equation (7).
14:  Update φ using the gradient of Equation (8).
15:  // Update the actor network.
16:  Sample x_j ∈ X, j ∈ 1, ..., N, j ≠ i.
17:  Compute a^{t+1} = π_θ(s^{t+1}).
18:  Update θ according to Equation (9).
19: end for
20: return w, θ, φ.

The overall algorithm is shown in Algorithm 1. In each step, we sample an example (Line 3), extract the current state vector (Line 4), compute the learning rate using the actor network (Line 6), update the model (Lines 8-9), compute the TD error (Lines 11-13), update the critic network (Line 14), and sample another example (Line 16) to update the actor network (Lines 17-18). We would like to make some discussions about the algorithm.

Second, one may notice that we use one example (e.g., x_i) for the model and critic network updates, but a different example (e.g., x_j) for the actor network update. Doing so, we can prevent the algorithm from overfitting to some (too) hard examples and can improve the generalization performance of the algorithm on the test set. Consider a hard example² in a classification task. Since such an example is difficult to classify correctly, intuitively its gradient will be large and the learning rate given by the actor network at this step will also be large. In other words, this hard example will greatly change the model, while it is not a good representative of its category, so the learning algorithm should not pay much attention to it. If we feed the same example to both the actor network and the critic network, both of them will encourage the model to change a lot to fit the example, consequently resulting in oscillation of the training, as shown in our experiments. By feeding different examples to the actor and critic networks, it is very likely the critic network will find that the gradient direction of the example fed into the actor network is inconsistent with its own training example, and thus criticize the large learning rate suggested by the actor network. More precisely, the update of w is based on x_i and the learning rate suggested by the actor network, while the training target of the actor network is to maximize the output of the critic network on x_j. If there is large gradient disagreement between x_i and x_j, the update of w, which is affected by the actor's decision, would cause the critic's output on x_j to be small. To compensate for this effect, the actor network is forced to predict a small learning rate for a too-hard x_i in this situation.
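To make Algorithm 1 concrete, below is a self-contained PyTorch sketch on a toy least-squares problem. It is a simplified illustration under our own naming — a small feedforward actor stands in for the LSTM of Section 4.1, and Adam is used for the controller updates — not the exact implementation used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 5)        # toy training set
Y = X @ torch.randn(5)          # targets of a linear "primary" task

w = torch.zeros(5)              # primary model parameters
actor = nn.Sequential(nn.Linear(1, 20), nn.Tanh(), nn.Linear(20, 1))
critic = nn.Sequential(nn.Linear(2, 10), nn.Tanh(), nn.Linear(10, 1))
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.9

def loss_fn(w, idx):
    # f_w on a mini-batch: mean squared error of the linear model.
    return ((X[idx] @ w - Y[idx]) ** 2).mean()

for t in range(500):
    i = torch.randint(0, 1000, (50,))        # sample x_i (Line 3)
    s = loss_fn(w, i).detach().view(1, 1)    # state s^t = chi(w^t, x_i)
    a = actor(s).abs()                       # learning rate a^t (Line 6)

    # Update the primary model: w^{t+1} = w^t - a^t * grad f^t(x_i) (Line 9).
    w_ = w.clone().requires_grad_(True)
    f_t = loss_fn(w_, i)
    (grad_w,) = torch.autograd.grad(f_t, w_)
    w = (w - a.detach().squeeze() * grad_w).detach()

    # Critic update via the TD error of Equations (7)-(8) (Lines 11-14).
    r = (f_t.detach() - loss_fn(w, i).detach()).view(1, 1)
    s_next = loss_fn(w, i).detach().view(1, 1)
    with torch.no_grad():
        target = r + gamma * critic(torch.cat([s_next, actor(s_next).abs()], 1))
    td = target - critic(torch.cat([s, a.detach()], 1))
    opt_critic.zero_grad()
    (td ** 2).mean().backward()
    opt_critic.step()

    # Actor update on a *different* sample x_j, per Equation (9) (Lines 16-18).
    j = torch.randint(0, 1000, (50,))
    s_j = loss_fn(w, j).detach().view(1, 1)
    actor_loss = -critic(torch.cat([s_j, actor(s_j).abs()], 1)).mean()
    opt_actor.zero_grad()
    actor_loss.backward()
    opt_actor.step()
```

In a real setting the linear model, its loss, and the scalar state would be replaced by the CNN, its classification loss, and the average mini-batch loss used in the experiments below.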
"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "We conducted a set of experiments to test the performance of our learning rate learning algorithm and compared it with several baseline methods. We report the experimental results in this section."}, {"section_index": "7", "section_name": "4.1 EXPERIMENTAL SETUP", "section_text": "We tested our method on two widely used image classification datasets: MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky & Hinton, 2009). Convolutional neural networks (CNNs) have been the standard model for image classification tasks in recent years, and thus the primary ML algorithm adopted a CNN model in all our experiments.

We specified our actor-critic algorithm in the experiments as follows. Given that stochastic mini-batch training is common practice in deep learning, the actor-critic algorithm also operated on mini-batches, i.e., each step is a mini-batch in our experiments. We defined the state s^t = χ(w^t, X_t) as the average loss of the learning model w^t on the input mini-batch X_t. We specified the actor network as a two-layer long short-term memory (LSTM) network with 20 units in each layer, considering that a good learning rate for step t depends on and correlates with the learning rates at previous steps, and LSTMs are well suited to model sequences with long-distance dependencies. We used the absolute-value activation function for the output layer of the LSTM to ensure a positive learning rate. The LSTM was unrolled for 20 steps during training. We specified the critic network as a simple neural network with one hidden layer and 10 hidden units. We used Adam with the default settings in the TensorFlow optimizer toolbox (Abadi et al., 2015) to train the actor and critic networks in all the experiments.

We compared our method with several mainstream SGD algorithms, including SGD, Adam (Kingma & Ba, 2014), Adagrad (Duchi et al., 2011) and RMSprop (Tieleman & Hinton, 2012). For each of these algorithms and each dataset, we tried the learning rates 10⁻⁴, 10⁻³, ..., 10⁰, and we report the best performance of these algorithms over those learning rates. If an algorithm has other parameters to set, such as the decay coefficients of Adam, we used the default settings in the TensorFlow optimizer toolbox. For each benchmark and for our proposed method, five independent runs are averaged and reported in all of the following experiments.

²For example, an example may have an incorrect label because of the limited quality of labelers.

"}, {"section_index": "8", "section_name": "4.2 RESULTS ON MNIST", "section_text": "MNIST is a dataset for the handwritten digit classification task. Each example in the dataset is a 28×28 black and white image containing a digit in {0, 1, ..., 9}. There are 60,000 training images and 10,000 test images in this dataset.

Figure 2: Results on MNIST. (a) Training loss. (b) Test loss. (c) Test accuracy. The x-axis is the number of mini-batches.
The CNN model used in the primary ML algorithm consists of two convolutional layers, each followed by a pooling layer, and finally a fully connected layer. The first convolutional layer filters each input image using 32 kernels of size 5×5. The max-pooling layer following the first convolutional layer is performed over 2×2 pixel windows, with stride 2. The second convolutional layer takes the outputs of the first max-pooling layer as inputs and filters them with 64 kernels of size 5×5. The max-pooling layer following the second convolutional layer is performed over 2×2 pixel windows, with stride 2. The outputs of the second max-pooling layer are fed to a fully connected layer with 512 neurons. Dropout was applied to the fully connected layer with a dropout rate of 0.5. ReLU activation functions are used in the CNN model. We scaled the pixel values to the [0, 1] range before inputting them to all the algorithms. Each mini-batch contains 50 randomly sampled images.

Figure 2 shows the results of our actor-critic algorithm for learning rate learning and of the baseline methods, including the curves of training loss, test loss, and test accuracy. The final error rates of these methods are summarized in Table 1. We have the following observations.

Table 1: Error rate comparison on MNIST
Optimizer     Error Rate (%)
SGD           0.75
ADAM          0.87
Adagrad       0.94
RMSprop       0.83
Our method    0.67

In terms of training loss, our algorithm has a convergence speed similar to the baseline methods. One may expect that our algorithm should have a significantly faster convergence speed, considering that it learns both the learning rate and the CNN model while the baselines only learn the CNN model and choose learning rates by predefined rules. However, this is not the case: as discussed in Section 3.4, we deliberately feed different samples to the actor network and the critic network, so the algorithm focuses more on generalization performance than on training loss; as shown in Figure 2, our algorithm achieves the best test accuracy.

Our algorithm achieves the lowest error rate on MNIST. Although the improvement looks small, we would like to point out that, given that the accuracy of the CNN is already close to 100%, it is a very difficult task to further improve accuracy, not to mention that we only changed the learning rate policy without changing the CNN model.
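For reference, a minimal PyTorch rendering of the MNIST primary CNN described above might look as follows; the 'same' padding on the convolutions is our assumption, since the text does not specify the padding scheme.

```python
import torch.nn as nn

# Two 5x5 conv layers (32 and 64 kernels), each followed by 2x2 max pooling
# with stride 2, then a 512-unit fully connected layer with dropout 0.5 and
# ReLU activations, and a final 10-way output layer.
mnist_cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),      # 28x28 -> 14x14
    nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),      # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, 10),
)
```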
"}, {"section_index": "9", "section_name": "4.3 RESULTS ON CIFAR-10", "section_text": "CIFAR-10 is a dataset consisting of 60,000 natural 32×32 RGB images in 10 classes: 50,000 images for training and 10,000 for test. We used a CNN with 2 convolutional layers (each followed by a max-pooling layer) and 2 fully connected layers for this task. Each max-pooling layer is performed over 2×2 pixel windows, with stride 2. All convolutional layers filter the input with 64 kernels of size 5×5. The outputs of the second pooling layer are fed to a fully connected layer with 384 neurons. The last fully connected layer has 192 neurons. Before inputting an image to the CNN, we subtracted the per-pixel mean computed over the training set from each image.

Figure 3 shows the results of all the algorithms on CIFAR-10, including the curves of training loss, test loss, and test accuracy. Table 2 shows the final test accuracy. We make similar observations as on MNIST: our algorithm achieves a similar convergence speed in terms of training loss and slightly better test accuracy than the baselines. Figure 5 shows the learning rate learned by our method on CIFAR-10.

Figure 3: Results on CIFAR-10. (a) Training loss. (b) Test loss. (c) Test accuracy. The x-axis is the number of mini-batches.

Table 2: Classification accuracy comparison on CIFAR-10
Optimizer     Accuracy (%)
SGD           78.74
ADAM          77.46
Adagrad       78.46
RMSprop       62.3
Our method    79.34

To further understand the generalization performance of our algorithm, we ran all the algorithms on two subsets of the CIFAR-10 training data, one with only 20% of the training data. The curves of training loss and test loss are shown in Figure 4. As can be seen from the figure, the baseline methods easily overfit and their test loss increases after 5000 steps (mini-batches). In contrast, our algorithm is relatively robust and can prevent overfitting to some extent.

Figure 4: Results on CIFAR-10 with 20% training data. (a) Training loss. (b) Test loss.

Figure 6: Results on CIFAR-10 with 20% training data. (a) Training loss. (b) Test loss. Our algorithm with x_j = x_i is shown in blue, and our algorithm with x_j ≠ x_i in orange.

As we explained in Section 3.4, feeding different examples to the actor and critic networks is important to guarantee generalization ability. Here we conducted another experiment to verify our intuitive explanation. Figure 6 shows the results of two different implementations of our actor-critic algorithm on CIFAR-10. In the first implementation, we fed the same examples to the two networks, i.e., x_j = x_i; in the second implementation, the input x_j of the critic network is different from the input x_i of the actor network. It is easy to see from the figure that setting x_j = x_i tends to oscillate during training and leads to poor test performance. Thus, we need
to feed different training data to the actor network and the critic network to ensure the performance of the algorithm."}, {"section_index": "10", "section_name": "4.4 COMPARISON WITH OTHER ADAPTIVE LEARNING RATE METHODS", "section_text": "We also compared our method with vSGD from previous work by Schaul et al. (2013), which can automatically adjust learning rates to minimize the expected error. This method computes the learning rate at each update by optimizing the expected loss after the next update, using the square norm of the expectation of the gradient and the expectation of the square norm of the gradient. Note that our method instead learns to predict a learning rate at each time step by utilizing the long-term reward predicted by a critic network.

For a fair comparison, we followed the experimental settings of Schaul et al. (2013), which use three different network architectures on the MNIST task to measure performance. The first one, denoted by 'M0', is simple softmax regression (i.e., a network with no hidden layer). The second one ('M1') is a fully connected multi-layer perceptron with a single hidden layer. The third one (denoted 'M2') is a deep, fully connected multi-layer perceptron with two hidden layers. vSGD has three variants in their paper; we referred to the results reported in their paper and compared our method with all three variants of their algorithm (vSGD-1, vSGD-b, vSGD-g). The learning rates of SGD are decreased according to a human-designed schedule, and the hyper-parameters of SGD, ADAM, Adagrad and RMSprop are carefully determined by their lowest test error among a set of hyper-parameters. All hyper-parameters can be found in Schaul et al. (2013).

The experimental results are reported in Table 3. They show that our proposed method performs better than vSGD and the other baseline methods, and is stable across different network architectures.

Table 3: Error rate (%) of different methods on different network architectures
Network   SGD    ADAM   Adagrad  RMSprop  vSGD-1  vSGD-b  vSGD-g  Our method
M0        7.60   8.70   7.52     10.91    7.50    7.89    8.20    7.50
M1        2.34   4.12   2.70     6.17     2.42    2.44    4.14    2.04
M2        2.15   3.85   2.34     3.81     2.16    2.05    3.65    2.03

In this work, we have studied how to automatically learn learning rates for gradient-based machine learning methods and proposed an actor-critic algorithm, inspired by the recent success of reinforcement learning. The experiments on two image classification datasets have shown that our method (1) has comparable convergence speed to expert-designed optimizers while achieving better test accuracy, and (2) can successfully adjust the learning rate for different datasets and CNN model structures.

For future work, we will explore the following directions. In this work, we have applied our algorithm to control the learning rate of SGD; we will apply it to other variants of SGD methods. We have focused on learning a single learning rate for all the model parameters; we will study how to learn an individual learning rate for each parameter. We have considered learning learning rates using RL techniques; we will consider learning other hyperparameters, such as step-dependent dropout rates for deep neural networks.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

John Duchi, Elad Hazan, and Yoram Singer.
Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, 2009.

Yann A. LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pp. 9-48. Springer, 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Genevieve B. Orr and Klaus-Robert Müller. Neural Networks: Tricks of the Trade. Springer, 2003.

Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. ICML (3), 28:343-351, 2013.

Andrew Senior, Georg Heigold, Ke Yang, et al. An empirical study of learning rates in deep neural networks for speech recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6724-6728. IEEE, 2013.

David Silver, Guy Lever, and Nicolas Heess. Deterministic policy gradient algorithms. 2014.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Richard S. Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In AAAI, pp. 171-176, 1992.

Richard S. Sutton and Andrew G. Barto. Time-derivative models of Pavlovian reinforcement. pp. 497-537, 1990.

Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998.

Richard Stuart Sutton. Temporal credit assignment in reinforcement learning. 1984.

Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279-292, 1992.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.

Matthew D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012."}, {"section_index": "12", "section_name": "A APPENDIX", "section_text": "A method for automatically controlling the learning rate is proposed in the main body of the paper: the learning rate controller adjusts itself during training to control the learning rate. Here, we propose an improved version that can leverage experience from several repeated training runs to learn a fixed learning rate controller. Empirically, this algorithm can achieve better performance than the previous one. Given that it requires more time for training the learning rate controller, this method is more suitable for training offline models.

In this algorithm, during every training run we fix the actor network and accumulate a weighted sum of the gradients with respect to its parameters θ. The parameters are updated after each run (modified from Equation (9)):

∇θ = Σ_{t=0}^{T−1} h(t) ∇_θ π_θ(s^{t+1}) · ∇_a Q_φ(s^{t+1}, a)|_{a=π_θ(s^{t+1})},

where h(t) is a weighting function used to amplify the feedback signal from the initial training stage; it is defined as h(t) = 1/t in our experiments. An error rate of 0.48% was achieved with 5 repeated training runs in the MNIST experiment (the same setting as Table 1), and in the CIFAR-10 experiment (the same setting as Table 2), an accuracy of 80.23% was achieved with 10 training runs. This method showed better performance in both experiments."}]
r1nTpv9eg
[{"section_index": "0", "section_name": "LEARNING TO PERFORM PHYSICS EXPERIMENTS VIA DEEP REINFORCEMENT LEARNING", "section_text": "When encountering novel objects, humans are able to infer a wide range of physical properties such as mass, friction and deformability by interacting with them in a goal-driven way. This process of active interaction is in the same spirit as a scientist performing experiments to discover hidden facts. Recent advances in artificial intelligence have yielded machines that can achieve superhuman performance in Go, Atari, natural language processing, and complex control problems; however, it is not clear that these systems can rival the scientific intuition of even a young child. In this work we introduce a basic set of tasks that require agents to estimate properties such as mass and cohesion of objects in an interactive simulated environment where they can manipulate the objects and observe the consequences. We found that deep reinforcement learning methods can learn to perform the experiments necessary to discover such hidden properties. By systematically manipulating the problem difficulty and the cost incurred by the agent for performing experiments, we found that agents learn different strategies that balance the cost of gathering information against the cost of making mistakes in different situations. We also compare our learned experimentation policies to randomized baselines and show that the learned policies lead to better predictions."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Our work is inspired by empirical findings and theories in psychology indicating that infant learning and thinking is similar to that of adult scientists (Gopnik, 2012). One important view in developmental science is that babies are endowed with a small number of separable systems of core knowledge for reasoning about objects, actions, number, space, and possibly social interactions (Spelke & Kinzler, 2007). The object core system, covering aspects such as cohesion, continuity, and contact, enables babies and other animals to solve object-related tasks such as reasoning about occlusion and predicting how objects behave.

Core knowledge research has motivated the development of methods that endow agents with physics priors and perception modules so as to infer intrinsic physical properties rapidly from data (Battaglia et al., 2013; Wu et al., 2015; 2016; Stewart & Ermon, 2016). For instance, using physics engines and mental simulation, it becomes possible to infer quantities such as mass from visual input (Hamrick et al., 2016; Wu et al., 2015).

In early stages of life, infants spend a lot of time interacting with objects in a seemingly random manner (Smith & Gasser, 2005). They interact with objects in multiple ways, including throwing, pushing, pulling, breaking, and biting. It is quite possible that this process of actively engaging with objects and watching the consequences of their actions helps infants understand different physical properties of objects which cannot be observed directly using their sensory systems. It seems infants run a series of 'physical' experiments to enhance their knowledge about the world (Gopnik, 2012). The act of performing an experiment is useful both for quickly adapting an agent's policy to a new environment and for understanding object properties in a holistic manner.
Despite impressive advances in artificial intelligence that have led to superhuman performance in Go, Atari and natural language processing, it is still unclear whether the systems behind these advances can rival the scientific intuition of even a small child.

While we draw inspiration from child development, it must be emphasized that our purpose is not to provide an account of learning and thinking in humans, but rather to explore how similar types of understanding might be learned by artificial agents in a grounded way. To this end we show that we can build agents that learn to experiment so as to acquire representations that are informative about physical properties of objects, using deep reinforcement learning. The act of conducting an experiment involves the agent having a belief about the world, which it then updates by observing the consequences of the actions it performs.

We investigate the ability of agents to learn to perform experiments to infer object properties through two environments: Which is Heavier and Towers. In the Which is Heavier environment, the agent is able to apply forces to blocks and it must infer which of the blocks is the heaviest. In the Towers environment the agent's task is to infer how many rigid bodies a tower is composed of by knocking it down. Unlike Wu et al. (2015), we assume that the agent has no prior knowledge about the physical properties of objects, or the laws of physics, and hence must interact with the objects in order to learn to answer questions about these properties.

This is an unusual paper in that it does not present a new model or propose a new algorithm. There is a reinforcement learning task at the core of each of our experiments, but the algorithm and models we use to solve it are not new, and many other existing approaches should be expected to perform equally well if they were to be substituted in the same setting.

Endowing our agents with knowledge of objects would help enormously with planning, reasoning and exploration, and yet doing so is far from trivial. What is an object? It turns out this question does not have a straightforward answer, and this paper is based around the idea that staring at a thing is not enough to understand what it is.

Children understand their world by engaging with it: poking something to find that it is soft, tasting it to discover it is delicious, or hitting it to see if it falls down. Much of the knowledge people have of the world is the result of interaction. Vision or open-loop perception alone is not enough.

This paper introduces tasks where we can evaluate the ability of agents to learn about these 'hidden' properties of objects. This requires environments where the tasks depend on these properties (otherwise the agents have no incentive to learn about them) and also that we have a way to probe for this understanding in agents that complete the tasks.

Previous approaches to this problem have relied on either explicit knowledge of the underlying structure of the environment (e.g. hard-wired physical laws) or on exploiting correlations between material appearance and physical properties (see Section 7 for much more detail). One of the contributions of this paper is to show that our agents can still learn about properties of objects, even when the connection between material appearance and physical properties is broken.

This setting allows us to show that our agents are not merely learning that blocks are heavy; they are learning how to check if blocks are heavy.

None of the previous approaches gives a complete account of how agents could come to understand the physical properties of the world around them. Specifying a model manually is difficult to scale,
This setting allows us to show that our agents are not merely learning that blocks are heavy; they are learning how to check if blocks are heavy.\nNone of the previous approaches give a complete account of how agents could come to understand. the physical properties of the world around them. Specifying a model manually is difficult to scale\nOur results indicate that in the case Which is Heavier environment our agents learn experimentation strategies that are similar to those we would expect from an algorithm designed with knowledge of the underlying structure of the environment. In the Towers environment we show that our agents learn a closed loop policy that can adapt to a varying time scale. In both environments we show that when using the learned interaction policies agents are more accurate and often take less time to produce correct answers than when following randomized interaction policies\nThis paper is a step towards agents that understand objects and intuitive reasoning in physical worlds. Our best AI agents currently fail on simple control tasks and simple games, such as Montezuma's. Revenge, because when they look at a screen that has a ladder, a key and a skull they don't imme-. diately know that keys open doors, that skulls are probably hazardous and best avoided, that ladders. allow us to defy gravity, etc. The understanding of physics, relations and objects enables children to solve seemingly simple problems that our best existing AI agents do not come close to begin to. solve.\ngeneralize and to ground in perception. Making predictions from only visual properties will fail tc distinguish between objects that look similar, and it will certainly be unable to distinguish betweer a sack full of rocks and a sack full of tennis balls.\nWe design environments that follow a three phase structure:\nInteraction Initially there is an exploration phase, where the agent is free to interact with the envi ronment and gather information.\nLabeling The interaction phase ends when the agent produces a labeling action through which i communicates its answer to the implicit question posed by the environment.\nCrucially, the transition between interaction and labeling does not happen at a fixed time, but is initiated by the agent. This is achieved by providing the agent with the ability to produce eithe an interaction action or a labeling action at every time step. This allows the agent to decide wher enough information has been gathered, and forces it to balance the trade-off between answering now given its current knowledge, or delaying its answer to gather more information.\nThe optimal trade-off between information gathering and risk of answering incorrectly depends on. two factors. The first factor is the difficulty of the question and the second is the cost of information. The difficulty is environment specific and is addressed later when we describe the environments.. The cost of information can be generically controlled by varying the discount factor during learning.. A small discount factor places less emphasis on future rewards and encourages the agent to answer as quickly as possible. On the other hand, a large discount factor encourages the agent to spend more time gathering information in order to increase the likelihood of choosing the correct answer..\nOur use of \"questions'' and \"answers'' differs from how these terms are used elsewhere in the litera ture.Sutton et al. (2011) talk about a value function as a question, and the agent provides an answer. 
in the form of an approximation of the value. The answer incorporates the agent's knowledge, and the match between the actual value and the agent's approximation grounds what it means for this knowledge to be accurate.

In our usage the environment (or episode) itself is the question, and answers come in the form of labeling actions. In each episode there is a correct answer whose semantics is grounded in the sign of the reward function, and the accuracy of an agent's knowledge is assessed by the frequency with which it is able to choose the correct answer.

Using reward (rather than value) to ground our semantics means that we have a straightforward way to ask questions that do not depend on the agent's behavior. For example, we can easily ask the question 'which block is heaviest?' without making the question contingent on a particular information acquisition strategy.

We use the same basic agent architecture and training procedure for all of our experiments, making only minimal modifications in order to adapt the agents to different observation spaces and actuators. For all experiments we train recurrent agents using an LSTM with 100 hidden units. When working from features we feed the observations into the LSTM directly. When training from pixels we first scale the observations to 84×84 pixels and feed them through three convolutional layers, each followed by a ReLU non-linearity. The three layers have 32, 64, and 64 square filters with sizes 8, 4, and 3, which are applied at strides of 4, 2, and 1 respectively. We train the agents using Asynchronous Advantage Actor-Critic (Mnih et al., 2016), but ensure that the unroll length is always greater than the timeout length so the agent network is unrolled over the entirety of each episode.
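A minimal PyTorch sketch of this agent, under our own naming, might look as follows; the stated convolution and LSTM sizes come from the description above, while the separate policy and value heads are the standard A3C arrangement and are our assumption.

```python
import torch.nn as nn

class ExperimenterAgent(nn.Module):
    # Pixel agent: a three-layer conv stack over 84x84 RGB observations
    # feeding a 100-unit LSTM, with policy and value heads for A3C.
    def __init__(self, num_actions):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.lstm = nn.LSTMCell(64 * 7 * 7, 100)   # 84x84 -> 7x7 feature maps
        self.policy = nn.Linear(100, num_actions)  # interaction + labeling
        self.value = nn.Linear(100, 1)

    def forward(self, pixels, state=None):
        h, c = self.lstm(self.conv(pixels), state)
        return self.policy(h), self.value(h), (h, c)
```

For feature observations, the conv stack would simply be replaced by an identity mapping into the LSTM, as described above.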
information about the masses of the blocks is to interact with them and watch how they respond..\nThe Which is Heavier environment is designed to encode a latent bandit problem through a \"physi cal' lens. Each block corresponds to an arm of the bandit, and the reward obtained by pulling each arm is proportional to the mass of the block. Identifying the heaviest block can then be seen as a best arm identification problem (Audibert & Bubeck]2010). Best arm identification is a well studied problem in experimental design, and understanding of how an optimal solution to the latent bandit should behave is used to guide our analysis of the agents we train on this task.\nIt is important to emphasize that we cannot simply apply standard bandit algorithms here, because we impose a much higher level of prior ignorance on our algorithms than that setting allows. Ban dit algorithms assume that rewards are observed directly, whereas our agents observe mass througl its role in dynamics (and in the case of learning from pixels, through the lens of vision as well) To maintain a bandit setting one could imagine parameterizing this transformation from reward tc observation, and perhaps even learning the mapping as well; however, doing so requires explicitly acknowledging the mapping in the design of the learning algorithm, which we avoid doing. More over, acknowledging this mapping in any way requires the a-priori recognition of the existence o the latent bandit structure. From the perspective of our learning algorithm the mere existence o such a structure also lies beyond the veil of ignorance.\nControlling the distribution of masses allows us to control the difficulty of this task. In particular by controlling the size of the mass gap between the two heaviest blocks we can make the task more\nFeatures Pixels 1.0 1.0 0.8 0.8 0.6 0.6 3 3 0.4 0.4 5 5 0.2 0.2 M 10 10 0.0 0.0 0 5 10 15 20 0 5 10 15 20 Steps (x1e6) Steps (x1e6)\nFigure 2: Learning curves for a typical agent trained on the Which is Heavier environment at varying. difficulty settings. The y-axes show the probability of the agent producing the correct answer before. the episode times out. Each plot shows the top 50% of agents started from 10 random seeds with identical hyperparameter settings. The light lines show learning curves from individual agents, and. the dark lines show the median performance across the displayed runs for each difficulty. Left: Agents trained from features. Right: Agents trained from pixels..\nor less difficult. We generate masses in the range [0, 1] and scale them to an appropriate range fo. the agent's strength.\nWe use the following scheme for controlling the difficulty of the Which is Heavier environment First we select one of the blocks uniformly at random to be the \"heavy\" block and designate the remaining three as \"light'' blocks. We sample the mass of the heavy block from Beta(3, 1) and the mass of the light blocks from Beta(1, ). The single parameter effectively controls the distribution of mass gaps (and thus controls the difficulty), with large values of leading to easier problems. Figure|1|shows the distribution of mass gaps for three values of that we use in our experiments.\nWe distinguish between problem level and instance level difficulty for this domain. Instance leve. difficulty refers to the size of the mass gap in a single episode. If the mass gap is small it is harde. to determine which block is heaviest, and we say that one episode is more difficult than another by. comparing their mass gaps. 
Problem level difficulty refers to the shape of the generating distributioi. of mass gaps (e.g. as shown in the right panel of Figure[1). A distribution that puts more mass or. configurations that have a small mass gap will tend to generate more episodes that are difficult at the. instance level, and we say that one distribution is more difficult than another if it is more likely t generate instances with small mass gaps. We control the problem level difficulty through , but w. incorporate both problem and instance level difficulty in our analysis.\nWe set the episode length limit to 100 steps in this environment, which is sufficient time to be much longer than a typical episode by a successfully trained agent..\nThe obvious choice for actuation in physical domains is some kind of arm or hand based manipulator However, controlling an arm or hand is quite challenging on its own, requiring a fair amount of dexterity on the part of the agent. The manipulation problem, while very interesting in its owr right, is orthogonal to our goals in this work. Therefore we avoid the problem of learning dexterou manipulation by providing the agent with a much simpler form of actuation.\nWe call the actuation strategy for this environment direct actuation, which allows the agent to affect forces on the different blocks directly. At every time step the agent can output one out of eight possible actions. The first four actions result in an application of a vertical force of fixed magnitude to center of mass of each of the four blocks respectively. The remaining actions are labeling actions and correspond to agent's selection of which is the heaviest block."}, {"section_index": "4", "section_name": "5.3 EXPERIMENTS", "section_text": "Our first experiment is a sanity check to show that we can train agents successfully on the Whicl. is Heavier environment using both features and pixels. This experiment is designed simply to shov that our task is solvable, and to illustrate that by changing the problem difficulty we can make the. task very hard.\nWe present two additional experiments showing how varying difficulty leads to differentiated behav ior both at the problem level and at the instance level. In both cases knowledge of the latent bandii problem allows us to make predictions about how an experimenting agent should behave, and our experiments are designed to show that qualitatively correct behavior is obtained by our agents ir spite of their a-priori ignorance of the underlying bandit problem.\nWe show that as we increase the problem difficulty the learned policies transition from guessin immediately when a heavy block is found to strongly preferring to poke all blocks before making decision. This corresponds to the observation that if it is unlikely for more than one arm to give hig reward then any high reward arm is likely to be best.\nWe also observe that our agents can adapt their behavior to the difficulty of individual problem instances. We show that a single agent will tend to spend longer gathering information when the particular problem instance is more difficult. This corresponds to the observation that when the two best arms have similar reward then more information is required to accurately distinguish them.\nFinally, we conduct an experiment comparing our learned information gathering policies to a ran domized baseline method. This experiment shows that agents more reliably produce the correct. label by following their learned interaction policies than by observing the environment being driven. 
by random actions.\nSuccess in learning For this experiment we trained several agents at three different difficulties. corresponding to E {3, 5,10}. For each problem difficulty we trained agents on both feature observations, which includes the z coordinate of each of the four blocks; and also using raw pixels,. providing 84 84 pixel RGB rendering of the scene to the agent. Representative learning curves. for each condition are shown in Figure[2 The curves are smoothed over time and show a running estimate of the probability of success, rather than showing the reward directly..\nThe agents do not reach perfect performance on this task, with more difficult problems plateauing a1 progressively lower performance. This can be explained by looking at the distributions of instance level difficulties generated by different settings of , which is shown in the right panel of Figure|1 For higher difficulties (lower values of ) there is a substantial probability of generating problem instances where the mass gap is near O, which makes distinguishing between the two heaviest blocks very difficult.\nPopulation strategy differentiation For this experiment we trained agents at three different dif ficulties corresponding to e {3, 5, 10} all using a discount factor of y = 0.95 which corresponds a relatively high cost of gathering information. We trained three agents for each difficulty and shou results aggregated across the different replicas.\nAfter training, each agent was run for 10,o00 steps under the same conditions they were exposed to during training. We record the number and length of episodes executed during the testing period as well as the outcome of each episode. Episodes are terminated by timeout after 100 steps, but the vast majority of episodes are terminated in < 30 steps by the agent producing a label. Since episodes vary in length not all agents complete the same number of episodes during testing.\nThe left plot in Figure [3]shows histograms of the episode lengths broken down by task difficulty.. The dashed vertical line indicates an episode length of four interaction steps, which is the minimum. number of actions required for the agents to interact with every block. At a task difficulty of = 10. the agents appear to learn simply to search for a single heavy block (which can be found with an. average of two interactions). However, at a task difficulty of 3 = 3 we see a strong bias away from terminating the episode before taking at least four exploratory actions..\nIndividual strategy differentiation For this experiment we trained agents using the same three task difficulties as in the previous experiment, but with an increased discount factor of y = 0.99\nDiscount 0.95 Difficulty 5 Discount 0.99 Difficulty 3 Discount 0.99 100 100 3 80 80 5 10 60 60 40 40 20 20 15 30 0.0 0.5 1.0 0.0 0.5 1.0 Episode Length Normalized Gap Normalized Gap\nFigure 3: Left: Histograms of episode lengths for different task difficulty () settings. There is a transition from = 10 where the agents answer eagerly as soon as they find a heavy block to = 3 where the agents are more conservative about answering before they have acted enough to poke al the blocks at least once. Right: Episode lengths as a function of the normalized mass gap. Unit. on the x-axes are scaled to the range of possible masses, and the y-axis shows the number of step. before the agent takes a labeling action. 
This decreases the cost of exploration and encourages the agents to gather more information before producing a label, leading to longer episodes.

After training, each agent was run for 100,000 steps under the same conditions they were exposed to during training. We record the length of each episode, as well as the mass gap between the two heaviest blocks in each episode. In the same way that we use the distribution of mass gaps as a measure of task difficulty, we can use the mass gap in a single episode as a measure of the difficulty of that specific problem instance. We again exclude from the analysis the very small proportion of episodes that terminate by timeout.

The right plots in Figure 3 show the relationship between the mass gap and episode length across the testing runs of two different agents. From these plots we can see how a single agent has learned to adapt its behavior based on the difficulty of a single problem instance. Although the variance is high, there is a clear correlation between the mass gap and the length of the episodes. This behavior reflects what we would expect from a solution to the latent bandit problem; more information is required to identify the best arm when the second best arm is nearly as good.

Randomized interaction For this experiment we trained several agents using both feature and pixel observations at the same three task difficulties with a discount of γ = 0.95. In total we trained six sets of agents for this experiment.

After training, each agent was run for 10,000 steps under the same conditions used during training. We record the outcome of each episode, as well as the number of steps taken by each agent before it chooses a label. For each agent we repeat the experiment using both the agent's learned interaction policy and a randomized interaction policy.

The randomized interaction policy is obtained as follows: At each step the agent chooses a candidate action using its learned policy. If the candidate action is a labeling action then it is passed to the environment unchanged (and the episode terminates). However, if the candidate action is an interaction action then we replace the agent's action with a new interaction action chosen uniformly at random from the available action set. When following the randomized interaction policy the agent has no control over the information gathering process, but still controls when each episode ends, and what label is chosen.

Figure 4: Comparison between agents in the Which is Heavier environment following their learned interaction policies vs the randomized interaction policy baseline. The x-axes show Difficulty-Observation combinations (e.g. 10-F is difficulty 10 with feature observations and 3-P is difficulty 3 with pixel observations). Left: Episode lengths when gathering information using the different interaction policies. Right: Probability of choosing the correct label under different conditions (episodes terminating in timeout have been excluded). The dashed line shows chance performance.

Figure 4 compares the learned interaction policies to the randomized interaction baselines. The results show that the effect on episode length is small, with no consistent bias towards longer or shorter episodes across difficulties and observation types. However, the learned interaction policies produce more accurate labels across all permutations.
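The randomized baseline can be summarized in a few lines; this sketch assumes a hypothetical agent.act interface and explicit action sets, none of which are specified by the paper.

```python
import random

def randomized_step(agent, observation, interaction_actions, label_actions):
    # The agent proposes an action; a labeling action passes through
    # unchanged (the agent still decides when to answer and which label to
    # give), while any interaction action is replaced by one drawn uniformly
    # from the interaction set.
    candidate = agent.act(observation)   # hypothetical agent interface
    if candidate in label_actions:
        return candidate
    return random.choice(interaction_actions)
```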
The environment is designed so that in its initial configuration it is not possible to determine the number of rigid bodies from vision or features alone.\nThe environment is diagrammed in the left panel of Figure 5 It consists of a tower of five blocks which can move freely in three dimensions. The initial block tower is always in the same configura- tion but in each episode we bolt together different subsets of the blocks to form larger rigid bodies as shown in the figure.\nThe question to answer in this environment is how many rigid bodies are formed from the primitive. blocks. Since which blocks are bound together is randomly assigned in each episode, and binding. forces are invisible, the agent must poke the tower and observe how it falls down in order to deter- mine how many rigid bodies it is composed of. We parameterize the environment in such a way that the distribution over the number of separate blocks in the tower is uniform. This ensures that there. is no single action strategy that achieves high reward.."}, {"section_index": "5", "section_name": "6.2 ACTUATORS", "section_text": "In the Towers environment, we used two actuators: direct actuation, which is similar to the Which is Heavier environment; and the fist actuator, described below. In case of the direct actuation, the agent can output one out of 25 actions. At every time step, the agent can apply a force of fixed. magnitude in either of +x, -x, +y or -y direction to one out of the five blocks. If two blocks are glued together, both blocks move under the effect of force. We use towers of five blocks, which results in 20 different possible actions. The remaining actions are labeling actions that are used by the agent to indicate the number of distinct blocks in the tower..\nThe fist is a large spherical object that the agent can actuate by setting velocities in a 2D horizontal. plane. Unlike direct actuation, the agent cannot apply any direct forces to the objects that constitute the tower, but only manipulate them by pushing or hitting them with the fist. At every time step. agent can output one of nine actions. The first four actions corresponds to setting the velocity ol the fist to a constant amount in (+x, -x, +y, -y) directions respectively. The remaining actions are labeling actions, that are used by the agent to indicate the number of distinct blocks in the tower.\nIn order to investigate if the agent learns a strategy of stopping after a fixed number of time steps or whether it integrates sensory information in a non-trivial manner we used a notion of \"control time step'. The idea of control time step is similar to that of action repeats and if the physics simulatior time step is O.025s and control time step is 0.1s, it means that the same action is repeated 4 times. For the direct actuators we use an episode timeout of 26 steps and for both actuator types.\nEpisode Length. Probability of Success 30 1.0 Active 25 0.8 Randomized 20 0.6 15 0.4 10 Active 0.2 5 Randomized + 0 0.0 10-F 5-F 3-F 10-P 5-P 3-P 10-F 5-F 3-F 10-P 5-P 3-P\nEpisode Length Probability of Success 30 1.0 Active 25 0.8 Randomized 20 0.6 15 0.4 10 Active 0.2 5 Randomized 0 + 0.0 10-F 5-F 3-F 10-P 5-P 3-P 10-F 5-F 3-F 10-P 5-P 3-P\nFigure 4: Comparison between agents in the Which is Heavier environment following their learned interaction policies vs the randomized interaction policy baseline. The x-axes show Difficulty- Observation combinations (e.g. 
Figure 5: Top: Example trajectory of a block tower being knocked down using the fist actuator. Left: Diagram of the hidden structure of the Towers environment. The tower on the left is composed of five blocks, but could decompose into rigid bodies in several different ways that can only be distinguished by interacting with the tower. Right: Behavior of a single trained agent using the fist actuator when varying the control time step. The x-axis shows different control time step lengths (the training condition is 0.1). The blue line shows the probability of the agent correctly identifying the number of blocks. The red line shows the median episode length (in seconds), with error bars showing 95% confidence intervals computed over 50 episodes. The shaded region shows +/-1 control time step around the median.

"}, {"section_index": "6", "section_name": "6.3 EXPERIMENTS", "section_text": "Our first experiment is again intended to show that we can train agents in this environment. We show simply that the task is solvable by our agents using both types of actuation.

The second experiment shows that the agents learn to wait for an observation where they can identify the number of rigid bodies before producing an answer. This is designed to show that the agents find a closed-loop strategy for counting the number of rigid bodies. An alternative hypothesis would be that agents learn to wait for (approximately) the same number of steps each time and then take their best guess.

Our third experiment compares the learned policy to a randomized interaction policy and shows that agents are able to determine the correct number of blocks in the tower more quickly and more reliably when using their learned policy to gather information.

Success in learning For this experiment we trained several agents on the Towers environment using different pairings of actuators and perception. The feature observations include the 3d position of each primitive block, and when training using raw pixels we provide an 84×84 pixel RGB rendering of the scene as the agent observation. Figure 6 shows learning curves for each combination of actuator and observation type.

In all cases we obtain agents that solve the task nearly perfectly, although when training from pixels we find that the range of hyperparameters which train successfully is narrower than when training from features. Interestingly, the fist actuator leads to the fastest learning, in spite of the fact that the agent must manipulate the blocks indirectly through the fist. One possible explanation is that the fist can affect multiple blocks in one action step, whereas with direct actuation only one block can be affected per time step.

Waiting for information For this experiment we trained an agent with pixel observations and the fist actuator on the Towers task with a control time step of 0.1 seconds, and examine its behavior at test time with a smaller delay between actions.
Reducing the control time step means that, from the agent's perspective, time has been slowed down: moving the fist a fixed distance takes longer, as does waiting for the block tower to collapse once it has been hit.

After training, the agent was run for 10,000 steps for a range of different control time steps. We record the outcome of each episode, as well as the number of steps taken by the agent before it chooses a label. None of the test episodes terminate by timeout, so we include all of them in the analysis.

The plot in Figure 5 shows the probability of answering correctly, as well as the median length of each episode measured in seconds. In terms of absolute performance we see a small drop compared to the training setting, where the agent is essentially perfect, but the agent's performance remains good even for substantially smaller control time steps than were used during training.

Figure 6: Learning curves for agents trained on the Towers environment under different conditions. The y-axes show the probability of the agent producing the correct answer before the episode times out. The different plots show different pairings of observations and actuators, as indicated in the plot titles. Each plot shows the top 50% of runs from 10 random seeds with identical hyperparameter settings. The black lines show learning curves from individual agents, and the red lines show the median performance of the displayed runs.

We also observe that episodes with different control time steps take approximately the same amount of real time across the majority of the tested range. This corresponds to a large change in episode length as measured by the number of agent actions, since with a control time step of 0.01 the agent must execute 10x as many actions to cover the same amount of real time as with the control time step used during training. From this we can infer that the agent has learned to wait for an informative observation before producing a label, as opposed to the simpler degenerate strategy of waiting a fixed number of steps before answering.

Randomized interaction. For this experiment we trained several agents for each combination of actuator and observation type, and examined their behavior when observing an environment driven by a random interaction policy. The randomized interaction policy is identical to the randomized baseline used in the Which is Heavier environment.

After training, each agent was run for 10,000 steps. We record the outcome of each episode, as well as the number of steps taken by the agent before it chooses a label. For each agent we repeat the experiment using both the agent's learned interaction policy and the randomized interaction policy.

Figure 7 compares the learned interaction policies to the randomized interaction baselines. The results show that the agents tend to produce labels more quickly when following their learned interaction policies, and also that the labels they produce in this way are much more accurate.

Figure 7: Comparison between agents in the Towers environment following their learned interaction policies vs the randomized interaction policy baseline. The x-axes show different Observation-Actuator combinations (e.g. D-F is Direct-Features and F-P is Fist-Pixels). Left: Episode lengths when gathering information using the different interaction policies. Right: Probability of choosing the correct label under different conditions (episodes terminating in timeout have been excluded). The dashed line shows chance performance.
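The evaluation protocol behind Figure 7 can be summarized in a few lines. This is a hedged sketch under assumed interfaces: `env.is_label_action`, `env.force_actions`, `agent.act`, and the `info["label_correct"]` field are placeholders for whatever the actual implementation exposes.

```python
import random

def run_episode(env, interaction_policy, agent, max_steps=26):
    """Roll out one episode; return (answered_correctly, steps_until_label).

    interaction_policy supplies the force actions that drive the environment;
    the trained agent watches the observations and commits to an answer by
    emitting a labeling action. Episodes that time out are excluded later.
    """
    obs = env.reset()
    for t in range(max_steps):
        action = agent.act(obs)
        if not env.is_label_action(action):
            # Learned condition: interaction_policy is the agent itself.
            # Randomized condition: interaction_policy picks random forces.
            action = interaction_policy(obs)
        obs, reward, done, info = env.step(action)
        if done:
            return info["label_correct"], t + 1
    return None, max_steps  # timeout: excluded from the analysis

def randomized_policy(env):
    """Baseline interaction: a uniformly random force action at every step."""
    return lambda obs: random.choice(env.force_actions)
```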
"}, {"section_index": "7", "section_name": "7 RELATED WORK", "section_text": "Deep learning techniques in conjunction with vast labeled datasets have yielded powerful models for image classification (Krizhevsky et al., 2012; He et al., 2016) and speech recognition (Hinton et al., 2012). In recent years, as we have approached human-level performance on these tasks, there has been strong interest in the computer vision community in moving beyond semantic classification, to tasks that require a deeper and more nuanced understanding of the world.

Inspired by developmental studies (Smith & Gasser, 2005), some recent works have focused on learning representations by predicting physically embodied quantities such as ego-motion (Agrawal et al., 2015; Jayaraman & Grauman, 2015), instead of symbolic labels. Extending the realm of things-to-be-predicted to include quantities beyond class labels, such as viewer-centric parameters (Doersch et al., 2015) or the poses of humans within a scene (Delaitre et al., 2012; Fouhey et al., 2014), has been shown to improve the quality of feature learning and scene understanding. Researchers have also looked at cross-modal learning, for example synthesizing sounds from visual images (Owens et al., 2015), using summary statistics of audio to learn features for object recognition (Owens et al., 2016), or image colorization (Zhang et al., 2016).

Inverting the prediction tower, another line of work has focused on learning about the visual world by synthesizing, rather than analyzing, images. Major cornerstones of recent work in this area include the Variational Autoencoders of Kingma & Welling (2014) and the Generative Adversarial Networks of Goodfellow et al. (2014).

Building on models of single-image synthesis, there have been many works on predicting the evolution of video frames over time (Ranzato et al., 2014; Srivastava et al., 2015; van den Oord et al., 2016). Xue et al. (2016) have approached this problem by designing a variational autoencoder architecture that uses the latent stochastic units of the VAE to make choices about the direction of motion of objects, and generates future frames conditioned on these choices.

A different form of uncertainty in video prediction can arise from the effect of actions taken by an agent. In environments with deterministic dynamics (where the possibility of "known unknowns" can, in principle, be eliminated), very accurate action-conditional predictions of future frames can be made (Oh et al., 2015). Introducing actions into the prediction process amounts to learning a latent forward dynamics model, which can be exploited to plan actions to achieve novel goals (Watter et al., 2015; Assael et al., 2015; Fragkiadaki et al., 2016).
In these works, frame synthesis plays the role of a regularizer, preventing collapse of the feature space in which the dynamics model lives.

Agrawal et al. (2016) break the dependency between frame synthesis and dynamics learning by replacing frame synthesis with an inverse dynamics model. The forward model plays the same role as in the earlier works, but here feature-space collapse is prevented by ensuring that the model can decode actions from pairs of time-adjacent images. Several works, including Agrawal et al. (2016) and Assael et al. (2015) mentioned above, but also Pinto et al. (2016), Pinto & Gupta (2016), and Levine et al. (2016), have gone further in coupling feature learning and dynamics. The learned dynamics models can be used for control not only after learning but also during the learning process, in order to collect data in a more targeted way, which has been shown to improve the speed and quality of learning in robot manipulation tasks.

A key challenge of learning from dynamics is collecting the appropriate data. An ingenious solution is to import real-world data into a physics engine and simulate the application of forces in order to generate ground-truth data. This is the approach taken by Mottaghi et al. (2016), who generate an "interactable" dataset of scenes, which they use to produce a static dataset of image and force pairs, along with the ground-truth trajectory of a target object in response to the application of the indicated force.

When the purpose is learning an intuitive understanding of dynamics, it is possible to do interesting work with entirely synthetic data (Fragkiadaki et al., 2016; Lerer et al., 2016). Lerer et al. (2016) show that convolutional networks can learn to make judgments about the stability of synthetic block towers based on a single image of the tower. They also show that their model trained on synthetic data is able to generalize to make accurate judgments about photographs of similar block towers built in the real world.

Making intuitive judgments about block towers has been extensively studied in the psychophysics literature. There is substantial evidence connecting human judgments to inference over an explicit latent physics model (Hegarty, 2004; Hamrick et al., 2011; Battaglia et al., 2013). Humans can infer mass by watching movies of complex rigid-body dynamics (Hamrick et al., 2016).

A major component of the above line of work is analysis by synthesis, in which understanding of a physical process is obtained by learning to invert it. Observations are assumed to be generated from an explicitly parameterized generative model of the true physical process, and provide constraints to an inference process run over the parameters of this model. The analysis-by-synthesis approach has been extremely influential due to its power to explain human judgments and generalization patterns in a variety of situations (Lake et al., 2015).

Galileo (Wu et al., 2015) is a particularly relevant instance of tying together analysis by synthesis and deep learning for understanding dynamics. This system first infers the physical parameters (mass and friction coefficient) of a variety of blocks by watching videos of them sliding down slopes and colliding with other blocks. This stage of the system uses an off-the-shelf object tracker to ground
inference over the parameters of a physical simulator, and the inference is achieved by matching simulated and observed block trajectories. The inferred physical parameters are used to train a deep network to predict the physical parameters from the initial frame of video. At test time the system is evaluated by using the deep network to infer the physical parameters of new blocks, which can be fed into the physics engine and used to answer questions about behaviors not observed at training time.

Physics 101 (Wu et al., 2016) is an extension of Galileo that more fully embraces deep learning. Instead of using a first pass of analysis by synthesis to infer physical parameters from observations, a deep network is trained to regress the output of an object tracker directly, and the relevant physical laws are encoded directly into the architecture of the model. The authors show that they can use latent intrinsic physical properties inferred in this way to make novel predictions. The approach of encoding physical models as architectural constraints has also been proposed by Stewart & Ermon (2016).

Many of the works discussed thus far, including Galileo and Physics 101, are restricted to passive sensing. Pinto et al. (2016), Pinto & Gupta (2016), Agrawal et al. (2016), and Levine et al. (2016) are exceptions, because they learn their models using a sequential greedy data-collection bootstrapping strategy. Active sensing, it appears, is an important aspect of visual object learning in toddlers, as argued by Bambach et al. (2016), providing motivation for the approach presented here.

In computer vision, it is well known that recognition performance can be improved by moving so as to acquire new views of an object or scene. Jayaraman & Grauman (2016), for example, apply deep reinforcement learning to construct an agent that chooses how to acquire new views of an object so as to classify it into a semantic category, and their related work section surveys many other efforts in active vision.

While Jayaraman & Grauman (2016) and others share deep reinforcement learning and active sensing in common with our work, their goal is to learn a policy that can be applied to images to make decisions based on vision. In contrast, the goal of this paper is to study how agents learn to experiment continually so as to learn representations that answer questions about intrinsic properties of objects. In particular, our focus is on tasks that can only be solved by interaction and not by vision alone.

Despite recent advances in artificial intelligence, machines still lack a common-sense understanding of our physical world. There has been impressive progress in recognizing objects, segmenting object boundaries and even describing visual scenes with natural language. However, these tasks are not enough for machines to infer physical properties of objects such as mass, friction or deformability.

We introduce a deep reinforcement learning agent that actively interacts with physical objects to infer their hidden properties. Our approach is inspired by findings from the developmental psychology literature indicating that infants spend a lot of their early time experimenting with objects through random exploration (Smith & Gasser, 2005; Gopnik, 2012; Spelke & Kinzler, 2007). By letting our agents conduct physical experiments in an interactive simulated environment, they learn to manipulate objects and observe the consequences to infer hidden object properties.
We demonstrate the efficacy of our approach on two important physical understanding tasks: inferring mass and counting the number of objects under strong visual ambiguities. Our empirical findings suggest that our agents learn different strategies for these tasks, balancing the cost of gathering information against the cost of making mistakes in different situations.

Scientists and children are able not only to probe the environment to discover things about it, but also to leverage their findings to answer new questions. In this paper we have shown that agents can be trained to gather knowledge to answer questions about hidden properties, but we have not addressed the larger issue of theory building, or transfer of this information. Given agents that can make judgments about mass and numerosity, how can they be enticed to leverage this knowledge to solve new tasks?

Another important aspect of understanding through interaction is that the shape of the interactions influences behavior. We touched on this in the Towers environment, where we looked at two different actuation styles, but there is much more to be done here. Thinking along these lines leads naturally to exploring tool use. We showed that agents can make judgments about object mass by hitting objects, but could we train an agent to make similar judgments using a scale?

Finally, we have made no attempt in this work to optimize data efficiency, but learning physical properties from fewer samples is an important direction to pursue.

"}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Matt Hoffman for several enlightening discussions about bandits. We would also like to thank the ICLR reviewers, whose helpful feedback allowed us to greatly improve the paper.

"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Pulkit Agrawal, Joao Carreira, and Jitendra Malik. Learning to see by moving. In IEEE International Conference on Computer Vision, pp. 37-45, 2015.

Pulkit Agrawal, Ashvin Nair, Pieter Abbeel, and Jitendra Malik. Learning to poke by poking: Experiential learning of intuitive physics. In Neural Information Processing Systems, 2016.

John-Alexander M Assael, Niklas Wahlstrom, Thomas B Schon, and Marc Peter Deisenroth. Data-efficient learning of feedback policies from image pixels using deep dynamical models. arXiv preprint arXiv:1510.02173, 2015.

Jean-Yves Audibert and Sebastien Bubeck. Best arm identification in multi-armed bandits. In Conference on Learning Theory, 2010.

Sven Bambach, David J Crandall, Linda B Smith, and Chen Yu. Active viewing in toddlers facilitates visual object learning: An egocentric vision approach. CogSci, 2016.

Peter W Battaglia, Jessica B Hamrick, and Joshua B Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45):18327-18332, 2013.

Vincent Delaitre, David F Fouhey, Ivan Laptev, Josef Sivic, Abhinav Gupta, and Alexei A Efros. Scene semantics from long-term observation of people. In European Conference on Computer Vision, pp. 284-298. Springer, 2012.

Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430, 2015.
David F Fouhey, Vincent Delaitre, Abhinav Gupta, Alexei A Efros, Ivan Laptev, and Josef Sivic. People watching: Human actions as a cue for single view geometry. International Journal of Computer Vision, 110(3):259-274, 2014.

Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning visual predictive models of physics for playing billiards. ICLR, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Alison Gopnik. Scientific thinking in young children: Theoretical advances, empirical research, and policy implications. Science, 337(6102):1623-1627, 2012.

Jessica Hamrick, Peter Battaglia, and Joshua B Tenenbaum. Internal physics models guide probabilistic judgments about object dynamics. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, pp. 1545-1550. Cognitive Science Society, Austin, TX, 2011.

Jessica B Hamrick, Peter W Battaglia, Thomas L Griffiths, and Joshua B Tenenbaum. Inferring mass in complex scenes by mental simulation. Cognition, 157:61-76, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition, 2016.

Mary Hegarty. Mechanical reasoning by mental simulation. Trends in Cognitive Sciences, 8(6):280-285, 2004.

Geoffrey E. Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.

Dinesh Jayaraman and Kristen Grauman. Learning image representations tied to ego-motion. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1413-1421, 2015.

Dinesh Jayaraman and Kristen Grauman. Look-ahead before you leap: End-to-end active recognition by forecasting the effect of motion. In European Conference on Computer Vision, pp. 489-505, 2016.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.

Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. In International Conference on Machine Learning, pp. 430-438, 2016.

Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. arXiv preprint arXiv:1603.02199, 2016.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, and Ali Farhadi. "What happens if...": Learning to predict the effect of forces in images. In European Conference on Computer Vision, pp. 269-285, 2016.
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Neural Information Processing Systems, pp. 2863-2871, 2015.

Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H Adelson, and William T Freeman. Visually indicated sounds. arXiv preprint arXiv:1512.08512, 2015.

Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for visual learning. In European Conference on Computer Vision, pp. 801-816. Springer, 2016.

Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In IEEE International Conference on Robotics and Automation, pp. 3406-3413, 2016.

Lerrel Pinto, Dhiraj Gandhi, Yuanfeng Han, Yong-Lae Park, and Abhinav Gupta. The curious robot: Learning visual representations via physical interactions. In European Conference on Computer Vision, pp. 3-18, 2016.

Marc'Aurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.

Linda Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies. Artificial Life, 11(1-2):13-29, 2005.

Elizabeth S Spelke and Katherine D Kinzler. Core knowledge. Developmental Science, 10(1):89-96, 2007.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. CoRR, abs/1502.04681, 2015.

Russell Stewart and Stefano Ermon. Label-free supervision of neural networks with physics and domain knowledge. arXiv preprint arXiv:1609.05566, 2016.

Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, pp. 761-768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016."}]
ryZqPN5xe
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Training generalizable models using only a small amount of data has proved a significant challenge in the field of machine learning since its inception. This is especially true when using artificial neura networks, with millions or billions of parameters. Conventional wisdom gleaned from the surge ir popularity of neural network models indicates that extremely large quantities of data are required foi these models to be effectively trained. Indeed the work fromKrizhevsky et al.(2012) has commonly been cited as only being possible through the development of ImageNet (Russakovsky et al.(2015)) As neural networks become explored by practitioners in more specialized domains, the volume of available labeled data also narrows. Although training methods have improved, it is still difficult tc train deep learning models on small quantities of data, such as only tens or hundreds of examples\nThe current paradigm for solving this problem has come through the use of pre-trained neural net works.Bengio et al.(2012) were able to show that transfer of knowledge in networks could be achieved by first training a neural network on a domain for which there is a large amount of data and then retraining that network on a related but different domain via fine-tuning its weights. Though this approach demonstrated promising results on small data, these models do not retain the ability to function as previously trained. That is, these models end up fine tuning their weights to the new learning task, forgetting many of the important features learned from the previous domain.\nThe utility of pre-training models extends beyond training on small data. It is also used as ar effective initialization technique for many complicated models (Jaderberg et al.(2015); Lakkaraju et al.(2014)). This, in addition to the continuing trend of treating specific network layer architectures as modular components to compose more advanced models (He et al.[(2015);Larsson et al. (2016) Szegedy et al.(2015);[Abadi et al.(2016)) informs our work as we seek to use pre-trained models as"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "an architectural framework to build upon. Instead of overwriting these models and fine-tuning the internal representations to a specific task, we propose composing pre-trained models as modules in a higher order architecture where multiple, potentially distinct representations contribute to the task With this approach, useful representations already learned are not forgotten and new representations specific to the task are learned in other modules in the architecture.\nIn this paper we present our neuro-modular approach to fine-tuning. We demonstrate how mod. ules learn subtle features that pre-trained networks may have missed. We quantitatively compare. traditional fine-tuning with our modular approach, showing that our approach is more accurate or. small amounts of data (<100 examples per class). We also demonstrate how to improve classifica tion in a number of experiments, including CIFAR-100, text classification, and fine-grained image. classification. all with limited data.\nTransferring knowledge from a source domain to a target domain is an important challenge in ma chine learning research. Many shallow methods have been published, those that learn feature in-. variant representations or by approximating value without using an instance's label (Pan & Yang. 
Transferring knowledge from a source domain to a target domain is an important challenge in machine learning research. Many shallow methods have been published, including those that learn feature-invariant representations or that approximate values without using an instance's label (Pan & Yang, 2010; Sugiyama et al., 2008; Pan et al., 2011; Zhang et al., 2013; Wang & Schneider, 2014; Gong et al., 2016). More recent deep transfer learning methods enable identification of variational factors in the data and align them to disparate domain distributions (Tzeng et al., 2014; Long et al., 2015; Ganin & Lempitsky, 2014; Tzeng et al., 2015). Mesnil et al. (2012) present the Unsupervised and Transfer Learning Challenge and discuss the important advances that are needed for representation learning, and the importance of deep learning in transfer learning. Oquab et al. (2014) applied these techniques to mid-level image representations using CNNs. Specifically, they showed that image representations learned in visual recognition tasks (ImageNet) can be transferred efficiently to other visual recognition tasks (Pascal VOC). Further study of the transferability of features by Yosinski et al. (2014) showed the surprising results that features from distant tasks perform better than random features, and that difficulties arise when optimization splits networks between co-adapted neurons. We build on these results by leveraging existing representations to transfer to target domains without overwriting the pre-trained models through standard fine-tuning approaches.

Long et al. (2015) developed the Deep Adaptation Network (DAN) architecture for convolutional neural networks, which embeds the hidden representations of all task-specific layers in a reproducing kernel Hilbert space. This allows the means of different domain distributions to be matched. Another feature of their work is that it scales linearly and provides statistical guarantees on transferable features. The Net2Net approach (Chen et al., 2015) accelerates the training of larger neural networks by allowing them to grow gradually, using function-preserving transformations to transfer information between neural networks. However, it does not guarantee that existing representational power will be preserved on a different task. Gong et al. (2016) consider domain adaptation where transfer from source to target is modeled as a causal system. Under these assumptions, conditional transferable components are extracted which are invariant after location-scale transformations. Long et al. (2016) proposed a new method that overcomes the need for conditional components by comparing joint distributions across domains. Unlike our work, all of these require explicit assumptions about, or modifications to, the pre-trained networks to facilitate adaptation.

We note that while writing this paper, the progressive network architecture of Rusu et al. (2016) was released, sharing a number of qualities with our work. Both the results we present here and progressive networks allow neural networks to extend their knowledge without forgetting previous information. In addition, Montone et al. (2015) discuss a semi-modular approach that also freezes the weights of the original network, although it does not focus on the small-data regime where only a few tens of examples may be available. Our modular approach detailed here focuses on leveraging small data to adapt to different domains. Our architecture also complements existing network-building strategies, such as downloading pre-trained neural networks to then be fine-tuned for domain adaptation.

Figure 1: The modular approach to neural networks involves feeding data through one or more pre-existing neural networks as well as a new network, the module. The existing networks have their weights locked, so they will not be altered by the training process. Only the module weights are trained. The end result is a representation that adds a new representation to an existing representation without losing any information from the original network.
The existing networks have thei weights locked, so they will not be altered by the training process. Only the module weights are trained. The end result is a representation that adds a new representation to an existing representatior without losing any information from the original network.\nPretrained Network w/o Softmax 'Rooster\" concat concat concat Stitched Modules\nFigure 2: Modular networks do not simply need to be two models in parallel. Here, we presen the stitched module approach. We insert a small neural network between each layer of the origina network. This way, the modules explicitly receive information about the representations at eacl layer of the pre-trained network.\nExisting Network (locked) >\"Rooster\" Module (to be trained)"}, {"section_index": "2", "section_name": "3 MODULAR ARCHITECTURE", "section_text": "Generically, modular neural networks are directed graphs of pre-trained networks linked togethe with auxiliary, untrained networks. Depicted in Fig.1] one only trains the new components of the network. The architecture could take the form of simply placing two networks in parallel (the two towers approach), shown in Fig.1 In addition, the architecture could interleave the modules with the layers of the pre-trained network (the stitch approach), shown in Fig.2"}, {"section_index": "3", "section_name": "3.1 LEARNED FILTERS", "section_text": "To visualize images that maximally stimulate each filter, we followed the approach of Zeiler & Fergus(2014). We set the objective function to be the activation of the filter we were querying. We then conducted back-propagation. Instead of using the gradients to alter the weights, we used the gradients at the input layer to alter the pixels themselves. We initialized with an image of noise smoothed with a Gaussian filter of radius 1. The gradient was normalized, so the input image, X was updated according to\nXt+1 = Xt +0.01* V/|V]\nAfter training a simple neural network on MNIST with 3 convolutional layers, (8 8 8). maxpool2 - (8 4 4) - (8 3 3) - Dense128, which was done using ADAM Kingma & Ba (2014) and augmenting the images with 10% shifts and zooms, we reached an accuracy of 98.8%. We then added an even simpler module to the neural network, (4 8 8) - maxpool2 - (4 4 4) - (4 3 3) - Dense32. This module is trained on the same input as the original model but it is tied together with the output features of the original model, as illustrated in Fig.1 After. training the module, the combined network achieves 99.2% accuracy. The models were intention-. ally kept small, with the original model only having 8 filters per layer, and the module only having. 4 filters per layer.\nAs we can see in Fig.3| the module does not learn filters that merely duplicate the original network. As is common, the first layer learns typical edge and orientation detectors, but the module is more. sensitive to high-frequency diagonal components and details around the edge of the image. In the. second layer, we see that the module is sensitive to diagonal components near the boundary. Anc the third layer shows that the module has indeed concentrated its effort on detecting strokes near the. edge of the image. As we can see from inspecting Figure 3c while the original network concentrated. its efforts on the center of the images (as it should), the module was then able to focus more around. 
"}, {"section_index": "4", "section_name": "3.2 SMALL DATA", "section_text": "Although the modular approach can be used to extend and improve a network on its original task, its value comes from its ability to facilitate transfer learning. If a network has been trained on thousands or even millions of examples and hand-tuned for weeks or months, one would not want to throw away this valuable representational power by retraining the network with 100 examples from an out-of-domain dataset. Instead, the modular approach keeps the original, unaltered network, in addition to learning supplementary representations specific to the distribution of the new data.

This allows the modular approach to handle small datasets more robustly than naive fine-tuning. To demonstrate this, we trained a network on CIFAR-10 and used it to apply to CIFAR-100 for varying amounts of training data. The CIFAR-10 network was trained until it was 88.9% accurate, using the network in He et al. (2016) with 3 residual units, for a total of 28 layers.

Figure 3: After training a vanilla CNN on MNIST, images that maximally stimulate each filter are shown on the bottom rows. Images that maximally stimulate the auxiliary module network, trained on the same data to supplement the original network, are shown on the top. (a) first layer; (b) second layer; (c) third layer.

Figure 4: By explicitly preserving the original representation learned by the pre-trained net, the module is able to learn more robust features using fewer examples than naive fine-tuning.

We then compared two approaches. For the first approach, we simply fine-tuned the CIFAR-10 network using training data from the CIFAR-100 dataset and replacing the final softmax. For the second, we froze the original CIFAR-10 network and added an identical copy as a module, which was trained on the same batches of data as the first approach. That is, we have: Network 1 (fine-tuning the base network) and Network 2 (freezing the base network and fine-tuning a module). This doubles the number of weights in the second network, but Network 1 and Network 2 have an identical number of weights to be trained, and those weights have the same starting values. More formally, we present these two approaches in Equations 1 and 2 below.

y_ft = softmax(NN(x; w_0 = {C10}))                          (1)

y_mod = softmax([NN(x; w* = {C10}), NN(x; w_0 = {C10})])    (2)

where y_ft denotes predictions made by the fine-tuned network and y_mod denotes predictions made by our modular architecture. NN denotes the neural network without softmax activation trained on CIFAR-10, and w_0 is the initialization of the weights, which is learned by training on CIFAR-10, i.e. w_0 = {C10}. Note that in our modular architecture the pre-trained weights are locked, as denoted by w* = {C10} in Equation 2; i.e., grad_w NN(w*) = 0.
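Equation 2 amounts to a frozen copy and a trainable copy of the same network feeding one classifier. A minimal PyTorch sketch of this two-towers setup follows; the class name, `feat_dim`, and the linear head are assumptions standing in for whatever feature extractor is actually used.

```python
import copy
import torch
import torch.nn as nn

class TwoTowerModule(nn.Module):
    """Frozen pre-trained tower plus trainable module tower, joined at the softmax."""

    def __init__(self, pretrained, feat_dim, n_classes):
        super().__init__()
        self.module_net = copy.deepcopy(pretrained)   # NN(x; w0 = {C10}), trainable copy
        self.frozen = pretrained                      # NN(x; w* = {C10})
        for p in self.frozen.parameters():
            p.requires_grad = False                   # lock the original weights
        self.classifier = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, x):
        with torch.no_grad():
            base = self.frozen(x)
        extra = self.module_net(x)
        # softmax is applied implicitly via nn.CrossEntropyLoss during training
        return self.classifier(torch.cat([base, extra], dim=1))
```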
To train, we used the ADAM optimization algorithm (Kingma & Ba, 2014). We added an activity L2 regularization of 1e-6 to the module to help break degeneracy. We used batches of 200, where each batch contained two images per class. Each batch was iterated over five times before the next batch was used. This iteration simulates multiple epochs over small data. We recorded the performance on the test set after each batch, as shown in Fig. 4a.

We observe that for all amounts of training data, but particularly for small amounts, the modular approach outperforms traditional fine-tuning. Of course, we chose to make the module a complete copy of the original CIFAR-10 network. This ensured we could compare with the same number of weights, same initialization, same data, etc. Further research will certainly reveal more compact module networks that outperform our example.

Figure 5: Effect of training set size on fine-tuning versus modular architectures. Comparison of fine-tuning vs the stitched module approach. TFT stands for "Traditional Fine Tuning," and the number of layers fine-tuned is indicated. Notice that our modular approach outperforms fine-tuning for all amounts of training data. The modular approach's benefit over fine-tuning increases as the amount of available training data decreases.

"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "To investigate the effectiveness of modular networks for transfer learning, we explore a second example of transfer learning from CIFAR-10 in order to model CIFAR-100. As we were able to show above, a modular network is able to outperform traditional fine-tuning because it learns additional features that may complement those captured by the pre-trained model. However, there is no reason why a module needs to accept input only from the input layer, nor a reason why it needs to send its output directly to the softmax layer. Here, we describe stitch networks, where the modules are actually interwoven with the original network.

We believe that in modular networks, the untrained module learns representations that capture the difference from the original distribution of data to the distribution of data under the new task. Expanding upon this idea, instead of learning the shift in distributions only at the softmax layer as with our other modular networks, we integrate the signal from the learned modules much more tightly with the paired untrained modules by using a stitch network. Use of the stitch network allows the model to learn to correct the distribution difference after each transformation made by the learned module, as shown in Fig. 2.

The stitch network we explore is comprised of layer pairs between a single learned and unlearned module. The learned module is a five-layer convolutional neural network where the first two layers are 3x3 convolutional layers with 32 filters, followed by max pooling and two more 3x3 convolutions with 64 filters. The convolutions are followed by a fully connected layer with 512 outputs and finally a softmax for classification. This model is pre-trained on CIFAR-10, stripped of its softmax layer, has its weights locked, and is then used as the learned module of the stitch network. The untrained module is composed in a similar fashion, with four 3x3 convolutions with max pooling and a fully connected layer, each with 1/4 the number of outputs of the corresponding pre-trained layers. The outputs of each layer pair are concatenated and fed as the input to each succeeding layer of the untrained module. Both modules feed into the final softmax layer of the composite network, which then classifies over the new dataset. A sketch of this is shown in Fig. 2.
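A compact PyTorch sketch of the stitched pairing just described follows. The filter counts (32, 32, 64, 64 frozen; 8, 8, 16, 16 trainable; 512 and 128 fully connected outputs) follow the text, while the block structure, pooling placement, and 32x32 input size are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class StitchNet(nn.Module):
    """Frozen CIFAR-10 tower stitched with a 1/4-width trainable tower.

    After every frozen block, its output is concatenated with the thin tower's
    current features and fed to the next thin block, so the thin tower only
    has to model the shift between source and target distributions.
    """

    def __init__(self, n_classes):
        super().__init__()
        self.frozen = nn.ModuleList([conv_block(3, 32), conv_block(32, 32),
                                     conv_block(32, 64), conv_block(64, 64)])
        self.pool_after = [False, True, False, True]
        self.frozen_fc = nn.Linear(64 * 8 * 8, 512)
        for p in list(self.frozen.parameters()) + list(self.frozen_fc.parameters()):
            p.requires_grad = False                    # lock the learned module
        widths, prev = [8, 8, 16, 16], 3               # 1/4 of 32, 32, 64, 64
        self.thin = nn.ModuleList()
        for w, fw in zip(widths, [32, 32, 64, 64]):
            self.thin.append(conv_block(fw + prev, w))
            prev = w
        self.thin_fc = nn.Linear(16 * 8 * 8, 128)      # 1/4 of 512
        self.head = nn.Linear(512 + 128, n_classes)

    def forward(self, x):
        s, t = x, x
        for block, thin, pool in zip(self.frozen, self.thin, self.pool_after):
            s = block(s)
            t = thin(torch.cat([s, t], dim=1))         # stitch: frozen + thin features
            if pool:
                s, t = F.max_pool2d(s, 2), F.max_pool2d(t, 2)
        s = self.frozen_fc(s.flatten(1))
        t = self.thin_fc(t.flatten(1))
        return self.head(torch.cat([s, t], dim=1))
```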
Figure 6: Network architecture used for the Stanford Cars fine-tuned model.

Figure 7: Network architecture used for the Stanford Cars module model. Note, the ResNet used is identical to the one described in He et al. (2015).

We train our entire composite model on a randomly selected subset of ten CIFAR-100 classes. We compare the accuracy over the validation set of the selected classes against traditional fine-tuning using only the learned module, as well as against an uninitialized version of the learned module. We were additionally interested in comparing, across all models, the effect of limiting the amount of data available for training. We repeat the experiment with the same subset of classes, varying the amount of available training data such that the networks are shown only a fraction of each class for training. Note that there are 500 available training examples per class in CIFAR-100.

We find that by using the stitch network, we are able to match or improve upon classification results (Fig. 5) obtained using traditional fine-tuning over a pre-trained model. We also outperform training from scratch, regardless of the amount of training data used. We note that we find significant gains over traditional methods as the number of available training examples drops below 200 examples per class.

"}, {"section_index": "7", "section_name": "4.2 STANFORD CARS DATA SET", "section_text": "The Stanford Cars data set (Krause et al., 2013), which features 16,185 images of 196 classes of cars, is an example of a data set for fine-grained categorization. Rather than training a classifier to distinguish between fundamentally different objects like horses and planes, as required in the Large Scale Visual Recognition Challenge (Russakovsky et al., 2015), fine-grained categorization requires the classifier to learn subtle differences between variations of the same entity. For example, a classifier trained on the Stanford Cars data set would have to learn distinguishing features between a BMW X6 SUV from 2012 and an Isuzu Ascender SUV from 2008.
In this research, two models are trained on the Stanford Cars data set. Both models utilize a transfer learning approach by leveraging the non-fully-connected output of the VGG16 model (Simonyan & Zisserman, 2014). The "fine-tuned" model passes the VGG16 features to a fully connected layer of length 512 followed by a softmax layer of length 196, as seen in Fig. 6. Gradient descent via RMSPROP is used to train the dense layers. The "module" model merges the fixed VGG16 features with a ResNet (He et al., 2015) model, whose output is then fed to two consecutive dense layers of length 256 capped by a softmax layer of length 196. The module model architecture is shown in Fig. 7. Again, RMSPROP is used to train the ResNet and post-merge dense layer weights, but the VGG features are unchanged.

As seen in Fig. 4b, after 50 epochs the module model appears to significantly outperform the fine-tuned model in validation accuracy. However, it should be noted that while the module model carries 19,537,990 trainable parameters, the fine-tuned model has only 12,946,116 parameters. Furthermore, no hyperparameter optimization is performed on either model.
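A sketch of the module model in Fig. 7, using torchvision's VGG16 and ResNet-18 as stand-ins for the feature extractors; the text does not specify the ResNet depth or the pooling on the VGG features, so those choices are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class CarsModuleModel(nn.Module):
    """Fixed VGG16 features merged with a trainable ResNet, as in Fig. 7."""

    def __init__(self, n_classes=196):
        super().__init__()
        vgg = models.vgg16(pretrained=True)
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        for p in self.vgg_features.parameters():
            p.requires_grad = False                 # VGG features stay unchanged
        resnet = models.resnet18(pretrained=False)  # depth is an assumption
        resnet.fc = nn.Identity()                   # expose the 512-d features
        self.module_net = resnet
        self.head = nn.Sequential(nn.Linear(512 + 512, 256), nn.ReLU(),
                                  nn.Linear(256, 256), nn.ReLU(),
                                  nn.Linear(256, n_classes))

    def forward(self, x):
        with torch.no_grad():
            v = self.vgg_features(x)
        r = self.module_net(x)
        return self.head(torch.cat([v, r], dim=1))

# Per the text, only the ResNet and head are trained, with RMSprop:
# optimizer = torch.optim.RMSprop(p for p in model.parameters() if p.requires_grad)
```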
We further investigate the effects of our modular network approach by applying this method to a different modeling problem: text classification. Similar to image data, text is an unstructured data type that often exhibits long-term and interconnected dependencies that are difficult to model with simpler classifiers. Whereas in images neighboring pixels may represent semantically related concepts or objects, in text words may exhibit long-term semantic or syntactic dependencies that can be modeled sequentially. These characteristics make text classification particularly well suited to recurrent neural networks such as long short-term memory (LSTM) networks, but these learning methods typically require a great deal of data to be learned efficiently and to avoid overfitting.

To test our methodology, we evaluate a modular recurrent network against two individual recurrent neural networks on the IMDB sentiment dataset. Previous work has shown deep learning methods to be effective at sentiment classification on this dataset (Maas et al., 2011); we add to this past work by presenting an analysis that demonstrates the effectiveness of modular networks in the case of extremely small training sets. To this end, we sample only 500 training examples from the original 25,000 available in the full training set, and evaluate on the full 25,000 validation examples. We use the same 500 training examples for each model evaluated in our experiments for consistency, and report accuracy for each model on the full validation set.

We evaluate three models in our text-classification experiments, two of which are individual recurrent networks and the third of which is our modular recurrent network. The first model consists of three layers: an initial layer that projects sequences of words into an embedding space, a second LSTM layer with 32 units, and a final sigmoid layer for computing the probability of the text belonging to the positive class. Our second model is identical to the first except that we fix the weights of the embedding layer using pre-trained GloVe word vectors. In particular, we use 100-dimensional vectors computed from a 2014 version of Wikipedia.

Finally, we detail our modular network, which leverages both individual recurrent networks described above. To construct our modular network, we take the embedding and LSTM layers from our individual networks and concatenate the output of both LSTM layers into a single tensor layer in the middle of our modular network. Additionally, we modify the output of each of these component LSTM layers by forcing each to output a weight matrix that tracks the state of the LSTM layer across all timesteps. In this way, we seek to fully leverage the sequential dependencies learned by this layer, and this method outperforms the simpler alternative of outputting only the final state of each LSTM layer. We then feed this concatenated layer to a gated recurrent unit (GRU) layer with a sigmoid activation function for calculation of class probabilities. We experimented with an LSTM and with densely connected layers after the tensor concatenation layer, but found the best performance with the GRU. All models were optimized with the ADAM algorithm and trained for 15 epochs. An outline of this architecture can be seen in Figure 8.

Figure 8: Diagram of the architecture of our modular recurrent text classification network. Dimensionality for the embedding layers and the number of units for all other layers are given in the boxes denoting those layers.

Here, we report results for our classification experiments with the three networks described above. We see an accuracy of 61.9% for our first model, which is trained directly on the data without any pre-training. This is significantly lower than previously reported results; however, we are training on only 2% of the available data to test our method's application to small training sets. We see slightly better accuracy (64.9%) from our second model, initialized with GloVe vectors. This seems to indicate that despite being trained on the more formally written language of Wikipedia, these vectors can still boost performance on a task modeling text that is inherently subjective and opinion-based. Finally, we see an accuracy of 69.6% from our modular network, an increase of almost 5% accuracy over the next best performing model. Because the weight initializations of recurrent networks can greatly affect model performance, we ran the classification experiments with our modular network 10 times, and report the average accuracy across these 10 runs. As can be seen here, our modular approach improves on the best performing individual network, suggesting that this approach is useful in the domain of text classification, and that our modular approach overcomes the poor performance shown by one of its component models.
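A sketch of the modular recurrent network in PyTorch; the vocabulary handling is assumed, and the frozen GloVe embedding branch stands in for the pre-trained component described above.

```python
import torch
import torch.nn as nn

class ModularSentimentNet(nn.Module):
    """Two embedding+LSTM branches (one with frozen GloVe embeddings),
    concatenated per timestep, then a GRU producing the class probability."""

    def __init__(self, vocab_size, glove_weights, emb_dim=100, lstm_units=32):
        super().__init__()
        # Branch 1: trainable embedding + LSTM (the "learned" model).
        self.emb_a = nn.Embedding(vocab_size, emb_dim)
        self.lstm_a = nn.LSTM(emb_dim, lstm_units, batch_first=True)
        # Branch 2: embedding fixed to pre-trained GloVe vectors.
        self.emb_b = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        self.lstm_b = nn.LSTM(emb_dim, lstm_units, batch_first=True)
        # Both branches contribute their full state sequences, not just the last state.
        self.gru = nn.GRU(2 * lstm_units, 1, batch_first=True)

    def forward(self, tokens):
        seq_a, _ = self.lstm_a(self.emb_a(tokens))   # (B, T, 32)
        seq_b, _ = self.lstm_b(self.emb_b(tokens))   # (B, T, 32)
        merged = torch.cat([seq_a, seq_b], dim=2)    # (B, T, 64)
        _, h = self.gru(merged)                      # final GRU state
        return torch.sigmoid(h[-1].squeeze(-1))      # P(positive class)
```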
MODEL          BM     PM     MM
ACCURACY (%)   61.9   64.9   69.6

Table 1: Accuracy results for text classification experiments, using only 500 training examples. Results are shown for the baseline model (BM), pre-trained (GloVe) model (PM), and modular model (MM).

"}, {"section_index": "8", "section_name": "5 CONCLUSIONS", "section_text": "We have presented a neuro-modular approach to transfer learning. By mixing pre-trained neural networks (that have fixed weights) with networks to be trained on the specific domain data, we are able to learn the shift in distributions between data sets. As we have shown, the new modules often learn features that complement the features previously learned by the pre-trained network. We have shown that our approach outperforms traditional fine-tuning, particularly when the amount of training data is small (only tens of examples per class). Further research will explore more efficient architectures and training strategies, but we have demonstrated that our approach works well for MNIST, the CIFARs, the Stanford Cars dataset, and IMDB sentiment. Thus, the modular approach will be a valuable strategy when one has a large pre-trained network available but only a small amount of training data in the transfer task.

This work was supported by the US Government.

"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015.

Mingming Gong, Kun Zhang, Tongliang Liu, Dacheng Tao, Clark Glymour, and Bernhard Scholkopf. Domain adaptation with conditional transferable components. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2839-2848, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 554-561, 2013.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In Proceedings of The 32nd International Conference on Machine Learning, pp. 97-105, 2015.

Mingsheng Long, Jianmin Wang, and Michael I. Jordan. Deep transfer learning with joint adaptation networks. CoRR, abs/1605.06636, 2016. URL http://arxiv.org/abs/1605.06636.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015.
Gregoire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian J Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, et al. Unsupervised and transfer learning challenge: a deep learning approach. ICML Unsupervised and Transfer Learning, 27:97-110, 2012.

Guglielmo Montone, J Kevin O'Regan, and Alexander V Terekhov. The usefulness of past knowledge when learning a new task in deep neural networks. 2015.

Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210, 2011.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.

Xuezhi Wang and Jeff Schneider. Flexible transfer learning under support and model shift. In Advances in Neural Information Processing Systems, pp. 1898-1906, 2014.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818-833. Springer, 2014.

Kun Zhang, Bernhard Scholkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In ICML (3), pp. 819-827, 2013."}]
S1HcOI5le
[{"section_index": "0", "section_name": "OMG: ORTHOGONAL METHOD OF GROUPING WITH APPLICATION OF K-SHOT LEARNING", "section_text": "Haoqi Fan Yu Zhang Kris M. Kitani The Robotics Institute, School of Computer Science Carnegie Mellon University Pittsburgh. P.A.. 15213\nTraining a classifier with only a few examples remains a significant barrier when. using neural networks with a large number of parameters. Though various spe-. cialized network architectures have been proposed for these k-shot learning tasks. to avoid overfitting, a question remains: is there a generalizable framework for. the k-shot learning problem that can leverage existing deep models as well as. avoid model overfitting? In this paper, we proposed a generalizable k-shot learn-. ing framework that can be used on any pre-trained network, by grouping network. parameters to produce a low-dimensional representation of the parameter space. The grouping of the parameters is based on an orthogonal decomposition of the pa. rameter space. To avoid overfitting, groups of parameters will be updated together. during the k-shot training process. Furthermore, this framework can be integrated. with any existing popular deep neural networks such as VGG, GoogleNet, ResNet,. without any changes in the original network structure or any sacrifices in perfor mance. We evaluate our framework on a wide range of k-shot learning tasks and. show state-of-the-art performance."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "As many deep learning network architectures have been proposed, several key models have emerged as default models for many image classification tasks. In particular, deep neural network architec- tures such as the VGG network by Karen & Zisserman|(2014), Inception by Christian et al.(2015) and ResNets byKaiming et al.(2015), have proven their superb performance on image classification when using datasets such as ImageNet (Alex et al.(2012)) and MS COCO (Lin et al.[(2014)). With enough data and fine tuning, these 'go to' models have been shown to be successful for many visual classification tasks.\nHowever, there is a problem when one does not have access to a large labeled training dataset t fine-tune these models. This task of training a classifier using only a small k number of example is often referred to as k-shot learning and has problems when dealing with high capacity models such as deep convolutional neural networks. The problem of training with only a small number o training examples is that it often leads to overfitting, where the model essentially memories those. data points without gaining the ability to generalize to new instances..\nCurrent state-of-the-art methods (Oriol et al.(2016) Adam et al.(2016)) apply deep learning tech niques by using specialized network architectures for k-shot learning to avoid overfitting. While this is a reasonable strategy for proposing diverse kinds of new frameworks for k-shot learning tasks, i is hard for different k-shot learning methods to borrow structures from each other since they are all highly customized networks.\nWe propose a method called the Orthogonal Method of Grouping (OMG) to facilitate a better k-sho1 learning process. OMG ensures that similar (near duplicate) features in the classifier will be grouped. and modified together during the training process. This process of grouping features naturally in duces dimension reduction of the parameter space and imposes a form of subspace regularizatior during training. 
We implement OMG by adding a new loss layer that essentially clusters (groups) parameters by slightly perturbing them according to an orthogonality constraint. This para-loss layer only augments the network and does not require any changes to the original architecture. Once the"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "features have been grouped, the network can be used to learn from only a few examples, and parameter updates are propagated to parameter groups instead of individual parameters.

Our contribution is threefold: (1) we propose a general k-shot learning approach which does not rely on any task-specific prior knowledge; (2) our approach can be added to any network without changing the original network structure; and (3) the proposed method provides an effective technique for decomposing the parameter space of high-capacity classifiers.

Domain adaptation is another topic related to our work. It aims at learning, from a source data distribution, a well-performing model on a different (but related) target data distribution. Many successful works such as Boqing et al. (2012) and Basura et al. (2013) seek an embedding of the transformation from the source to the target domain that minimizes domain shift. Daumé III (Daumé III (2009)) is another simple feature-replication method that augments feature vectors with a source component, a target component, and a shared component; an SVM is then trained on the augmented source and target data. These methods have proven effective for many tasks, but none of them can be fed into an end-to-end learning framework. One end-to-end learning framework is the supervised adaptation method proposed by Judy et al. (2013a), which trains different deep networks for the source and target domains and concatenates the high-level features as the final embedding. The limitation, however, is that the high-level features trained on one domain cannot borrow knowledge from the other domain. In contrast, our method is an end-to-end method learning on target data without training additional networks, so OMG can contain knowledge from both the source and target domains; that is to say, the source and target domains can borrow knowledge from each other.

Our OMG model can decompose an existing feature space into a compact feature space, which is closely related to dimension reduction. Many classical works such as Aapo (1999), Jolliffe (1986), and Najim et al. (2011) have studied this extensively in the past. However, these works cannot be integrated into current end-to-end deep learning frameworks. As far as we know, limited work has visited the topic of end-to-end dimension reduction. Hinton & Salakhutdinov (2006) used an auto-encoder as a dimension reduction method, proposing a stacked restricted Boltzmann machine (RBM) to embed images into a lower dimension. However, this work is also hard to integrate into existing network structures or to use with pre-trained models, since it has a different architecture than CNNs and requires training from scratch. Our OMG is an end-to-end approach that can be fed into any architecture smoothly for both training and fine-tuning. Furthermore, OMG can reduce the output to an arbitrary dimension without changing the architecture or sacrificing performance.

One-shot learning is an interesting topic first presented in Li et al. (2006). The key idea of one-shot learning is to make a prediction on a test instance after observing only a few examples of the one-shot classes. Li et al. 
(2006) solved this problem by adopting a Variational Bayesian approach where object categories are represented by probabilistic models. More recently, researchers have revisited one-shot learning with highly customized models: Lake et al. (2011) address one-shot learning for character recognition with a method called Hierarchical Bayesian Program Learning (HBPL). It models the process of drawing characters generatively in order to decompose the image into small pieces; the goal of HBPL is to determine a structural explanation for the observed pixels. However, inference under HBPL is difficult since the joint parameter space is very large, which leads to an intractable integration problem. Gregory (2015) also presented a strategy for performing one-shot classification by learning a deep convolutional siamese neural network for verification. Oriol et al. (2016) proposed Matching Nets, utilizing external memory with an attention kernel. Adam et al. (2016) proposed memory-augmented neural networks with an external content-based memory. These works achieve state-of-the-art performance on specific datasets. One important issue, however, is that the works mentioned above propose highly customized networks whose structures are hard to borrow from one another. Conversely, OMG is a general one-shot learning model that can fit into any existing network, so its structure can be borrowed by any other work.

Figure 1: One illustration of the parameter basis in the VGG net. Visualization of the first convolutional layer from the VGG (Karen & Zisserman (2014)) network pre-trained on ImageNet (Alex et al. (2012)). Filters (parameter basis) with the same color of bounding box are correlated. Filters in the same color are functionally similar, which means they will have similar activations given the same input."}, {"section_index": "3", "section_name": "3 APPROACH", "section_text": "We observe that the parameters of each layer of a deep network are always correlated. This is best illustrated by Fig. 1. Correlated parameters result in correlated outputs with a lower capacity. When learning a new classifier on top of these correlated outputs, it is easy to end up with an instance-specific classifier if only a small amount of data is seen, and in most cases an instance-specific classifier is not what we want. If the correlation among the outputs is removed, i.e., the outputs become orthogonal, then a better classifier can be obtained from only a few examples. We therefore propose the Orthogonal Method of Grouping (OMG) to remove the correlation of the outputs by decomposing the correlated parameters into orthogonal parameters."}, {"section_index": "4", "section_name": "3.1 ORTHOGONAL METHOD OF GROUPING", "section_text": "In the first step of OMG (Figure 2 (b)), it finds correlated parameters and groups them in the same subspace. Since correlated parameters result in correlated outputs, we can find the relation between parameters by analyzing the outputs (e.g., the relation between convolutional kernels can be found by analyzing the activations of the convolutional layers). In the second step of OMG (Figure 2 (c)), each parameter is slightly perturbed such that the orthogonality between the grouped subspaces is maximized. The groups are represented as an Orthogonal Group Mapping in which each parameter is mapped to its corresponding group. The mapping is learned by optimizing the orthogonality constraint on both θ_w and θ_map, where θ_w denotes the parameters of the neural network and θ_map denotes the parameters of the Orthogonal Group Mapping.
Figure 2: The initial parameters are shown in (a), where parameters are correlated. (b) illustrates our grouping algorithm, which assigns each parameter to a corresponding group with a one-to-one mapping. (c) illustrates how our algorithm casts a constraint to force the parameters of each group to be orthogonal to those of the other groups.

Figure 3: This figure illustrates the framework of the Orthogonal Method of Grouping. n_1 ... n_M represent the M different neural units (basis vectors). The dotted arrows represent the one-to-one mapping from each neural unit to its corresponding orthogonal group. Each orthogonal group g_i is represented as a red square. (a) illustrates a special case of the Orthogonal Method of Grouping with an identity mapping; this special case can represent the connection between any normal layers, which means every normal layer is a special case of OMG. (b) is the general case of OMG, learning the one-to-one mapping from neural units to their corresponding orthogonal groups.

The Orthogonal Group Mapping is first introduced in Sec. 3.1.1, followed by the loss function and orthogonality constraint in Sec. 3.1.2. The optimization algorithm is then introduced in Sec. 3.1.3. Finally, the k-shot learning method learned on orthogonally grouped parameters is introduced in Sec. 3.2."}, {"section_index": "5", "section_name": "3.1.1 ORTHOGONAL GROUP MAPPING", "section_text": "Orthogonal Group Mapping (OGM) maps neural units to their corresponding groups. Let the neural units in a layer of the network be denoted as a set n = {n_1, n_2, ..., n_L}, where L is the number of neural units in that layer. (For example, a filter of a convolutional layer can be represented as a unit.) The Orthogonal Group Mapping can be represented as a set of orthogonal groups g, where g_k is the k-th orthogonal group in g. Each orthogonal group g_k contains its corresponding units, g_k = {n_i, ..., n_j}. Since OGM is a map, each unit is mapped to exactly one group."}, {"section_index": "6", "section_name": "3.1.2 PARA-LOSS", "section_text": "We cast constraints to force the orthogonal groups to be orthogonal to each other. The constraint is achieved by a loss function with two terms: an intra-group loss L_intra and an inter-group loss L_inter. L_intra minimizes the divergence within each group, and L_inter forces the basis of each orthogonal group to be orthogonal to that of every other group.

When a mini-batch of input data of size B is propagated through the network, the outputs (activations) of the units n over the mini-batch can be denoted as a matrix A ∈ R^{B×L}, where each column A_l ∈ R^{B×1} denotes the output of the l-th unit over the entire mini-batch. Intuitively, if the outputs A_i and A_j of two units n_i, n_j are similar, we would like the two units to belong to the same orthogonal group g_k. Conversely, if the outputs A_i and A_j of two units n_i, n_j are different, they should belong to different orthogonal groups.

We define the intra-group loss as the sum of squared distances within each orthogonal group:

L_intra = Σ_k Σ_{i,j ∈ g_k} ||A_i − A_j||²

The time complexity is O(L_intra) = K · k_max · k_max, where k_max = max_k |g_k|. This can be computed efficiently when k_max is small, but we still want to reduce O(L_intra), since the computational cost of this loss can be significant when there are many units in a single layer. 
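As a concrete illustration (not part of the paper), the exact pairwise form above can be sketched in a few lines of numpy; the activation matrix `A` and the per-group index lists in `groups` are hypothetical names, and the double loop makes the O(K · k_max²) cost visible. The cheaper anchor-based approximation is introduced next.

```python
import numpy as np

def intra_group_loss(A, groups):
    """Sketch of the exact intra-group loss of Sec. 3.1.2: for every group,
    sum the squared distances between the activation columns of all unit pairs.
    A      : (B, L) activations of L units over a mini-batch of size B.
    groups : list of K index lists; groups[k] holds the unit indices of g_k.
    """
    loss = 0.0
    for idx in groups:
        G = A[:, idx]                                  # (B, |g_k|) group activations
        for i in range(G.shape[1]):                    # O(k_max^2) pairs per group
            for j in range(G.shape[1]):
                loss += np.sum((G[:, i] - G[:, j]) ** 2)
    return loss
```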
We can use a lower bound to approximate the distance:

L_intra = Σ_k Σ_{i ∈ g_k} ||A_i − A_anchor||²

where anchor is an index randomly selected from the k-th orthogonal group. This approximation reduces the time complexity to O(L_intra) = K · k_max ≈ L, which is linear.

In addition to quantifying the intra-group similarity (the tightness of each cluster), we also want to measure the separation between the groups. We can define an inter-group loss in terms of an orthogonality measure:

L_inter = Σ_{i≠j} ||M_i^T M_j||_F²

where the matrix M_i ∈ R^{B×|g_i|} represents the output of all the units in orthogonal group g_i over the mini-batch, and |g_i| is the number of units in orthogonal group g_i. That is to say, M_i = [A_j, ..., A_k], where n_j, n_k ∈ g_i. Here || · ||_F² is the squared Frobenius norm. This term is minimized when the feature vectors are exactly orthogonal.

The entire para-loss function is denoted as:

L = α Σ_k Σ_{i ∈ g_k} ||A_i − A_anchor||² + β Σ_{i≠j} ||M_i^T M_j||_F²

It is easy to see that L does not contain any ground-truth label term, so OMG is an unsupervised method that does not require ground truth. However, the method works smoothly with other losses; for example, OMG can be trained together with a softmax loss L_softmax for a supervised learning task."}, {"section_index": "7", "section_name": "3.1.3 OPTIMIZATION", "section_text": "We propose an optimization method for OMG that optimizes the constraint over both the parameters of the neural network θ_w and the parameters of the Orthogonal Group Mapping θ_map. We use a two-step approach to optimize argmin_{θ_map, θ_w} L by optimizing argmin_{θ_w} L and argmin_{θ_map} L iteratively. In the first step, we optimize the weights of the network θ_w with SGD. In the second step, we optimize the weights of the mapping θ_map with Algorithm 1.

First step: we use standard SGD to optimize argmin_{θ_w} L:

θ_w := θ_w − η ∇L(θ_w) = θ_w − η Σ_i ∇L_i(θ_w)

where η is the step size and L_i(θ_w) is the value of the loss function at the i-th iteration.

Second step: we propose Algorithm 1 to optimize argmin_{θ_map} L.

Algorithm 1: Optimization Algorithm
  Initialization: randomly initialize the mapping θ_map
  Given a batch of the training set:
  for each iteration do
    for each group g_k, k ∈ [1, K] do
      Find the most violated unit n_l in g_k by
        l = argmax_{l ∈ g_k} Σ_{m ∈ g_k} ||A_l − A_m||²
      Reassign the unit n_l from g_k to the new group g_k' where
        k' = argmin_{k'} Σ_{m ∈ g_k'} ||A_l − A_m||²
    end
  end

It is easy to see that optimizing argmin_{θ_map} L does not change any parameter of the original deep network θ_w. That is to say, given any pre-trained network, we can switch the network to an orthogonal grouping network without changing its original parameters."}, {"section_index": "8", "section_name": "3.2 DIMENSION REDUCTION AND k-SHOT LEARNING", "section_text": "Given an Orthogonal Method of Grouping with corresponding θ_w and θ_map, we can assign the L outputs to K groups, where the outputs in each group are correlated. We assign additional weights w_add ∈ R^K and biases b_add ∈ R^K, one per group (we omit the bias for simplicity); each group g_i shares the same weight w_add_i. When doing k-shot learning on a few samples, the original weights θ_w are fixed and only w_add is updated. Viewed another way, this is an end-to-end dimension reduction method: it can reduce the original output dimension L to an arbitrary dimension K, and each dimension of the resulting K-dimensional output is largely orthogonal to the others. 
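Pulling Secs. 3.1.3 and 3.2 together, here is a minimal numpy sketch of one reassignment pass of Algorithm 1 and of the group-tied k-shot readout. `A`, `assign`, and the pooling choice are assumptions for illustration; in particular, the paper does not spell out how the shared per-group weight is applied, so summing within a group (which a shared weight factors out of) is one plausible reading, not the paper's definitive implementation.

```python
import numpy as np

def reassign_step(A, assign, K):
    """One pass of Algorithm 1: for every group, move its most violated
    unit to the group whose members it matches best (squared distance)."""
    for k in range(K):
        members = np.where(assign == k)[0]
        if len(members) < 2:
            continue
        # most violated unit: largest total distance to its own group
        viol = [np.sum((A[:, members] - A[:, [l]]) ** 2) for l in members]
        l = members[int(np.argmax(viol))]
        # reassign n_l to the group it fits best (skip empty groups)
        cost = [np.sum((A[:, np.where(assign == g)[0]] - A[:, [l]]) ** 2)
                if np.any(assign == g) else np.inf for g in range(K)]
        assign[l] = int(np.argmin(cost))
    return assign

def grouped_readout(A, assign, w_add, K):
    """Sec. 3.2 k-shot readout sketch: L frozen unit outputs are reduced to
    K group outputs; only w_add (one shared weight per group) is trained."""
    out = np.zeros((A.shape[0], K))
    for k in range(K):
        out[:, k] = w_add[k] * A[:, assign == k].sum(axis=1)
    return out                                   # (B, K) reduced features
```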
"}, {"section_index": "9", "section_name": "4 DATASET", "section_text": "The Orthogonal Method of Grouping is evaluated on three standard datasets: ImageNet, MNIST, and the Office dataset. ImageNet (Alex et al. (2012)) is the largest publicly available dataset with image category labels. The MNIST (LeCun et al. (1998)) dataset of handwritten digits contains a training set of 60K examples and a test set of 10K examples. The Office (Saenko et al. (2010)) dataset is a collection of images from three distinct domains: Amazon, DSLR, and Webcam. The dataset contains objects of 31 categories commonly spotted in an office, such as keyboards, file cabinets, and laptops. Among the 31 categories, 16 overlap with the categories present in the 1000-category ImageNet classification task. This dataset was first used by Judy et al. (2013c) for the k-shot adaptation task, and we follow the same data split as their work."}, {"section_index": "10", "section_name": "5 EVALUATION", "section_text": "We evaluate OMG on three different tasks: training from scratch, finetuning, and k-shot learning. We report the performance for training from scratch on MNIST to show that OMG can improve performance by casting the orthogonality constraint. We then smoothly integrate OMG into standard pre-trained networks to show that OMG can successfully enhance the orthogonality among groups of parameters in arbitrary neural networks, resulting in more discriminative parameters. For k-shot learning tasks, we report our results on the MNIST and Office datasets (Saenko et al. (2010)). Experiments show that our learned compact orthogonal parameters facilitate learning a classifier on limited data.

In the following sections, experiments for training from scratch are reported in Sec. 5.1, experiments for finetuning are reported in Sec. 5.2, and the k-shot learning experiments are reported in Sec. 5.3."}, {"section_index": "11", "section_name": "5.1 TRAINING FROM SCRATCH", "section_text": "We show that OMG facilitates the performance of a neural network during training. We trained a standard convolutional neural network (Simard et al. (2003), which reported achieving a 1.19% error rate) on the MNIST dataset as a baseline. For the OMG model, we report the difference of accuracies under different α, β, and group sizes. The difference of accuracy here denotes the difference between the baseline's accuracy and the proposed model's accuracy; for example, if the proposed model has 98% accuracy and the baseline 97%, then the difference of accuracy is 1. OMG is used to train every convolutional and fully connected layer in the baseline convolutional network rather than one specific layer.

We train OMG from scratch with different settings of α and β. We set α or β to 0, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1 separately, while keeping the other hyperparameter at 0. The group size is set to half of the number of neural units when evaluating the effectiveness of α and β. We report the difference of accuracy in Table 1 and Fig. 4. It is easy to see that when α and β ∈ (0, 1e-3), OMG can boost the performance of the convolutional neural network. We find that when the value of the hyperparameters is around 1e-3, the gradient generated from L is about 1-5% of the gradient from L_softmax, which is a reasonable ratio (the hyperparameter of L2 normalization is typically around 1-5%).
When the hyperparameters are too large, the constraint will be too strong to learn. the parameters properly. When the hyperparameters are in ideal range, then the OMG is casting. the orthogonal constraint on the network and force the network to learn more discriminative param eters. This shows the OMG is working smoothly with normal neural networks and able boosting. its performance. For the following experiments, we set the to 5e-5 and to 1e-4 if we do not. specific mentioned. Theoretically, the Lintra would force the filters in each group to be similar, sc. intuitively, it would jeopardize the performance of the original network. However, as we find in Fig. 4] Lintra can actually boost the performance. This is because practically the Lintra is being used as. a regularization term.\nWe report the effect of group size as shown in Fig.5 In Fig.5when the group size is set as the same as the neural unit size, the OMG is not really grouped. But the network still has a better performance since the OMG enforces the network to learn more discriminative parameters. The best performance is achieved when the group size is around half of the neural unit size. If the group size is too small, it will force all the neural units to learn the same parameters, which would jeopardize the performance of the network. When the group is 1/64 of the neural unit size, it achieves the worst performance For the following experiments, if we do not mention specifically, we set the group size as half of the neural unit size.\nAccuracy Difference Accuracy Difference 0.5 0.4 0.0 0.2 0.0 0.5 birree bire -0.2 1.0 0.4 1.5 0.6 2.0 0.8 2.5 1.0 0.0 1e-6 1e-5 1e-4 1e-3 1e-2 le-1 0.0 1e-6 1e-5 1e-4 1e-3 1e-2 le-1 alpha beta\nFigure 4: The difference of accuracy between proposed model and baseline as functions of and 3. The dashed line in each charts is the baseline performance. As we can see from the charts when alpha and beta are in the range of [1e-6, 1e-4] and [1e-6, 1e-3], the performance of OMG outperforms the baseline.\nAccuracy Difference 10 0 10 Pinne -20 30 40 -50 1 1/2 1/4 1/8 1/16 1/32 1/64 group size / neural unit size\nAccuracy Difference 10 0 10 bireee 20 30 40 -50 1 1/2 1/4 1/8 1/16 1/32 1/64 group size / neural unit size\nFigure 5: The difference of accuracy between our proposed model and the baseline as functions of a ratio between group size and neural unit size. From the chart, when the number of groups is around [1, 4], the proposed model outperforms the baseline\nAlthough the OMG can work smoothly with different types of layers as convolutional layer, fully. connected layer, and etc, we only visualize the orthogonal groups of first convolutional layer in Fig. 6|since it is the easiest to observe. It is hard to visualize the rest of convolutional layers since the. filter size is smaller. And it is also hard to visualize the fully connect layer since we can not observe\nFigure 6: The Visualization of the orthogonal grouped filter maps in the first convolutional layer. The filters in each blue block belong to the same group. It is easy to see that filters in each group share the similar appearance, and the filters from different groups are different in terms of appearance.\nclear pattern directly. For a better visualization, we choose the convolutional neural network witl. 11 11 kernels. Filters in the same blue bounding boxes belong to the same group. We can se filters in each different groups are highly correlated to each other, and the filters in the differen. 
group are visually very different..\nTable 1: The effect of different hyperparameters a and"}, {"section_index": "12", "section_name": "5.2 FINETUNING", "section_text": "In this section, we prove that OMG forces deep neural network learn more discriminative features and the OMG works smoothly with pre-train neural networks. We finetune existing pre-trainec networks on ImageNet with the help of OMG. The original network is fully trained on ImageNe dataset, so directly finetune on the same dataset should not change any thing significantly. Evei finetune on different datasets, empirically the parameters in the very first layers are not going t change. However, we observe the significant changing after finetuning on the same dataset for only 5 epoches.\nWe choose to visualize VGG-F (Chatfield et al.(2014) net since it has the largest kernel size amon. the VGG zoo (11 11 rather than 7 7). We visualize the grouped filters in the first convolutio. layer in Fig. 7 It is bacause the same reason, we only visualize the first convoutional layer Th. kernels in the same vertical line belong to the same group. We have 20 groups and each group ha. the number of kernels from 1 to 6. It is easy to see that the filters within each group share simila appearance (highly correlated), and the filters from different groups have divergent appearances. Fo. example, the second and the 8th. - 13th, they are all white and black edges with the same directior the kernels in the 6th. group are all like gaussian kernels with a little jitter in location..\nIn order to prove that the OMG can help to learn the more discriminative parameters, we visualize the filters after finetuning in Fig. 8] The filters on the left have strong pattern before finetune, and they do not change much after finetuning. For the filters on the right, the filters do not have the strong patterns, but after finetuning with OMG, the pattern of the filters become more distinct with strongly colorful textures. Our interpretation is, our OMG assigns additional orthogonal constraint to all the filters, force them to learn more discriminative parameters. The filters with strong patterns are not changed too much because they are originally orthogonal to other filters. But for the filters without strong patterns, the orthogonal constraint are highly effective. As a result, the OMG helps the VGG network pretrained on ImageNet to learn the new discriminative paramters during finetuning on the same dataset."}, {"section_index": "13", "section_name": "5.3 K-SHOT LEARNING", "section_text": "Difference of Accuracy 0 1e -6 1e-5 1e-4 1e-3 1e-2 1e Q 0 0.01 0.12 -0.12 -1.74 -1.95 -2.06 B 0 0.01 0.07 0.23 -0.22 -0.68 -0.9\nFigure 8: Visualization of the filter maps from the first convolutional layer. 10 filters are selected from the original filter map to show what have the OMG learned. The first column shows the filters from the original VGG-F. The second column is the corresponding filters after finetune with OMG. The filters on the left originally have the strong patterns, and they are mainly unchanged after finetuning. The right ones do not have strong pattern originally, and the pattern becomes more distinct after finetune with OMG. All the filters are normalized to the same magnitude before visualization."}, {"section_index": "14", "section_name": "5.3.1 K-SHOT ON MNIST", "section_text": "We perform k-shot learning on MNIST dataset. The performances are evaluated on a 10-way classi. fication where each class is provided with 1, 5 training examples. 
For the MNIST one-shot learning task, we split the data into a pre-knowledge set and a one-shot learning set with a ratio of 1:9. The models we compare against are k-Nearest Neighbors (K-NN), Support Vector Machines (SVM), a traditional Convolutional Neural Network (CNN), Deep Boltzmann Machines (DBM), and the compositional patch model (CPM). The CPM (Alex & Yuille (2015)) is a model designed for learning a compact dictionary of image patches representing meaningful components of an object. The performance is evaluated on 10-way classification where each class is provided with 1 and 5 training examples, to show the growth in accuracy. For a given run, each model is given a set of hand-written digits picked at random from each class of the one-shot learning set. For the CNN, we use a structure with four convolutional layers and two pooling layers. The DBM contains two hidden layers with 1000 units each. The OMG model learns the grouping information on the pre-knowledge set without using ground truth, and is then trained on the samples from the one-shot learning set, where the group sizes are set by grid search. The performance is reported in Table 2. OMG performs better than the baselines, and especially better than the previous CNN method, because the huge parameter space of the original CNN is hard to optimize with only a few samples. With the help of OMG, the parameter space is significantly reduced: OMG can learn groups of filters representing shared common parts, where each part holds immense amounts of information on how a visual concept is constructed, and it uses these patches as features to learn a better classifier with limited data.

Figure 7: Visualization of the orthogonally grouped filter maps in the first convolutional layer. Filters in each row belong to the same group. It is easy to see that filters in each group share a similar appearance, and the filters from different groups differ in appearance.

[Figure 8 panels: "Unchanged" (left) and "Significantly changed" (right).]

Methods | Sample n = 1 | Sample n = 5
DBM     | 24.37        | 41.76
CNN     | 28.01        | 39.8
K-NN    | 42.08        | 64.26
SVM     | 2.78         | 10.08
CPM     | 68.86        | 83.79
OMG     | 70.17        | 84.35

Table 2: Comparison of accuracy with other models on k-shot learning tasks. The proposed OMG achieves better performance in both the one-shot and the 5-shot learning case."}, {"section_index": "15", "section_name": "5.3.2 K-SHOT ON OFFICE DATASET", "section_text": "In this section, we conduct the k-shot learning task on the Amazon domain of the Office dataset (Saenko et al. (2010)). Following the work described in Judy et al. (2013c), we conduct k-shot learning on the 16 categories of the Office dataset (approximately 1,200 examples per category, or 20K images total). We evaluate our method across 20 random train/test splits, where each test split has 160 examples, and report the average errors. For each random train/test split we choose one example for training and 10 other examples for testing. Following the previous work, we use DeCAF pre-trained on ImageNet. The model is additionally trained with the Orthogonal Method of Grouping on the last 3 convolutional and fully connected layers, where the group sizes are set by grid search; the group size of each layer is always around half of the number of neural units of the layer. We then perform k-shot learning on the Office dataset with the reduced dimension. The accuracy is reported in Table 3. The models we compare against are SVM, PMG, Daumé III, and Late fusion. 
Late fusion (Judy et al. (2013b)) is a simple approach that independently trains a source and a target classifier and combines the scores of the two to create a final scoring function.

Table 3: One-shot learning results on the Office dataset; the baseline numbers are borrowed from Judy et al. (2013b).

We achieve the best performance compared to the previous state-of-the-art method of Judy et al. (2013c). With the help of OMG, the dimension-reduced network achieves better performance when training on limited data. The main reason is that the original DeCAF feature space is large and redundant; by grouping the feature space, the dimension is largely reduced while the network remains representative enough for the Office dataset. A better performance is thus achieved on the Amazon one-shot learning task with OMG.

We proposed a generalizable k-shot learning framework that can be easily integrated into any existing deep network architecture. By grouping parameters together and forcing orthogonality among groups, the method reduces the dimensionality of the parameter space to avoid overfitting. Experiments on k-shot learning tasks have shown that OMG performs well on k-shot classes and is easily adopted into existing deep network frameworks such as VGG, ResNet, and so on."}, {"section_index": "16", "section_name": "REFERENCES", "section_text": "Hyvärinen Aapo. Survey on independent component analysis. 1999.

Santoro Adam, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint, 1605.06065, 2016.

Hal Daumé III. Frustratingly easy domain adaptation. arXiv, 0907.1815, 2009.

Koch Gregory. Siamese neural networks for one-shot image recognition. 32nd International Conference on Machine Learning, pp. 2252-2259, 2015.

I.T. Jolliffe. Principal Component Analysis. Springer-Verlag, 1986.

Hoffman Judy, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, and Trevor Darrell. One-shot adaptation of supervised deep convolutional models. arXiv, 1312.6204, 2013a.

Hoffman Judy, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, and Trevor Darrell. One-shot adaptation of supervised deep convolutional models. arXiv preprint, 1312.6204, 2013b.

Hoffman Judy, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, and Trevor Darrell. One-shot adaptation of supervised deep convolutional models. arXiv preprint, 1312.6204, 2013c.

He Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.

Fei-Fei Li, Rob Fergus, and Pietro Perona. One-shot learning of object categories, 2006.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In Computer Vision - ECCV, pp. 740-755, 2014.

Lake Brenden M., Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One-shot learning of simple visual concepts, 2011.

Hinton Geoffrey E. and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, pp. 504-507, 2006.

Simonyan Karen and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint, 1409.1556, 2014.

Simard Patrice Y., David Steinkraus, and John C. Platt. Best practices for convolutional neural networks applied to visual document analysis. ICDAR, pp. 958-962, 2003.

Vinyals Oriol, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. arXiv preprint, 1606.04080, 2016."}]
SyCSsUDee
[{"section_index": "0", "section_name": "SEMANTIC NOISE MODELING FOR BETTER REPRESENTATION LEARNINC", "section_text": "Hyo-Eun Kim* and Sangheum Hwang\nSeoul. South Korea\nLatent representation learned from multi-layered neural networks via hierarchica. feature abstraction enables recent success of deep learning. Under the deep learn ing framework, generalization performance highly depends on the learned laten. representation. In this work, we propose a novel latent space modeling method tc. learn better latent representation. We designed a neural network mode1 based or. the assumption that good base representation for supervised tasks can be attained. by maximizing the sum of hierarchical mutual informations between the input. latent, and output variables. From this base model, we introduce a semantic noise. modeling method which enables semantic perturbation on the latent space to en. hance the representational power of learned latent feature. During training, laten. vector representation can be stochastically perturbed by a modeled additive noise. while preserving its original semantics. It implicitly brings the effect of semantic. augmentation on the latent space. The proposed model can be easily learned by. back-propagation with common gradient-based optimization algorithms. Experi. mental results show that the proposed method helps to achieve performance ben. efits against various previous approaches. We also provide the empirical analyses. for the proposed latent space modeling method including t-SNE visualization.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Enhancing the generalization performance against unseen data given some sample data is the maii. objective in machine learning. Under that point of view, deep learning has been achieved man breakthroughs in several domains such as computer vision (Krizhevsky et al.|2012. Simonyan & Zisserman]2015, He et al.]2016), natural language processing (Collobert & Weston2008] Bah danau et al.[2015), and speech recognition (Hinton et al.[[2012] Graves et al.[2013). Deep learning is basically realized on deep layered neural network architecture, and it learns appropriate task. specific latent representation based on given training data. Better latent representation learned fron training data results in better generalization over the future unseen data. Representation learning or latent space modeling becomes one of the key research topics in deep learning. During the pas decade, researchers focused on unsupervised representation learning and achieved several remark. able landmarks on deep learning history (Vincent et al.]2010] Hinton et al.f[2006, Salakhutdinov 8 Hinton2009). In terms of utilizing good base features for supervised learning, the base representa. tion learned from unsupervised learning can be a good solution for supervised tasks (Bengio et al. 2007;Masci et al.]2011).\nThe definition of 'good' representation is, however, different according to target tasks. In unsuper. vised learning, a model is learned from unlabelled examples. Its main objective is to build a mode"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "to estimate true data distribution given examples available for training, so the learned latent rep. resentation normally includes broadly-informative components of the raw input data (e.g., mutual information between the input and the latent variable can be maximized for this objective). 
In su- pervised learning, however, a model is learned from labelled examples. In the case of classification. a supervised model learns to discriminate input data in terms of the target task using correspond-. ing labels. Latent representation is therefore obtained to maximize the performance on the target supervised tasks.\nSince the meaning of good representations vary according to target tasks (unsupervised or super. vised), pre-trained features from the unsupervised model are not be guaranteed to be useful for subsequent supervised tasks. Instead of the two stage learning strategy (unsupervised pre-training. followed by supervised fine-tuning), several works focused on a joint learning model which opti. mizes unsupervised and supervised objectives concurrently, resulting in better generalization per formance (Goodfellow et al.2013] Larochelle & Bengio]2008a] Rasmus et al.][2015] Zhao et al. 2015}Zhang et al. 2016f|Cho & Chen2014).\nIn this work, we propose a novel latent space modeling method for supervised learning as an exten. sion of the joint learning approach. We define a good latent representation of standard feed-forward. neural networks under the basis of information theory. Then. we introduce a semantic noise model ing method in order to enhance the generalization performance. The proposed method stochastically. perturbs the latent representation of a training example by injecting a modeled semantic additive. noise. Since the additive noise is randomly sampled from a pre-defined probability distribution ev-. ery training iteration, different latent vectors from a single training example can be fully utilized. during training. The multiple different latent vectors produced from a single training example are. semantically similar under the proposed latent space modeling method, so we can expect semantic. augmentation effect on the latent space..\nExperiments are performed on two datasets; MNIST and CIFAR-10. The proposed model results in better classification performance compared to previous approaches through notable generalization effect (stochastically perturbed training examples well cover the distribution of unseen data)"}, {"section_index": "3", "section_name": "2 METHODOLOGY", "section_text": "The proposed method starts from the existing joint learning viewpoint. This section first explains the process of obtaining a good base representation for supervised learning which is the basis of th proposed latent space modeling method. And then, we will describe how the proposed semanti noise modeling method perturbs the latent space while maintaining the original semantics\nIn a traditional feed-forward neural network model (Figure[1(a)), output Y of input data X is com pared with its true label, and the error is propagated backward from top to bottom, which implicitly learns a task-specific latent representation Z of the input X. As an extension of a joint learning approach, an objective to be optimized can be described in general as below (Larochelle & Bengio 2008b):\nwhere Lunsup and Lsup are respectively an unsupervised loss and a supervised loss, and 0 and . are model parameters to be optimized during training and a loss weighting coefficient, respectively. In terms of modeling Lunsup in Eq. (1), we assume that good latent representation Z is attained. by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables; i.e. 
the sum of the mutual information between the input X and Z, and the mutual information between Z and the output Y. Each mutual information is decomposed into an entropy and a conditional entropy term, so the sum of hierarchical mutual informations is expressed as follows:

I(X; Z) + I(Z; Y) = H(X) − H(X|Z) + H(Z) − H(Z|Y)    (2)

Figure 1: (a) Standard feed-forward neural network model, (b) feed-forward neural network model with reconstruction paths, and (c) feed-forward neural network model with reconstruction and stochastic perturbation paths.

where I(·;·) is the mutual information between random variables, and H(·) and H(·|·) are the entropy and the conditional entropy of random variables, respectively. Note that the sum of those mutual informations becomes equivalent to the total correlation of X, Z, and Y under the graphical structure of the general feed-forward model described in Figure 1(a); P(X, Z, Y) = P(Y|Z)P(Z|X)P(X). The total correlation is equal to the sum of all pairwise mutual informations (Watanabe 1960).

Our objective is to find the model parameters which maximize I(X; Z) + I(Z; Y). Since H(X) and H(Z) are non-negative, and H(X) is constant in this case, the lower bound on I(X; Z) + I(Z; Y) can be reduced to¹

I(X; Z) + I(Z; Y) ≥ −H(X|Z) − H(Z|Y)    (3)

¹Although H(Z) is an upper bound of H(Z|Y), H(Z) is anyway affected by the process of H(Z|Y) being minimized in Eq. (3). In Section 4, we experimentally show that we can obtain a good base model even from the relatively loose lower bound defined in Eq. (3).

It is known that maximizing −H(X|Z) can be formulated as minimizing the reconstruction error between the input x^(i) (the i-th example sampled from X) and its reconstruction x_R^(i) under the general auto-encoder framework (Vincent et al. 2010). Since H(X|Z) + H(Z|Y) is proportional to the sum of the reconstruction errors of x^(i) and z^(i) (see Appendix (A1) for detailed mathematical derivations):

min_θ Σ_i ||x^(i) − x_R^(i)||² + ||z^(i) − z_R^(i)||²    (4)

Figure 1(b) shows the target model obtained from the assumption that a good latent representation Z can be obtained by maximizing the sum of hierarchical mutual informations. Given an input sample x, the feed-forward vectors and their reconstructions are attained deterministically by:

z = f_θ1(x)
y = f_θ2(f_θ1(x))
x_R = g_θ1'(z) = g_θ1'(f_θ1(x))
z_R = g_θ2'(y) = g_θ2'(f_θ2(f_θ1(x)))    (5)

Given a set of training pairs (x^(i), t^(i)), where x^(i) and t^(i) are the i-th input example and its label, the target objective in Eq. (1) under the model described in Figure 1(b) can be organized as below (with real-valued input samples, the L2 loss L_L2 is a proper choice for the reconstruction loss L_rec):

min_{θ:{θ1,θ1',θ2,θ2'}} Σ_i λ (L_L2(x^(i), x_R^(i)) + L_L2(z^(i), z_R^(i))) + L_NLL(y^(i), t^(i))    (6)

where L_NLL is a negative log-likelihood loss for the target supervised task. Note that Eq. (6) represents the 'proposed-base' in our experiment (see Section 4.3). 
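To make Eqs. (5)-(6) concrete, here is a minimal numpy sketch of the 'proposed-base' objective with toy dimensions and single-layer mappings. All names here (f1, f2, g1, g2, the tanh nonlinearities, the tied decoder weights mirroring Sec. 4.2) are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(0, 0.1, (8, 4)), rng.normal(0, 0.1, (4, 3))  # toy: input 8, latent 4, 3 classes

def f1(x): return np.tanh(x @ W1)      # encoder: z = f_theta1(x)
def f2(z): return z @ W2               # output logit: y = f_theta2(z)
def g1(z): return z @ W1.T             # reconstruction: x_R = g_theta1'(z), tied to W1
def g2(y): return np.tanh(y @ W2.T)    # reconstruction: z_R = g_theta2'(y), tied to W2

def nll(y, t):                         # L_NLL: softmax cross-entropy
    p = np.exp(y - y.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(t)), t]).mean()

def base_objective(x, t, lam=0.03):    # Eq. (6): lam * reconstruction + supervised NLL
    z = f1(x); y = f2(z)
    x_R, z_R = g1(z), g2(y)
    l_rec = np.mean((x - x_R) ** 2) + np.mean((z - z_R) ** 2)
    return lam * l_rec + nll(y, t)

x = rng.normal(size=(5, 8)); t = rng.integers(0, 3, size=5)
print(base_objective(x, t))
```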
Based on the architecture shown in Figure 1(b) with the target objective in Eq. (6), we conjecture that stochastic perturbation on the latent space during training helps to achieve better generalization performance for supervised tasks. Figure 1(c) shows this strategy, which integrates the stochastic perturbation process during training. Suppose that Z_p is a perturbed version of Z, and Y_p is the output which is feed-forwarded from Z_p. Given a latent vector z = f_θ1(x) from an input sample x,

z' = z + z_e  and  y' = f_θ2(z')    (7)

where z' and y' are a perturbed latent vector and its output respectively, and z_e is an additive noise used in the perturbation process of z. Based on the architecture shown in Figure 1(c), the target objective can be modified as:

min_{θ:{θ1,θ1',θ2,θ2'}} Σ_i λ1 (L_L2(x^(i), x_R^(i)) + L_L2(z^(i), z_R^(i))) + λ2 L_NLL(y^(i), t^(i)) + L_NLL(y'^(i), t^(i))    (8)

Using random additive noise directly as z_e is the most intuitive approach ('proposed-perturb (random)' in Section 4.3). However, preserving the semantics of the original latent representation cannot be guaranteed under direct random perturbation on the latent space. While the latent space is not directly interpretable in general, the output logit y of the latent representation z is interpretable, because the output logit is tightly coupled to the prediction of the target label. In order to preserve the semantics of the original latent representation after perturbation, we indirectly model a semantic noise on the latent space by adding small random noise directly on the output space.

Based on the output (pre-softmax) logit y, a semantic-preserving variation of y (i.e., y') can be modeled by y' = y + y_e, where y_e is a random noise vector stochastically sampled from a zero-mean Gaussian with small standard deviation σ: N(0, σ²I). Now, the semantic perturbation z' can be reconstructed from the random perturbation y' through the decoding path g_θ2' in Figure 1(c). From the original output logit y and the randomly perturbed output logit y', the semantic additive noise z_e on the latent space can be approximately modeled as below:

z_R = g_θ2'(y)
z_R' = g_θ2'(y') = g_θ2'(y + y_e)
z_e ≈ z_R' − z_R = g_θ2'(y + y_e) − g_θ2'(y)    (9)
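Continuing the toy sketch above (f2, g2, and rng as defined there), the semantic perturbation of Eqs. (7)-(9) is only a few lines; the value of sigma here is illustrative, not the paper's setting.

```python
def semantic_perturb(z, sigma=0.1):
    y = f2(z)                                    # original logit
    y_e = rng.normal(0.0, sigma, size=y.shape)   # y' = y + y_e, Gaussian logit noise
    z_e = g2(y + y_e) - g2(y)                    # Eq. (9): decoded noise difference
    return z + z_e                               # Eq. (7): z' = z + z_e
```

Because the noise is injected in the interpretable logit space and mapped back through the smooth decoding path g_θ2', the resulting latent offset tends to move z along directions that change the class evidence only slightly, which is the intuition behind calling the perturbation 'semantic'.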
Previous works on deep neural networks for supervised learning can be categorized into two types as shown in Figure 2: (a) a general feed-forward neural network model (LeCun et al. 1998; Krizhevsky et al. 2012; Simonyan & Zisserman 2015; He et al. 2016), and (b) a joint learning model which optimizes unsupervised and supervised objectives at the same time (Zhao et al. 2015; Zhang et al. 2016; Cho & Chen 2014). Here are the corresponding objective functions:

min_{θ:{θ1,θ2}} Σ_i L_NLL(y^(i), t^(i))    (10)

min_{θ:{θ1,θ1',θ2}} Σ_i λ L_L2(x^(i), x_R^(i)) + L_NLL(y^(i), t^(i))    (11)

where λ is a loss weighting coefficient between the unsupervised and supervised losses.

Since the feed-forward neural network model is normally implemented with multiple layers in a deep learning framework, the joint learning model can be sub-classified into two types according to the type of reconstruction: reconstruction only of the input data x (Eq. (11)), and reconstruction of all the intermediate features including the input data x as follows:

min_{θ:{θ1,θ1',θ2}} Σ_i λ Σ_j L_L2(h_j^(i), h_Rj^(i)) + L_NLL(y^(i), t^(i))    (12)

where h_j^(i) and h_Rj^(i) are the j-th hidden representation of the i-th training example and its reconstruction.

Figure 2: Previous works for supervised learning; (a) traditional feed-forward model, and (b) joint learning model with both supervised and unsupervised losses.

Another type of joint learning model, the ladder network (Figure 3), was introduced for semi-supervised learning (Rasmus et al. 2015). The key concept of the ladder network is to obtain robust features by learning de-noising functions (g_e) of the representations at every layer of the model via reconstruction losses; the supervised loss is combined with the reconstruction losses in order to build the semi-supervised model. The ladder network achieved the best performance on semi-supervised tasks, but it is not appropriate for supervised tasks with small-scale training sets (experimental analysis for supervised learning on permutation-invariant MNIST is briefly summarized in Appendix (A2)). The proposed model in this work can be extended to semi-supervised learning, but our main focus is to enhance the representational power on the latent space given labelled data for supervised learning. We leave the study of the semi-supervised learning scenario based on the proposed methodology as future research.

Figure 3: Ladder network; a representative model for semi-supervised learning (Rasmus et al. 2015)."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "For quantitative analysis, we compare the proposed methodology with the previous approaches described in Section 3: a traditional feed-forward supervised learning model and a joint learning model with two different types of reconstruction losses (reconstruction only of the first layer, or of all the intermediate layers including the first layer). The proposed methodology includes a baseline model (Figure 1(b)) as well as a stochastic perturbation model (Figure 1(c)). Especially for the stochastic perturbation model, we compare the random and semantic perturbations and present some qualitative analysis on the meaning of the proposed perturbation methodology."}, {"section_index": "5", "section_name": "4.1 DATASETS", "section_text": "We experiment with two public datasets: MNIST (including a permutation-invariant MNIST case) and CIFAR-10. MNIST (10 classes) consists of 50k, 10k, and 10k 28×28 gray-scale images for the training, validation, and test datasets, respectively. CIFAR-10 (10 classes) consists of 50k and 10k 32×32 3-channel images for the training and test sets, respectively. We split the 50k CIFAR-10 training images into 40k and 10k for training and validation. Experiments are performed with different sizes of training set (from 10 examples per class to the entire training set) in order to verify the effectiveness of the proposed model in terms of generalization performance under varying sizes of training set."}, {"section_index": "6", "section_name": "4.2 IMPLEMENTATION", "section_text": "Figure 4 shows the architecture of the neural network model used in this experiment. The W's are convolution or fully-connected weights (biases are excluded for visual brevity). Three convolution layers (3×3 (2) 32, 3×3 (2) 64, 3×3 (2) 96, where each item denotes the filter kernel size, (stride), and the number of filters) and two fully-connected layers (with 128 and 10 output nodes, respectively) are used for MNIST. For the permutation-invariant MNIST setting, fully-connected layers with 784-512-256-256-128-10 nodes are used. Four convolution layers (5×5 (1) 64, 3×3 (2) 64, 3×3 (2) 64, and 3×3 (2) 96) and three fully-connected layers (128, 128, and 10 nodes) are used for CIFAR-10. Weights on the decoding (reconstruction) path are tied with the corresponding weights on the encoding path as shown in Figure 4 (a transposed convolution for a tied convolution layer, and a transposed matrix multiplication for a tied fully-connected layer).

Figure 4: Target network architecture; 3 convolution and 2 fully-connected layers were used for MNIST, 5 fully-connected layers were used for permutation-invariant MNIST, and 4 convolution and 3 fully-connected layers were used for CIFAR-10.
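As a sketch of the tied-weight scheme just described, here is an assumption-level illustration for the fully-connected case; a tied convolution layer would use a transposed convolution analogously, and the nonlinearity and bias choices below are not specified by the paper.

```python
import numpy as np

class TiedDense:
    """Fully-connected layer whose reconstruction path reuses the
    transposed encoding weight, as in Figure 4 (sketch, assumed details)."""
    def __init__(self, d_in, d_out, rng=np.random.default_rng(0)):
        self.W = rng.normal(0, 0.1, (d_in, d_out))

    def encode(self, x):                 # forward (encoding) path
        return np.tanh(x @ self.W)

    def decode(self, h):                 # reconstruction path: W^T, no new weights
        return np.tanh(h @ self.W.T)

layer = TiedDense(784, 512)
x = np.random.default_rng(1).normal(size=(2, 784))
x_R = layer.decode(layer.encode(x))      # reconstruction through tied weights
```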
In Figure 4, z' is perturbed directly from z by adding Gaussian random noise for random perturbation. For semantic perturbation, z' is indirectly generated from y', which is perturbed by adding Gaussian random noise on y based on Eq. (9). For perturbation, the base activation vector (z is the base vector for the random perturbation and y is the base vector for the semantic perturbation) is scaled to [0.0, 1.0], and zero-mean Gaussian noise with 0.2 standard deviation is added (via element-wise addition) to the normalized base activation. The perturbed scaled activation is then de-scaled with the original min and max activations of the base vector.

Initial learning rates are 0.005 and 0.001 for MNIST and permutation-invariant MNIST, and 0.002 for CIFAR-10, respectively. The learning rates are decayed by a factor of 5 every 40 epochs until the 120-th epoch. For both datasets, the minibatch size is set to 100, and the target objective is optimized using the Adam optimizer (Kingma & Ba 2015) with a momentum of 0.9. All the λ's for the reconstruction losses in Eq. (11) and Eq. (12) are 0.03 and 0.01 for MNIST and CIFAR-10, respectively. The same weighting factors for the reconstruction losses (0.03 for MNIST and 0.01 for CIFAR-10) are used for λ1 in Eq. (8), and 1.0 is used for λ2.

Input data is first scaled to [0.0, 1.0] and then whitened by the average across all the training examples. In CIFAR-10, random cropping (a 24×24 image is randomly cropped from the original 32×32 image) and random horizontal flipping (mirroring) are used for data augmentation. We selected the network that performed best on the validation dataset for evaluation on the test dataset. All the experiments are performed with TensorFlow (Abadi et al. 2015)."}, {"section_index": "7", "section_name": "4.3 QUANTITATIVE ANALYSIS", "section_text": "Three previous approaches (a traditional feed-forward model, a joint learning model with the input reconstruction loss, and a joint learning model with reconstruction losses for all the intermediate layers including the input layer) are compared with the proposed methods (the baseline model in Figure 1(b), and the stochastic perturbation model in Figure 1(c) with two different perturbation methods: random and semantic). We measure the classification performance according to varying sizes of training set (examples randomly chosen from the original training dataset). Performance is averaged over three different random trials.

Table 1: Error rate (%) on the test set using the model with the best performance on the validation set. Numbers on the first row of each sub-table are the numbers of randomly chosen per-class training examples. The average performance and the standard deviation over three different random-split datasets (except for the case using the entire training set in the last column) are described in this table (the error rate on each random set is summarized in Appendix (A3)). 
Performance of the three previous approaches (with gray background; previous-1, 2, 3 are the feed-forward model of Figure 2(a), the joint learning model with recon-one of Figure 2(b), and the joint learning model with recon-all of Figure 2(b), respectively) and the proposed methods (proposed-1, 2, 3 are the baseline of Figure 1(b), random perturbation of Figure 1(c), and semantic perturbation of Figure 1(c), respectively) is summarized.

MNIST (number of per-class examples chosen from the 50k MNIST training examples; last column: entire set)
             10           20           50           100         200         500         1k          2k          50k
previous-1   24.55 (3.04) 16.00 (1.33) 10.35 (0.66) 6.58 (0.42) 4.71 (0.28) 2.94 (0.23) 1.90 (0.27) 1.45 (0.08) 1.04
previous-2   21.67 (3.19) 13.60 (0.99) 7.85 (0.10)  5.44 (0.37) 4.14 (0.08) 2.50 (0.15) 1.84 (0.07) 1.45 (0.07) 1.12
previous-3   20.11 (2.81) 13.69 (0.62) 9.15 (0.15)  6.77 (0.25) 5.39 (0.11) 3.89 (0.27) 2.91 (0.17) 2.28 (0.10) 1.87
proposed-1   21.35 (1.16) 11.65 (1.15) 6.33 (0.10)  4.32 (0.31) 3.07 (0.11) 1.98 (0.11) 1.29 (0.09) 0.94 (0.02) 0.80
proposed-2   20.17 (1.52) 11.68 (0.81) 6.24 (0.29)  4.12 (0.24) 3.04 (0.13) 1.88 (0.05) 1.24 (0.03) 0.96 (0.08) 0.65
proposed-3   20.11 (0.81) 10.59 (0.74) 5.92 (0.12)  3.79 (0.23) 2.72 (0.09) 1.78 (0.05) 1.15 (0.01) 0.88 (0.03) 0.62

CIFAR-10 (number of per-class examples chosen from the 40k CIFAR-10 training examples; last column: entire set)
             10           20           50           100          200          500          1k           2k           40k
previous-1   73.82 (1.43) 68.99 (0.54) 61.30 (0.83) 54.93 (0.56) 46.97 (0.59) 33.69 (0.43) 26.63 (0.39) 20.97 (0.09) 17.80
previous-2   75.68 (1.56) 69.05 (1.13) 61.44 (0.63) 55.02 (0.34) 46.18 (0.51) 33.62 (0.38) 26.78 (0.48) 21.25 (0.40) 17.68
previous-3   73.33 (1.06) 67.63 (0.56) 62.59 (0.76) 56.37 (0.20) 50.51 (0.61) 41.26 (0.73) 32.55 (1.20) 26.38 (0.08) 22.71
proposed-1   71.63 (0.69) 66.17 (0.40) 58.91 (0.86) 52.65 (0.28) 43.46 (0.30) 31.86 (0.54) 25.76 (0.31) 21.06 (0.18) 17.45
proposed-2   71.69 (0.25) 66.75 (0.54) 58.95 (0.63) 53.01 (0.26) 43.71 (0.19) 31.80 (0.18) 25.50 (0.33) 20.81 (0.27) 17.43
proposed-3   71.50 (1.14) 66.87 (0.17) 58.30 (0.62) 52.32 (0.08) 42.98 (0.34) 30.91 (0.23) 24.81 (0.26) 20.19 (0.25) 16.16

Figure 5: Examples reconstructed from the perturbed latent vectors via (a) random perturbation and (b) semantic perturbation (top row shows the original training examples). More examples are summarized in Appendix (A4.1).

Table 1 summarizes the classification performance for MNIST and CIFAR-10. As expected, the base model obtained by maximizing the sum of mutual informations ('proposed-base') mostly performs better than the previous approaches, and the model with the semantic perturbation ('proposed-perturb (semantic)') performs best among all the comparison targets. Especially on MNIST, the error rate of 'proposed-perturb (semantic)' with 2k per-class training examples is lower than the error rate of all types of previous works with the entire training set (approximately 5k per-class examples).

We further verify the proposed method on the permutation-invariant MNIST task with a standard feed-forward neural network. Classification performance is measured for three different sizes of training set (1k, 2k, and 5k per-class training examples). 'Proposed-perturb (semantic)' achieves the best performance among all the configurations: 2.57%, 1.82%, and 1.28% error rates for 1k, 2k, and 5k per-class training examples, respectively. The joint learning model with the input reconstruction loss performs best among the three previous approaches: 2.72%, 1.97%, and 1.38% error rates for 1k, 
2k, and 5k per-class training examples, respectively."}, {"section_index": "8", "section_name": "4.4 QUALITATIVE ANALYSIS", "section_text": "As mentioned before, random perturbation by adding unstructured noise directly to the latent representation cannot guarantee preserving the semantics of the original representation. We compare the two perturbation methods (random and semantic) by visualizing the examples reconstructed from the perturbed latent vectors (Figure 5). The top row shows the original examples selected from the training set (among 2k per-class training examples), and the rest are the reconstructions of their perturbed latent representations. Based on the architecture described in Figure 1(b), we generate five different perturbed latent representations according to the type of perturbation, and reconstruct the perturbed latent vectors through the decoding path for reconstruction.

Figures 5(a) and (b) show the examples reconstructed from the random and semantic perturbations, respectively. For both cases, zero-mean Gaussian random noise (0.2 standard deviation) is used for perturbation. As shown in Figure 5(a), random perturbation partially destroys the original semantics; for example, the semantics of '1' is mostly destroyed under random perturbation, and some examples of '3' are reconstructed as being more similar to '8' than to the original content '3'. Figure 5(b) shows the examples reconstructed from the semantic perturbation. The reconstructed examples show subtle semantic variations while preserving the original semantic contents; for example, a thickness difference in '3' (example on the third row) or a writing style difference in '8' (openness of the top left corner).

Figure 6 shows the overall effect of the perturbation. In this analysis, 100 per-class MNIST examples are used for training. From the trained model based on the architecture described in Figure 1(b), the latent representations z of all 50k examples (among the 50k examples, only 1k were used for training) are visualized using t-SNE (Maaten & Hinton 2008). Only the training examples of three classes (0, 1, and 9) among the ten classes are depicted as black circles for visual discrimination in Figure 6(a). The rest of the examples, which were not used for training (approximately 4.9k examples per class), are depicted as a background with different colors. We treat the colored background examples (not used for training) as a true distribution of unseen data in order to estimate the generalization level of the learned representation according to the type of perturbation. Figures 6(b) and (c) show the training examples (100 examples per class, yellow circles) and their perturbed ones (3 sampled from each example, blue crosses) through random and semantic perturbations, respectively.

Figure 6: Training examples (circles or crosses with colors described below) over the examples not used for training (depicted as background with different colors); (a) training examples (black circles), (b) training examples (yellow circles) with 3 random-perturbed samples (blue crosses), and (c) training examples (yellow circles) with 3 semantic-perturbed samples (blue crosses). Best viewed in color.

In Figure 6(b), the perturbed samples are distributed near the original training examples, but some samples outside the true distribution cannot easily be identified with appropriate classes. This can be explained with Figure 5(a), since some perturbed samples are semantically ambiguous. In Figure 6(c), however, most of the perturbed samples evenly cover the true distribution. As mentioned before, stochastic perturbation with the semantic additive noise during training implicitly incurs the effect of augmentation on the latent space, resulting in better generalization. 
Per-class t-SNE results are summarized in Appendix (A4.2)."}, {"section_index": "9", "section_name": "5 DISCUSSION", "section_text": "We introduced a novel latent space modeling method for supervised tasks based on the standard feed-forward neural network architecture. The presented model simultaneously optimizes both supervised and unsupervised losses, based on the assumption that a better latent representation can be obtained by maximizing the sum of hierarchical mutual informations. In particular, the stochastic perturbation process, achieved by modeling the semantic additive noise during training, enhances the representational power of the latent space. From the proposed semantic noise modeling process, we can expect improved generalization in supervised learning through the implicit semantic augmentation effect on the latent space.

The presented model architecture can be naturally extended to semi-supervised learning because it is implemented as a joint optimization of supervised and unsupervised objectives. For semi-supervised learning, however, the logical link between features learned from labelled and unlabelled data needs to be considered additionally. We leave the extension of the presented approach to semi-supervised learning for future work."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Kyunghyun Cho and Xi Chen. Classifying and visualizing motion capture sequences using deep neural networks. In International Conference on Computer Vision Theory and Applications (VISAPP), 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016.

Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.

Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann machines. In International Conference on Machine Learning (ICML), 2008.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research (JMLR), 9(Nov):2579-2605, 2008.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton.
Speech recognition with deep recurrent neural networks. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Jonathan Masci, Ueli Meier, Dan Ciresan, and Jurgen Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks, 2011.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems (NIPS), 2015.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research (JMLR), 11:3371-3408, 2010.

Satosi Watanabe. Information theoretical analysis of multivariate correlation. IBM Journal of Research and Development, 4(1):66-82, 1960.

Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In International Conference on Machine Learning (ICML), 2016.

Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders. In International Conference on Learning Representations (ICLR), 2015."}, {"section_index": "11", "section_name": "(A1) DERIVATION OF RECONSTRUCTION ERRORS FROM CONDITIONAL ENTROPY TERMS", "section_text": "

$$\max_{\theta_1,\theta'_1,\theta_2,\theta'_2} \; \mathbb{E}_{q(X,Z,Y)}\left[\log q(X|Z)\right] + \mathbb{E}_{q(X,Z,Y)}\left[\log q(Z|Y)\right]$$

Here, we denote by q(X, Z, Y) an unknown joint distribution. Note that Z and Y are the variables obtained from the parametric mappings Z = f_{\theta_1}(X) and Y = f_{\theta_2}(Z), respectively (see Figure 1). q(X, Z, Y) can then be reduced to q(X), since q(Z|X; \theta_1) = \delta(Z - f_{\theta_1}(X)) and q(Y|Z; \theta_2) = \delta(Y - f_{\theta_2}(Z)), where \delta denotes the Dirac delta function.

From the Kullback-Leibler divergence property D_{KL}(q \| p) \geq 0 for any two distributions p and q, the optimization above corresponds to the following optimization problem, where p(\cdot) denotes a parametric distribution:

$$\max_{\theta_1,\theta'_1,\theta_2,\theta'_2} \; \mathbb{E}_{q(X)}\left[\log p(X|Z;\theta'_1)\right] + \mathbb{E}_{q(X)}\left[\log p(Z|Y;\theta'_2)\right]$$

$$\max_{\theta_1,\theta'_1,\theta_2,\theta'_2} \; \mathbb{E}_{q_0(X)}\left[\log p(X|Z = f_{\theta_1}(X);\theta'_1)\right] + \mathbb{E}_{q_0(X)}\left[\log p(Z|Y = f_{\theta_2}(f_{\theta_1}(X));\theta'_2)\right]$$

For a given input sample x of X, it is natural to interpret x_R and z_R as the parameters of distributions p(X|X_R = x_R) and p(Z|Z_R = z_R) which reconstruct x and z with high probability (i.e., x_R and z_R are not exact reconstructions of x and z). Since x_R and z_R are real-valued, we assume Gaussian distributions for these conditionals, that is,

$$p(X|X_R = x_R) = \mathcal{N}(x_R, \sigma_x^2 I), \qquad p(Z|Z_R = z_R) = \mathcal{N}(z_R, \sigma_z^2 I).$$

With the identifications

$$p(X|Z = f_{\theta_1}(x);\theta'_1) = p(X|X_R = g_{\theta'_1}(f_{\theta_1}(x))), \qquad p(Z|Y = f_{\theta_2}(f_{\theta_1}(x));\theta'_2) = p(Z|Z_R = g_{\theta'_2}(f_{\theta_2}(f_{\theta_1}(x)))),$$

the optimization problem above corresponds to the minimization of reconstruction errors for input examples x^{(i)}:

$$\min_{\theta_1,\theta'_1,\theta_2,\theta'_2} \; \sum_i \left\| x^{(i)} - g_{\theta'_1}(f_{\theta_1}(x^{(i)})) \right\|^2 + \left\| f_{\theta_1}(x^{(i)}) - g_{\theta'_2}(f_{\theta_2}(f_{\theta_1}(x^{(i)}))) \right\|^2.$$
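To spell out the step from the Gaussian assumption to the squared-error objective, note the standard identity (not specific to this paper):

$$\log \mathcal{N}(x;\, x_R,\, \sigma_x^2 I) = -\frac{1}{2\sigma_x^2}\,\lVert x - x_R \rVert_2^2 - \frac{d}{2}\log\left(2\pi\sigma_x^2\right),$$

where d is the dimensionality of x. The second term does not depend on the parameters, so maximizing the Gaussian log-likelihood is exactly minimizing the squared reconstruction error, which yields the objective above (and likewise for z and z_R).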
"}, {"section_index": "12", "section_name": "(A2) LADDER NETWORK, A REPRESENTATIVE SEMI-SUPERVISED LEARNING MODEL", "section_text": "Extended from Section 3. We performed experiments with a ladder network model (Rasmus et al., 2015) in order to estimate its performance on purely supervised tasks for different training-set sizes. We used the code (https://github.com/rinuboney/ladder.git) for this experiment. The network architecture implemented in the source code is used as is: (784-1000-500-250-250-250-10).

Based on the same network architecture, we implemented the proposed stochastic perturbation model described in Figure 1(c) and compared its classification performance with the ladder network, as described in Table 2 (we did not focus on searching for optimal hyperparameters for the proposed model in this experiment). As summarized at the bottom of the table (mean over 3 random trials), the proposed semantic noise modeling method shows a fairly large performance gain over the ladder network model on small-scale datasets (e.g., with 10 per-class training examples, the proposed method achieves a 22.11% error rate, while the ladder network shows 29.66%).

Table 2: Classification performance (error rate in %) of the ladder network and the proposed model on three different sets of randomly chosen training examples (MNIST).

Set No.1 (# training examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k | (all) 5k
ladder network model; Figure 3: 25.85 16.48 9.26 6.00 4.66 3.07 2.15 1.26 0.91
proposed-perturb (semantic); Figure 1(c): 19.76 12.33 8.77 6.06 4.59 2.93 1.87 1.31 0.93
Set No.2 (# training examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k
ladder network model; Figure 3: 33.14 17.46 10.44 6.67 4.43 2.82 1.94 1.37
proposed-perturb (semantic); Figure 1(c): 23.36 15.35 9.43 5.75 4.43 2.99 1.87 1.39
Set No.3 (# training examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k
ladder network model; Figure 3: 29.99 16.99 9.73 7.34 4.39 3.00 2.12 1.47
proposed-perturb (semantic); Figure 1(c): 23.21 13.98 8.83 6.51 4.32 2.94 2.22 1.49
Mean over 3 random trials: 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k | (all) 5k
ladder network model; Figure 3: 29.66 16.98 9.81 6.67 4.49 2.96 2.07 1.37 0.91
proposed-perturb (semantic); Figure 1(c): 22.11 13.89 9.01 6.11 4.45 2.95 1.99 1.40 0.93"}, {"section_index": "13", "section_name": "(A3) QUANTITATIVE ANALYSIS", "section_text": "Extended from Section 4.3. Among the total 50k and 40k training examples in MNIST and CIFAR-10, we randomly select the examples used for training. Classification performance for three different randomly chosen training sets is summarized in Table 3 (MNIST) and Table 4 (CIFAR-10). Further experiments with denoising constraints are also included; zero-mean Gaussian random noise with 0.1 standard deviation is used for noise injection. The denoising constraint helps achieve slightly better performance on MNIST, but it results in performance degradation on CIFAR-10 (we did not focus on searching for optimal noise-injection parameters in these experiments).

Table 3: Classification performance (error rate in %) on three different sets of randomly chosen training examples (MNIST).

Set No.1 (# train examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k | (all) 5k
feed-forward model; Figure 2(a): 22.61 14.20 11.25 6.37 4.34 2.63 1.83 1.56 1.04
joint learning model with recon-one; Figure 2(b): 18.69 12.21 7.84 5.17 4.02 2.58 1.79 1.47 1.12
joint learning model with recon-one with denoising constraints: 20.39 11.91 7.41 4.64 3.65 2.57 1.97 1.53 0.97
joint learning model with recon-all; Figure 2(b): 18.82 12.82 9.34 6.43 5.23 4.12 2.68 2.42 1.87
joint learning model with recon-all with denoising constraints: 17.93 11.76 7.32 4.78 3.91 3.04 2.52 1.99 1.36
proposed-base; Figure 1(b): 20.23 10.18 6.47 3.89 3.04 1.89 1.33 0.91 0.80
proposed-base with denoising constraints: 19.88 10.89 6.62 4.26 3.40 2.44 2.11 1.54 1.13
proposed-perturb (random); Figure 1(c): 18.38 10.58 6.64 3.78 3.14 1.90 1.21 0.89 0.65
proposed-perturb (semantic); Figure 1(c): 19.33 9.72 5.98 3.47 2.84 1.84 1.16 0.84 0.62
Set No.2 (# train examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k
feed-forward model; Figure 2(a): 28.84 17.36 10.14 6.20 4.78 3.02 1.61 1.41
joint learning model with recon-one; Figure 2(b): 26.09 14.40 7.98 5.18 4.17 2.29 1.94 1.52
joint learning model with recon-one with denoising constraints: 27.69 13.11 6.95 5.07 3.54 2.37 1.83 1.28
joint learning model with recon-all; Figure 2(b): 24.01 14.13 8.98 6.84 5.44 3.51 2.98 2.18
joint learning model with recon-all with denoising constraints: 23.05 13.29 7.79 5.12 3.92 3.01 2.27 1.84
proposed-base; Figure 1(b): 22.95 12.98 6.27 4.43 3.22 2.14 1.37 0.96
proposed-base with denoising constraints: 26.96 12.21 6.45 4.62 3.13 2.53 1.88 1.49
proposed-perturb (random); Figure 1(c): 22.10 12.52 5.97 4.26 2.86 1.94 1.23 0.92
proposed-perturb (semantic); Figure 1(c): 21.22 11.52 5.75 3.91 2.61 1.73 1.14 0.89
Set No.3 (# train examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k
feed-forward model; Figure 2(a): 22.20 16.43 9.67 7.16 5.02 3.17 2.25 1.39
joint learning model with recon-one; Figure 2(b): 20.23 14.19 7.73 5.96 4.22 2.62 1.79 1.35
joint learning model with recon-one with denoising constraints: 19.32 12.25 7.44 5.39 3.58 2.37 1.49 1.56
joint learning model with recon-all; Figure 2(b): 17.51 14.12 9.12 7.04 5.49 4.05 3.08 2.25
joint learning model with recon-all with denoising constraints: 17.07 12.50 7.86 5.48 4.05 2.97 2.02 1.98
proposed-base; Figure 1(b): 20.86 11.79 6.25 4.63 2.96 1.91 1.16 0.96
proposed-base with denoising constraints: 19.89 11.30 6.26 4.57 3.50 2.63 1.61 1.47
proposed-perturb (random); Figure 1(c): 20.02 11.94 6.12 4.32 3.13 1.81 1.28 1.08
proposed-perturb (semantic); Figure 1(c): 19.78 10.53 6.03 4.00 2.70 1.76 1.14 0.92
Table 4: Classification performance (error rate in %) on three different sets of randomly chosen training examples (CIFAR-10).

Set No.1 (# train examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k | (all) 4k
feed-forward model; Figure 2(a): 73.30 69.25 62.42 55.65 47.71 34.30 27.04 21.06 17.80
joint learning model with recon-one; Figure 2(b): 75.19 70.38 62.25 55.30 46.89 34.12 26.63 21.05 17.68
joint learning model with recon-one with denoising constraints: 73.72 68.20 61.99 55.23 46.64 36.37 29.78 25.53 21.73
joint learning model with recon-all; Figure 2(b): 74.79 68.33 62.92 56.24 51.37 40.30 30.91 26.49 22.71
joint learning model with recon-all with denoising constraints: 76.56 69.67 64.53 57.88 52.74 42.24 36.90 30.93 27.41
proposed-base; Figure 1(b): 70.79 66.57 59.91 52.98 43.29 32.25 26.19 20.92 17.45
proposed-base with denoising constraints: 71.03 67.49 60.37 53.52 44.28 33.40 28.00 25.06 21.34
proposed-perturb (random); Figure 1(c): 71.89 67.12 59.22 52.79 43.87 31.82 25.04 20.97 17.43
proposed-perturb (semantic); Figure 1(c): 71.59 66.90 58.64 52.34 42.74 30.94 24.45 20.10 16.16
Set No.2 (# train examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k
feed-forward model; Figure 2(a): 72.39 69.49 60.45 54.85 46.91 33.39 26.73 21.00
joint learning model with recon-one; Figure 2(b): 74.06 69.14 60.71 54.54 45.70 33.54 27.43 20.90
joint learning model with recon-one with denoising constraints: 76.40 69.33 60.28 55.38 47.40 36.29 29.31 24.60
joint learning model with recon-all; Figure 2(b): 72.28 67.60 61.53 56.65 49.99 42.08 32.99 26.33
joint learning model with recon-all with denoising constraints: 73.90 69.23 61.90 57.99 52.35 45.12 37.23 30.14
proposed-base; Figure 1(b): 72.49 65.62 57.82 52.66 43.20 32.24 25.60 21.32
proposed-base with denoising constraints: 72.99 66.75 57.78 53.81 44.33 33.56 28.40 25.03
proposed-perturb (random); Figure 1(c): 71.84 65.98 58.08 53.37 43.44 31.56 25.69 21.03
proposed-perturb (semantic); Figure 1(c): 72.85 66.65 57.44 52.21 42.74 31.17 24.99 20.54
Set No.3 (# train examples per class): 10 | 20 | 50 | 100 | 200 | 500 | 1k | 2k
feed-forward model; Figure 2(a): 75.78 68.24 61.02 54.29 46.28 33.38 26.11 20.85
joint learning model with recon-one; Figure 2(b): 77.79 67.62 61.37 55.22 45.96 33.21 26.29 21.81
joint learning model with recon-one with denoising constraints: 76.60 69.27 61.13 55.10 47.50 37.12 29.63 24.88
joint learning model with recon-all; Figure 2(b): 72.92 66.97 63.31 56.23 50.16 41.41 33.75 26.31
joint learning model with recon-all with denoising constraints: 76.83 68.53 65.58 58.29 52.43 45.42 39.01 32.32
proposed-base; Figure 1(b): 71.60 66.31 58.99 52.30 43.88 31.10 25.48 20.95
proposed-base with denoising constraints: 72.39 67.20 60.60 52.64 44.62 33.52 28.01 25.25
proposed-perturb (random); Figure 1(c): 71.34 67.15 59.55 52.86 43.81 32.01 25.78 20.42
proposed-perturb (semantic); Figure 1(c): 70.06 67.07 58.83 52.41 43.47 30.61 25.00 19.94
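The subsampling protocol behind Tables 3 and 4 is simple to restate in code. The sketch below is one way to implement it, assuming NumPy arrays for the data; the function name and seeding scheme are illustrative, not taken from the paper.

```python
# Sketch of the per-class subsampling protocol used throughout (A3): draw k
# examples per class at random from the full training set. Array names are
# illustrative, not from the paper's code.
import numpy as np

def per_class_subset(X, y, k, seed=0):
    """Return a training subset with exactly k randomly chosen examples per class."""
    rng = np.random.RandomState(seed)      # one seed per "Set No." in Tables 3-4
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=k, replace=False)
        for c in np.unique(y)
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]
```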
"}, {"section_index": "14", "section_name": "(A4.1) QUALITATIVE ANALYSIS", "section_text": "Extended from Section 4.4. Figure 7 shows examples reconstructed from perturbed (random or semantic) latent representations (refer to Figure 5 and the analysis described in Section 4.4).

Figure 7: For each example, the top row shows the original examples selected from the training set, and the rest are reconstructed from the perturbed representations via random (left) and semantic (right) perturbations.

Extended from Section 4.4. Figure 8 shows the t-SNE results per class on MNIST. The overall tendency is similar to the description in Section 4.4.

Figure 8: From top to bottom: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. From left to right: training examples (circle), training examples (circle) + random-perturbed samples (cross), and training examples (circle) + semantic-perturbed samples (cross). Best viewed in color."}]
H1zJ-v5xl
[{"section_index": "0", "section_name": "OUASI-RECURRENT NEURAL NETWORKS", "section_text": "James Bradbury* Stephen Merity* Caiming Xiong & Richard Socher\njames.bradbury, smerity, cxiong, rsocher}@salesforce.com\nRecurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's out- put limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural se- quence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in par- allel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viabil- ity of QRNNs as a basic building block for a variety of sequence tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recurrent neural networks (RNNs), including gated variants such as the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) have become the standard model architecture for deep learning approaches to sequence modeling tasks. RNNs repeatedly apply a function with trainable parameters to a hidden state. Recurrent layers can also be stacked, increasing network depth, repre- sentational power and often accuracy. RNN applications in the natural language domain range from sentence classification (Wang et al., 2015) to word- and character-level language modeling (Zaremba et al., 2014). RNNs are also commonly the basic building block for more complex models for tasks such as machine translation (Bahdanau et al., 2015; Luong et al., 2015; Bradbury & Socher, 2016) or question answering (Kumar et al., 2016; Xiong et al., 2016). Unfortunately standard RNNs, in- cluding LSTMs, are limited in their capability to handle tasks involving very long sequences, such as document classification or character-level machine translation, as the computation of features or states for different parts of the document cannot occur in parallel.\nConvolutional neural networks (CNNs) (Krizhevsky et al., 2012), though more popular on tasks in. volving image data, have also been applied to sequence encoding tasks (Zhang et al., 2015). Sucl. models apply time-invariant filter functions in parallel to windows along the input sequence. CNNs. possess several advantages over recurrent models, including increased parallelism and better scal. ing to long sequences such as those often seen with character-level language data. Convolutiona. models for sequence processing have been more successful when combined with RNN layers in a. hybrid architecture (Lee et al., 2016), because traditional max- and average-pooling approaches tc. combining convolutional features across timesteps assume time invariance and hence cannot mak full use of large-scale sequence order information..\nWe present quasi-recurrent neural networks for neural sequence modeling. QRNNs address both drawbacks of standard models: like CNNs, QRNNs allow for parallel computation across both timestep and minibatch dimensions, enabling high throughput and good scaling to long sequences.. 
Like RNNs, QRNNs allow the output to depend on the overall order of elements in the sequence. We describe QRNN variants tailored to several natural language tasks, including document-level sentiment classification, language modeling, and character-level machine translation. These models outperform strong LSTM baselines on all three tasks while dramatically reducing computation time.

Figure 1: Block diagrams showing the computation structure of the QRNN compared with typical LSTM and CNN architectures. Red signifies convolutions or matrix multiplications; a continuous block means that those computations can proceed in parallel. Blue signifies parameterless functions that operate in parallel along the channel/feature dimension. LSTMs can be factored into (red) linear blocks and (blue) elementwise blocks, but computation at each timestep still depends on the results from the previous timestep."}, {"section_index": "3", "section_name": "2 MODEL", "section_text": "Each layer of a quasi-recurrent neural network consists of two kinds of subcomponents, analogous to convolution and pooling layers in CNNs. The convolutional component, like convolutional layers in CNNs, allows fully parallel computation across both minibatches and spatial dimensions, in this case the sequence dimension. The pooling component, like pooling layers in CNNs, lacks trainable parameters and allows fully parallel computation across minibatch and feature dimensions.

Given an input sequence X \in R^{T \times n} of T n-dimensional vectors x_1, ..., x_T, the convolutional subcomponent of a QRNN performs convolutions in the timestep dimension with a bank of m filters, producing a sequence Z \in R^{T \times m} of m-dimensional candidate vectors z_t. In order to be useful for tasks that include prediction of the next token, the filters must not allow the computation for any given timestep to access information from future timesteps. That is, with filters of width k, each z_t depends only on x_{t-k+1} through x_t. This concept, known as a masked convolution (van den Oord et al., 2016a), is implemented by padding the input to the left by the convolution's filter size minus one.

We apply additional convolutions with separate filter banks to obtain sequences of vectors for the elementwise gates that are needed for the pooling function. While the candidate vectors are passed through a tanh nonlinearity, the gates use an elementwise sigmoid. If the pooling function requires a forget gate f_t and an output gate o_t at each timestep, the full set of computations in the convolutional component is then:

$$Z = \tanh(W_z * X), \qquad F = \sigma(W_f * X), \qquad O = \sigma(W_o * X),$$

where W_z, W_f, and W_o, each in R^{k \times n \times m}, are the convolutional filter banks and * denotes a masked convolution along the timestep dimension. Note that if the filter width is 2, these equations reduce to the LSTM-like

$$z_t = \tanh(W_z^1 x_{t-1} + W_z^2 x_t), \qquad f_t = \sigma(W_f^1 x_{t-1} + W_f^2 x_t), \qquad o_t = \sigma(W_o^1 x_{t-1} + W_o^2 x_t).$$

Convolution filters of larger width effectively compute higher n-gram features at each timestep; thus larger widths are especially important for character-level tasks.
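To make the convolutional component concrete, here is a minimal NumPy sketch of the masked width-k convolution and the three gate computations. The helper names and weight layout are our own choices, not the paper's implementation; a real system would use a framework's optimized 1-D convolution.

```python
# Minimal NumPy sketch of the QRNN convolutional component: a masked (causal)
# width-k convolution producing candidate vectors Z and gates F, O in parallel
# over all timesteps.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def masked_conv(X, W):
    """X: (T, n) inputs; W: (k, n, m) filter bank -> (T, m) outputs."""
    k = W.shape[0]
    Xp = np.vstack([np.zeros((k - 1, X.shape[1])), X])  # pad left by k - 1
    # Each timestep t sees only x_{t-k+1}..x_t, so the convolution is causal.
    return np.stack([
        np.einsum('kn,knm->m', Xp[t:t + k], W) for t in range(X.shape[0])
    ])

def qrnn_conv_component(X, Wz, Wf, Wo):
    Z = np.tanh(masked_conv(X, Wz))     # candidate vectors
    F = sigmoid(masked_conv(X, Wf))     # forget gates
    O = sigmoid(masked_conv(X, Wo))     # output gates
    return Z, F, O
```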
which Balduzzi & Ghifary (2016) term \"dynamic average pooling\", uses only a forget gate:.\nh=fOht-1+(1-ft)OZt\nLSTM CNN QRNN Linear Convolution Convolution LSTM/Linear Max-Pool fo-Pool Linear Convolution Convolution LSTM/Linear Max-Pool fo-Pool\nwhere Wz,Wf, and Wo, each in Rkxnxm, are the convolutional filter banks and * denotes a masked convolution along the timestep dimension. Note that if the filter width is 2, these equations reduce to the LSTM-like\nt = tanh(W,xt- W = A\nwhere O denotes elementwise multiplication. The function may also include an output gate\nCt=ftO Ct-1+(1-ft)O Zi ht = Ot O Ct.\nOr the recurrence relation may include an independent input and forget gate\nCt =ftO Ct-1+it O Zt ht = Ot O Ct\nWe term these three options f-pooling, fo-pooling, and ifo-pooling respectively; in each case w initialize h or c to zero. Although the recurrent parts of these functions must be calculated fc each timestep in sequence, their simplicity and parallelism along feature dimensions means tha in practice, evaluating them over even long sequences requires a negligible amount of computatio time.\nA single QRNN layer thus performs an input-dependent pooling, followed by a gated linear combi nation of convolutional features. As with convolutional neural networks, two or more QRNN layers should be stacked to create a model with the capacity to approximate more complex functions."}, {"section_index": "4", "section_name": "2.1 VARIANTS", "section_text": "Motivated by several common natural language tasks, and the long history of work on related ar- chitectures, we introduce several extensions to the stacked QRNN described above. Notably, many extensions to both recurrent and convolutional models can be applied directly to the QRNN as it combines elements of both model types.\nRegularization An important extension to the stacked QRNN is a robust regularization schem inspired by recent work in regularizing LSTMs..\nThe need for an effective regularization method for LSTMs, and dropout's relative lack of efficacy. when applied to recurrent connections, led to the development of recurrent dropout schemes, in cluding variational inference-based dropout (Gal & Ghahramani, 2016) and zoneout (Krueger et al.,. 2016). These schemes extend dropout to the recurrent setting by taking advantage of the repeating. structure of recurrent networks, providing more powerful and less destructive regularization..\nVariational inference-based dropout locks the dropout mask used for the recurrent connections across timesteps, so a single RNN pass uses a single stochastic subset of the recurrent weights. Zoneout stochastically chooses a new subset of channels to \"zone out\"' at each timestep; for these channels the network copies states from one timestep to the next without modification..\nThus the pooling function itself need not be modified at all. We note that when using an off-the. shelf dropout layer in this context, it is important to remove automatic rescaling functionality from the implementation if it is present. In many experiments, we also apply ordinary dropout between. layers, including between word embeddings and the first QRNN layer..\nDensely-Connected Layers We can also extend the QRNN architecture using techniques intro duced for convolutional networks. For sequence classification tasks, we found it helpful to use skip-connections between every QRNN layer, a technique termed \"dense convolution\" by Huang. et al. (2016). 
Densely-Connected Layers We can also extend the QRNN architecture using techniques introduced for convolutional networks. For sequence classification tasks, we found it helpful to use skip-connections between every QRNN layer, a technique termed "dense convolution" by Huang et al. (2016). Where traditional feed-forward or convolutional networks have connections only between subsequent layers, a "DenseNet" with L layers has feed-forward or convolutional connections between every pair of layers, for a total of L(L - 1)/2. This can improve gradient flow and convergence properties, especially in deeper networks, although it requires a parameter count that is quadratic in the number of layers.

When applying this technique to the QRNN, we include connections between the input embeddings and every QRNN layer and between every pair of QRNN layers. This is equivalent to concatenating each QRNN layer's input to its output along the channel dimension before feeding the state into the next layer. The output of the last layer alone is then used as the overall encoding result.

Figure 2: The QRNN encoder-decoder architecture used for machine translation experiments.

Encoder-Decoder Models To demonstrate the generality of QRNNs, we extend the model architecture to sequence-to-sequence tasks, such as machine translation, by using a QRNN as encoder and a modified QRNN, enhanced with attention, as decoder. The motivation for modifying the decoder is that simply feeding the last encoder hidden state (the output of the encoder's pooling layer) into the decoder's recurrent pooling layer, analogously to conventional recurrent encoder-decoder architectures, would not allow the encoder state to affect the gate or update values that are provided to the decoder's pooling layer. This would substantially limit the representational power of the decoder.

Instead, the output of each decoder QRNN layer's convolution functions is supplemented at every timestep with the final encoder hidden state. This is accomplished by adding the result of the convolution for layer l (e.g., W_z^l * X^l, in R^{T \times m}) with broadcasting to a linearly projected copy of layer l's last encoder state (e.g., V_z^l \tilde{h}_T^l, in R^m):

$$Z^l = \tanh(W_z^l * X^l + V_z^l \tilde{h}_T^l), \qquad F^l = \sigma(W_f^l * X^l + V_f^l \tilde{h}_T^l), \qquad O^l = \sigma(W_o^l * X^l + V_o^l \tilde{h}_T^l),$$

where the tilde denotes that \tilde{h} is an encoder variable. Encoder-decoder models which operate on long sequences are made significantly more powerful with the addition of soft attention (Bahdanau et al., 2015), which removes the need for the entire input representation to fit into a fixed-length encoding vector. In our experiments, we computed an attentional sum of the encoder's last layer's hidden states. We used the dot products of these encoder hidden states with the decoder's last layer's un-gated hidden states, applying a softmax along the encoder timesteps, to weight the encoder states into an attentional sum k_t for each decoder timestep. This context, and the decoder state, are then fed into a linear layer followed by the output gate:

$$\alpha_{st} = \underset{\text{all } s}{\mathrm{softmax}}\left(c_t^L \cdot \tilde{h}_s^L\right), \qquad k_t = \sum_s \alpha_{st} \tilde{h}_s^L, \qquad h_t^L = o_t \odot \left(W_k k_t + W_c c_t^L\right),$$

where L is the last layer. This procedure is closely analogous to the attention mechanism described by Luong et al. (2015) as "global-dot without input feeding".
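A minimal NumPy sketch of this attention step may help make the shapes concrete. It assumes the decoder's un-gated states and the encoder states are precomputed matrices; the variable names follow the equations above, but the function itself is our illustration, not the authors' code.

```python
# Sketch of the decoder attention described above: dot products between the
# decoder's un-gated last-layer states c_t and the encoder's last-layer hidden
# states h~_s, a softmax over encoder timesteps, and a gated linear mix.
import numpy as np

def qrnn_attention(C_dec, H_enc, O, Wk, Wc):
    """C_dec: (T_dec, m) un-gated decoder states; H_enc: (T_enc, m) encoder states."""
    scores = C_dec @ H_enc.T                        # (T_dec, T_enc) dot products
    scores -= scores.max(axis=1, keepdims=True)     # numerically stable softmax
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)       # softmax over encoder steps s
    K = alpha @ H_enc                               # attentional sums k_t
    return O * (K @ Wk.T + C_dec @ Wc.T)            # h_t = o_t * (Wk k_t + Wc c_t)
```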
The reason for avoiding input feeding is to allow the QRNN layers to run in a maximally timestep-parallel way during training, even if they can't during inference; any kind of input feeding would make this impossible, although it would likely result in slightly better translation performance.

While the first step of this attention procedure is quadratic in the sequence length, in practice it takes significantly less computation time than the model's linear and convolutional layers due to the simple and highly parallel dot-product scoring function."}, {"section_index": "5", "section_name": "3 EXPERIMENTS", "section_text": "We evaluate the performance of the QRNN on three different natural language tasks: document-level sentiment classification, language modeling, and character-based neural machine translation. Our QRNN models outperform LSTM-based models of equal hidden size on all three tasks while dramatically improving computation speed. Experiments were implemented in Chainer (Tokui et al.)."}, {"section_index": "6", "section_name": "3.1 SENTIMENT CLASSIFICATION", "section_text": "We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset (Maas et al., 2011). The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words (Wang & Manning, 2012). We compare only to other results that do not make use of additional unlabeled data (thus excluding, e.g., Miyato et al. (2016)).

Small batch sizes and long sequence lengths provide an ideal situation for demonstrating the QRNN's performance advantages over traditional recurrent architectures. This is because traditional architectures are only fully parallel over the batch dimension, while the QRNN parallelizes over batch and timestep dimensions in the convolutional layer and over batch and feature dimensions in the pooling layer. We observed a speedup of 3.2x on IMDb train time per epoch compared to the optimized LSTM implementation provided in NVIDIA's cuDNN library. For specific batch sizes and sequence lengths, a 16x speed gain is possible. Figure 4 provides extensive speed comparisons.

Table 1: Accuracy comparison on the IMDb binary sentiment classification task. All of our models use 256 units per layer; all layers other than the first layer, whose filter width may vary, use filter width k = 2. Train times are reported on a single NVIDIA K40 GPU. We exclude semi-supervised models that conduct additional training on the unlabeled portion of the dataset.

Model | Time / Epoch (s) | Test Acc (%)
NBSVM-bi (Wang & Manning, 2012) | - | 91.2
2 layer sequential BoW CNN (Johnson & Zhang, 2014) | - | 92.3
Ensemble of RNNs and NB-SVM (Mesnil et al., 2014) | - | 92.6
2-layer LSTM (Longpre et al., 2016) | - | 87.6
Residual 2-layer bi-LSTM (Longpre et al., 2016) | - | 90.1
Our models:
Densely-connected 4-layer LSTM (cuDNN optimized) | 480 | 90.9
Densely-connected 4-layer QRNN | 150 | 91.4
Densely-connected 4-layer QRNN with k = 4 | 160 | 91.1

Our best performance on a held-out development set was achieved using a four-layer densely-connected QRNN with 256 units per layer and word vectors initialized using 300-dimensional cased GloVe embeddings (Pennington et al., 2014). Dropout of 0.3 was applied between layers, and we used L2 regularization of 4 x 10^{-6}. Optimization was performed on minibatches of 24 examples
using RMSprop (Tieleman & Hinton, 2012) with learning rate of 0.001, \alpha = 0.9, and \epsilon = 10^{-8}.

In Figure 3, we visualize the hidden state vectors c of the final QRNN layer on part of an example from the IMDb dataset. Even without any post-processing, changes in the hidden state are visible and interpretable in regards to the input. This is a consequence of the elementwise nature of the recurrent pooling function, which delays direct interaction between different channels of the hidden state until the computation of the next QRNN layer."}, {"section_index": "7", "section_name": "3.2 LANGUAGE MODELING", "section_text": "We implemented a gated QRNN model with medium hidden size: 2 layers with 640 units in each layer. Both QRNN layers use a convolutional filter width k of two timesteps. While the "medium" models used in other work (Zaremba et al., 2014; Gal & Ghahramani, 2016) consist of 650 units in each layer, it was more computationally convenient to use a multiple of 32. As the Penn Treebank is a relatively small dataset, preventing overfitting is of considerable importance and a major focus of recent research. It is not obvious in advance which of the many RNN regularization schemes would perform well when applied to the QRNN. Our tests showed encouraging results from zoneout applied to the QRNN's recurrent pooling layer, implemented as described in Section 2.1.

The experimental settings largely followed the "medium" setup of Zaremba et al. (2014). Optimization was performed by stochastic gradient descent (SGD) without momentum. The learning rate was set at 1 for six epochs, then decayed by 0.95 for each subsequent epoch, for a total of 72 epochs. We additionally used L2 regularization of 2 x 10^{-4} and rescaled gradients with norm above 10. Zoneout was applied by performing dropout with ratio 0.1 on the forget gates of the QRNN, without rescaling the output of the dropout function. Batches consist of 20 examples, each 105 timesteps.

Table 2: Single model perplexity on validation and test sets for the Penn Treebank language modeling task. Lower is better. "Medium" refers to a two-layer network with 640 or 650 hidden units per layer. All QRNN models include dropout of 0.5 on embeddings and between layers. MC refers to Monte Carlo dropout averaging at test time.

Model | Parameters | Validation | Test
LSTM (medium) (Zaremba et al., 2014) | 20M | 86.2 | 82.7
Variational LSTM (medium, MC) (Gal & Ghahramani, 2016) | 20M | 81.9 | 79.7
LSTM with CharCNN embeddings (Kim et al., 2016) | 19M | - | 78.9
Zoneout + Variational LSTM (medium) (Merity et al., 2016) | 20M | 84.4 | 80.6
Our models:
LSTM (medium) | 20M | 85.7 | 82.0
QRNN (medium) | 18M | 82.9 | 79.9
QRNN + zoneout (p = 0.1) (medium) | 18M | 82.1 | 78.3

Figure 3: Visualization of the final QRNN layer's hidden state vectors c in the IMDb task, with timesteps along the vertical axis. Colors denote neuron activations. After an initial positive statement "This movie is simply gorgeous" (off graph at timestep 9), timestep 117 triggers a reset of most hidden states due to the phrase "not exactly a bad story" (soon after "main weakness is its story"). Only at timestep 158, after "I recommend this movie to everyone, even if you've never played the
game\", do the hidden units recover.\n500 RNN Sequence length 400 Softmax 32 64 128 256 512 (ypgeq/sw) Optimization Overhead 8 5.5x 8.8x 11.0x 12.4x 16.9x 300 az!s 1 16 5.5x 6.7x 7.8x 8.3x 10.8x 32 4.2x 4.5x 4.9x 4.9x 200 6.4x Tmee aatth 64 3.0x 3.0x 3.0x 3.0x 3.7x 128 2.1x 1.9x 100 2.0x 2.0x 2.4x 256 1.4x 1.4x 1.3x 1.3x 1.3x 0 LSTM LSTM (cuDNN) QRNN\n400 (ynaeq/sw) fmee 300 200 100 0\nWithout zoneout, early stopping based upon validation loss was required as the QRNN would be-. gin overfitting. By applying a small amount of zoneout (p = 0.1), no early stopping is required and the QRNN achieves competitive levels of perplexity to the variational LSTM of Gal & Ghahra- mani (2016), which had variational inference based dropout of 0.2 applied recurrently. Their best performing variation also used Monte Carlo (MC) dropout averaging at test time of 1000 different. masks, making it computationally more expensive to run..\nWhen training on the PTB dataset with an NVIDIA K40 GPU, we found that the QRNN is sub- stantially faster than a standard LSTM, even when comparing against the optimized cuDNN LSTM In Figure 4 we provide a breakdown of the time taken for Chainer's default LSTM, the cuDNN LSTM, and QRNN to perform a full forward and backward pass on a single batch during training of the RNN LM on PTB. For both LSTM implementations, running time was dominated by the RNN computations, even with the highly optimized cuDNN implementation. For the QRNN implementa- tion, however, the \"RNN\" layers are no longer the bottleneck. Indeed, there are diminishing returns from further optimization of the QRNN itself as the softmax and optimization overhead take equal or greater time. Note that the softmax, over a vocabulary size of only 10,o00 words, is relatively small; for tasks with larger vocabularies, the softmax would likely dominate computation time.\nIt is also important to note that the cuDNN library's RNN primitives do not natively support any form of recurrent dropout. That is, running an LSTM that uses a state-of-the-art regularization scheme at cuDNN-like speeds would likely require an entirely custom kernel..\nWe evaluate the sequence-to-sequence QRNN architecture described in 2.1 on a challenging neu ral machine translation task, IWsLT German-English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters ir English or German, and use a unified vocabulary of 187 Unicode code points.\nOur best performance on a development set (TED.tst2013) was achieved using a four-layer encoder-. decoder QRNN with 320 units per layer, no dropout or L2 regularization, and gradient rescaling. to a maximum magnitude of 5. Inputs were supplied to the encoder reversed, while the encoder convolutions were not masked. The first encoder layer used convolutional filter width k = 6, while\nFigure 4: Left: Training speed for two-layer 640-unit PTB LM on a batch of 20 examples of 105 timesteps. \"RNN\" and \"softmax\" include the forward and backward times, while \"optimization overhead\"' includes gradient clipping, L2 regularization, and SGD computations. Right: Inference speed advantage of a 320-unit QRNN layer alone over an equal-sized cuDNN LSTM layer for data with the given batch size and sequence length. 
the other encoder layers used k = 2. Optimization was performed for 10 epochs on minibatches of 16 examples using Adam (Kingma & Ba, 2014) with \alpha = 0.001, \beta_1 = 0.9, \beta_2 = 0.999, and \epsilon = 10^{-8}. Decoding was performed using beam search with beam width 8 and length normalization \alpha = 0.6. The modified log-probability ranking criterion is provided in the appendix.

Results using this architecture were compared to an equal-sized four-layer encoder-decoder LSTM with attention, applying dropout of 0.2. We again optimized using Adam; other hyperparameters were equal to their values for the QRNN and the same beam search procedure was applied. Table 3 shows that the QRNN outperformed the character-level LSTM, almost matching the performance of a word-level attentional baseline.

Table 3: Translation performance, measured by BLEU, and train speed in hours per epoch, for the IWSLT German-English spoken language translation task. All models were trained on in-domain data only, and use negative log-likelihood as the training criterion. Our models were trained for 10 epochs. The QRNN model uses k = 2 for all layers other than the first encoder layer.

Model | Train Time | BLEU (TED.tst2014)
Word-level LSTM w/attn (Ranzato et al., 2016) | - | 20.2
Word-level CNN w/attn, input feeding (Wiseman & Rush, 2016) | - | 24.0
Char-level ByteNet¹ | - | 24.7
Our models:
Char-level 4-layer LSTM | 4.2 hrs/epoch | 16.53
Char-level 4-layer QRNN with k = 6 | 1.0 hrs/epoch | 19.41"}, {"section_index": "8", "section_name": "4 RELATED WORK", "section_text": "Exploring alternatives to traditional RNNs for sequence tasks is a major area of current research. Quasi-recurrent neural networks are related to several such recently described models, especially the strongly-typed recurrent neural networks (T-RNN) introduced by Balduzzi & Ghifary (2016). While the motivation and constraints described in that work are different, Balduzzi & Ghifary (2016)'s concepts of "learnware" and "firmware" parallel our discussion of convolution-like and pooling-like subcomponents. As the use of a fully connected layer for recurrent connections violates the constraint of "strong typing", all strongly-typed RNN architectures (including the T-RNN, T-GRU, and T-LSTM) are also quasi-recurrent. However, some QRNN models (including those with attention or skip-connections) are not "strongly typed". In particular, a T-RNN differs from a QRNN as described in this paper with filter size 1 and f-pooling only in the absence of an activation function on z. Similarly, T-GRUs and T-LSTMs differ from QRNNs with filter size 2 and fo- or ifo-pooling respectively in that they lack tanh on z and use tanh rather than sigmoid on o.

The PixelCNN model (van den Oord et al., 2016a) was the first to tackle a sequence prediction problem (in particular, the computer vision equivalent of language modeling) using masked convolutions in place of recurrent units.
Like the QRNN, the PixelCNN architecture allows for highly parallel computation whenever the whole input is available ahead of time (e.g., during training). In order to enable conditional image generation (a setting similar to the QRNN encoder-decoder), the outputs of the convolutions in a PixelCNN can be augmented with a term that depends on the encoder state, while better generation performance was obtained by adding an elementwise gate to the model output (van den Oord et al., 2016b). The PixelCNN, however, relies on depth and large filter sizes to provide long-term context dependence; unlike the QRNN, the gating mechanism is not recurrent.

¹Unpublished result from NIPS 2016 tutorial by Nal Kalchbrenner, given after the submission of this paper (slides at https://drive.google.com/file/d/0B7jhGCaUwDJezwzwuxJ4cktxVU0/view). See Related Work for discussion.

Another related sequence model is the query-reduction network introduced by Seo et al. (2016). Such a network without its query component could be rewritten as a QRNN with filter size 1, while the full QRN is similar to a single layer of the decoder component of our sequence-to-sequence architecture.

The QRNN encoder-decoder model shares the favorable parallelism and path-length properties exhibited by the ByteNet (Kalchbrenner et al., 2016), a PixelCNN-like architecture for character-level machine translation based on residual convolutions over binary trees. Their model was constructed to achieve three desired properties: parallelism, linear-time computational complexity, and short paths between any pair of words in order to better propagate gradient signals. While the ByteNet outperforms the QRNN encoder-decoder by about five BLEU points on the IWSLT dataset, it is unclear how much of this difference can be attributed to the overall ByteNet model architecture, as opposed to the many other contributions of that paper, like residual multiplicative blocks or sub-batch normalization.

The QRNN is also related to work in hybrid convolutional-recurrent models. Zhou et al. (2015) apply CNNs at the word level to generate n-gram features used by an LSTM for text classification. Xiao & Cho (2016) also tackle text classification by applying convolutions at the character level, with a stride to reduce sequence length, then feeding these features into a bidirectional LSTM. A similar approach was taken by Lee et al. (2016) for character-level machine translation. Their model's encoder uses a convolutional layer followed by max-pooling to reduce sequence length, a four-layer highway network, and a bidirectional GRU. The parallelism of the convolutional, pooling, and highway layers allows training speed comparable to subword-level models without hard-coded text segmentation."}, {"section_index": "9", "section_name": "5 CONCLUSION", "section_text": "Intuitively, many aspects of the semantics of long sequences are context-invariant and can be computed in parallel (e.g., convolutionally), but some aspects require long-distance context and must be computed recurrently. Many existing neural network architectures either fail to take advantage of the contextual information or fail to take advantage of the parallelism. QRNNs exploit both parallelism and context, exhibiting advantages from both convolutional and recurrent neural networks. QRNNs have better predictive accuracy than LSTM-based models of equal hidden size, even though they use fewer parameters and run substantially faster. Our experiments show that the speed and accuracy
advantages remain consistent across tasks and at both word and character levels.

Extensions to both CNNs and RNNs are often directly applicable to the QRNN, while the model's hidden states are more interpretable than those of other recurrent architectures as its channels maintain their independence across timesteps. We believe that QRNNs can serve as a building block for long-sequence tasks that were previously impractical with traditional RNNs."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

David Balduzzi and Muhammad Ghifary. Strongly-typed recurrent neural networks. In ICML, 2016.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016.

Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.

Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058, 2014.

Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In ICML, 2016.

Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.

Shayne Longpre, Sabeek Pradhan, Caiming Xiong, and Richard Socher. A way out of the odyssey: Analyzing and combining recent insights for LSTMs. Submitted to ICLR, 2016.

M. T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015.

Andrew L Maas, Andrew Y Ng, and Christopher Potts. Multi-dimensional sentiment analysis with learned representations. Technical report, 2011.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In ICLR, 2016.

Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. Query-reduction networks for question answering. arXiv preprint arXiv:1606.04582, 2016.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.

Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016b.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models.
arXiv preprint arXiv:1609.07843, 2016.

Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016.

Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016.

Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.

Xin Wang, Yuanchao Liu, Chengjie Sun, Baoxun Wang, and Xiaolong Wang. Predicting polarities of tweets by composing word embeddings with long short-term memory. In ACL, 2015.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016."}, {"section_index": "11", "section_name": "APPENDIX", "section_text": "The modified log-probability ranking criterion we used in beam search for translation experiments is:

$$\log(p_{\text{cand}}) = \frac{T_{\text{trg}} + \alpha}{T_{\text{trg}} \, T^{\alpha}} \sum_{i=1}^{T} \log p(w_i \mid w_1 \dots w_{i-1}),$$

where \alpha is a length normalization parameter (Wu et al., 2016), w_i is the ith output character, and T_trg is a "target length" equal to the source sentence length plus five characters. This reduces at \alpha = 0 to ordinary beam search with probabilities:

$$\log(p_{\text{cand}}) = \sum_{i=1}^{T} \log p(w_i \mid w_1 \dots w_{i-1}),$$

and at \alpha = 1 to beam search with probabilities normalized by length (up to the target length):

$$\log(p_{\text{cand}}) = \frac{1}{T} \sum_{i=1}^{T} \log p(w_i \mid w_1 \dots w_{i-1}).$$

Conveniently, this ranking criterion can be computed at intermediate beam-search timesteps, obviating the need to apply a separate reranking on complete hypotheses.
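Rendered as code, the ranking criterion is a single scoring function. One caveat: the displayed equation did not survive extraction cleanly, so the normalization factor below reflects our reconstruction (chosen to agree with the stated α = 0 and α = 1 limits), not a verified transcription of the paper's formula.

```python
# Sketch of the length-normalized beam-search ranking criterion as
# reconstructed above; treat the exact normalization factor as an assumption.
def ranked_log_prob(token_log_probs, src_len, alpha=0.6):
    """Score a candidate from its per-token log probabilities."""
    T = len(token_log_probs)
    T_trg = src_len + 5                  # "target length": source length + 5 chars
    factor = (T_trg + alpha) / (T_trg * T ** alpha)
    # alpha = 0 -> plain sum of log probs; alpha = 1 -> roughly (1/T) * sum.
    return factor * sum(token_log_probs)
```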
"}, {"section_index": "12", "section_name": "RESULTS ON COPY AND ADDITION TASKS", "section_text": "We take the addition task to mean sequence-to-sequence decimal addition of unaligned, variable-length numbers (the hardest of several versions). An example of this task for maximum length 5 digits is "73952+9462" -> "83414". We train on n_train randomly generated examples for up to 100 epochs with early stopping, and report the smallest model setup that achieves > 99% character-level validation accuracy, or the best validation accuracy achieved by any model setup if none achieve 99%.

For n_digits = 5 and n_train = 100000, the QRNN converges with models larger than 3 layers of 256 units, while in our experiments LSTMs require only 2 layers of 128 units. For n_digits = 10 and n_train = 100000, an LSTM reaches 98.5% with 3 layers of 1024 units, while the best QRNN model (4 layers of 512 units) reaches only 95.0%.

The copy task is similarly implemented as sequence-to-sequence reconstruction of variable-length decimal numbers. For 5 digits, an example is "23532" -> "23532". We train on 10000 randomly generated examples for up to 100 epochs. For n_digits = 5, the QRNN converges with models larger than 2 layers of 32 units or 1 layer of 256 units, while the LSTM requires only 1 layer of 32 units. For n_digits = 10, the QRNN requires a model larger than 2 layers of 128 units while the LSTM requires a model of at least 2 layers of 64 units or 1 layer of 256 units. For n_digits = 40, a QRNN with 5 layers of 512 units reaches 98.0% while the best LSTM model, with 3 layers of 512 units, only reaches 95.5%.

A deeper LSTM would likely need some kind of residual or highway connections to converge on this task, while the deep QRNN converges relatively well despite our earlier experiences with 4-layer QRNNs without dense connections not converging successfully on the sentiment task."}]
HyAddcLge
[{"section_index": "0", "section_name": "REVISITING DISTRIBUTED SYNCHRONOUS SGD", "section_text": "Jianmin Chen* Xinghao Pan* Rajat Monga, Samy Bengio\njmchen, xinghao, rajatmonga, bengio}@google.con"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The recent success of deep learning approaches for domains like speech recognition (Hinton et al.. 2012) and computer vision (Ioffe & Szegedyl2015) stems from many algorithmic improvements but also from the fact that the size of available training data has grown significantly over the years together with the computing power, in terms of both CPUs and GPUs. While a single GPU oftei provides algorithmic simplicity and speed up to a given scale of data and model, there exist ar operating point where a distributed implementation of training algorithms for deep architectures. becomes necessary.\nCurrently, popular distributed training algorithms include mini-batch versions of stochastic gradien. descent (SGD) and other stochastic optimization algorithms such as AdaGrad (Duchi et al.|2011). RMSProp (Tieleman & Hinton2012), and ADAM (Kingma & Ba2014).Unfortunately, bulk synchronous implementations of stochastic optimization are often slow in practice due to the neec to wait for the slowest machine in each synchronous batch. To circumvent this problem, practi. tioners have resorted to asynchronous approaches which emphasize speed by using potentially stale. information for computation. While asynchronous training have proven to be faster than their syn. chronous counterparts, they often result in convergence to poorer results..\nIn this paper'] we revisit synchronous learning, and propose a method for mitigating stragglers in synchronous stochastic optimization. Specifically, we synchronously compute a mini-batch gradient with only a subset of worker machines, thus alleviating the straggler effect while avoiding any staleness in our gradients. The primary contributions of our paper are:\nRafal Jozefowicz\nSan Francisco. CA. USA"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Distributed training of deep learning models on large-scale training data is typi- cally conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony. In con- trast, the synchronous approach is often thought to be impractical due to idle time wasted on waiting for straggling workers. We revisit these conventional beliefs in this paper, and examine the weaknesses of both approaches. We demonstrate that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating for the worst stragglers. Our approach is empirically validated and shown to converge faster and to better test accuracies.\nIllustration of how gradient staleness in asynchronous training negatively impacts test ac- curacy and is exacerbated by deep models. Measurement of machine response times for synchronous stochastic optimization in a large deployment of 100 GPUs, showing how stragglers in the tail end affect convergence speed. Proposal of synchronous stochastic optimization with backup workers to mitigate straggler. effects without gradient staleness. Establishing the need to measure both speed of convergence and test accuracy of optimum for empirical validation.\nThe remainder of this paper is organized as follows. 
We briefly present preliminaries and notation in Section 1.1. Section 2 describes asynchronous stochastic optimization and presents experimental evidence of gradient staleness in deep neural network models. We present our approach in Section 3 and exhibit the straggler effects that motivate it. We then empirically evaluate our approach in Section 4. Related work is discussed in Section 5, and we conclude in Section 6.

Our interest is in distributed stochastic optimization using N worker machines in charge of computing stochastic gradients that are sent to M parameter servers. Each parameter server j is responsible for storing a subset θ[j] of the model, and for performing updates on θ[j]. In the synchronous setting, we will also introduce b additional backup workers for straggler mitigation.

Suppose we wish to minimize an objective given by the average loss F(x; θ) over datapoints x. A first-order stochastic optimization algorithm achieves this by iteratively updating θ using a stochastic gradient G = ∇F(x_i; θ) computed at a randomly sampled x_i, producing a sequence of models θ^(0), θ^(1), .... Stochastic optimization algorithms differ in their update equations. For example, the update of SGD is

θ^(t+1) = θ^(t) − γ_t G^(t) = θ^(t) − γ_t ∇F(x_i; θ^(t)),

where γ_t is the learning rate or step size at iteration t. A mini-batch version of the stochastic optimization algorithm computes the stochastic gradient over a mini-batch of size B instead of a single datapoint, i.e., G^(t) = (1/B) Σ_{i=1}^{B} ∇F(x_i; θ^(t)). We evaluate performance on an exponential moving average θ̄^(t) = α θ̄^(t−1) + (1 − α) θ^(t) with decay rate α.

An approach for a distributed stochastic gradient descent algorithm was presented in Dean et al. (2012), consisting of two main ingredients. First, the parameters of the model are distributed on multiple servers, depending on the architecture. This set of servers is called the parameter servers. Second, there can be multiple workers processing data in parallel and communicating with the parameter servers. Each worker processes a mini-batch of data independently of the others, as follows:

The worker fetches from the parameter servers the most up-to-date parameters of the model needed to process the current mini-batch;
It then computes gradients of the loss with respect to these parameters;
Finally, these gradients are sent back to the parameter servers, which then update the model accordingly.

Since each worker communicates with the parameter servers independently of the others, this is called Asynchronous Stochastic Gradient Descent (Async-SGD), or more generally, Asynchronous Stochastic Optimization (Async-Opt). A similar approach was later proposed by Chilimbi et al. (2014). Async-Opt is presented in Algorithms 1 and 2.

In practice, the updates of Async-Opt differ from those of serially running the stochastic optimization algorithm for two reasons. Firstly, the read operation (Algorithm 1, Line 2) on a worker may be interleaved with updates by other workers to different parameter servers, so the resultant θ̂_k may not be consistent with any parameter incarnation θ^(t). Secondly, model updates may have occurred while a worker is computing its stochastic gradient; hence, the resultant gradients are typically computed with respect to outdated parameters. We refer to these as stale gradients, and to their staleness as the number of updates that have occurred between the corresponding read and update operations.
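Before the formal pseudocode in Algorithms 1 and 2, here is a minimal, single-process sketch of this worker/parameter-server interaction. All names are hypothetical illustrations, not the paper's code: threads play the role of workers racing to read and update a shared parameter vector, which is exactly what makes their gradients stale.

import threading
import numpy as np

theta = np.zeros(10)      # shared parameters (a single "parameter server")
gamma = 0.1               # learning rate
lock = threading.Lock()   # updates are atomic; reads are deliberately unlocked

def grad(theta_local):
    # stand-in for a mini-batch gradient; the toy loss is (theta - 1)^2
    return 2.0 * (theta_local - 1.0)

def worker(num_steps):
    for _ in range(num_steps):
        theta_local = theta.copy()   # read (possibly already stale)
        g = grad(theta_local)        # gradient w.r.t. the old read
        with lock:
            theta[:] -= gamma * g    # the update may land many steps later

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(theta)   # converges near 1.0, but through noisy, stale updates

With a single thread this reduces to serial SGD; with several, each applied gradient was computed against parameters that other workers have since moved.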
Algorithm 1: Async-SGD worker k
Input: dataset X; mini-batch size B
1: while True do
2:   Read θ̂_k = (θ[0], ..., θ[M]) from the parameter servers
3:   G_k := 0
4:   for i = 1, ..., B do
5:     Sample datapoint x_i from X
6:     G_k ← G_k + (1/B) ∇F(x_i; θ̂_k)
7:   end for
8:   Send G_k to the parameter servers
9: end while

Algorithm 2: Async-SGD parameter server j
Input: γ_0, γ_1, ... learning rates; α decay rate; θ^(0) model initialization
1: for t = 0, 1, ... do
2:   Wait for gradient G from any worker
3:   θ^(t+1)[j] ← θ^(t)[j] − γ_t G[j]
4:   θ̄^(t)[j] = α θ̄^(t−1)[j] + (1 − α) θ^(t)[j]
5: end for

Figure 1: Gradient staleness dependence on model layer. Gradients are computed in a bottom-up forward propagation step followed by a top-down back propagation step. Parameters are read from servers in the forward prop, but gradients are sent to servers during the back prop. Thus, gradients of lower layers are more stale than those of top layers.

Despite the abovementioned problems, Async-Opt has been shown to scale well up to a few dozen workers for some models. However, at larger scales, increasing the number of machines (and thus the staleness of gradients) can result in poorer trained models.

Async-Opt has been analyzed in, e.g., De Sa et al. (2015) and Mania et al. (2015), most of which focus on individual algorithms, under strong assumptions that may not hold up in practice. This is further complicated by deep models with multiple layers, since the times at which model parameters are read and at which gradients are computed and sent depend on the depth of the layers (Figure 1). To better understand this dependence in real models, we collected staleness statistics from an Async-Opt run with 40 workers on an 18-layer Inception model (Szegedy et al., 2016) trained on the ImageNet Challenge dataset (Russakovsky et al., 2015), as shown in Table 1.

Layer   Min   Mean    Median   Max   Std Dev   Count
18      4     14.54   13.94    29    3.83      10908
12      5     11.35   11.30    23    3.09      44478
11      8     19.80   19.59    34    3.65      187
0       24    38.97   38.43    61    5.43      178

Table 1: Staleness of gradients in an 18-layer Inception model. Gradients were collected in a run of asynchronous training using 40 machines. Staleness of a gradient is measured as the number of updates that have occurred between its corresponding read and update operations. The staleness of gradients increases from a mean of ~14.5 in the top layer (Layer 18) to ~39.0 in the bottom layer (Layer 0).

We explore how increased staleness contributes to the training of poorer models. In order to mimic the setting on a smaller scale, we trained a state-of-the-art MNIST CNN model but simulated staleness by using old gradients for the parameter updates. Details of the model and training are provided in Appendix A.1.

The best final classification error on a test set was 0.36%, which increases to 0.47% with an average gradient staleness of 20 steps, and up to 0.79% with 50 steps (see Figure 2).

Figure 2: Degradation of test classification error with increasing average gradient staleness in the MNIST CNN model.

Once the average simulated staleness was chosen to be more than 15 steps, the results started to significantly deteriorate and the training itself became much less stable.
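To make the simulated-staleness protocol concrete, the following is a minimal hypothetical sketch (not the paper's code): a single training loop on a toy quadratic loss applies each gradient only after it has been delayed by a fixed number of steps.

from collections import deque
import numpy as np

def train_with_staleness(staleness, steps=500, gamma=0.05):
    theta = np.zeros(5)
    queue = deque()                           # gradients waiting to be applied
    for t in range(steps):
        g = 2.0 * (theta - 1.0)               # gradient at the current parameters
        queue.append(g)
        if len(queue) > staleness:            # apply a gradient computed
            theta -= gamma * queue.popleft()  # `staleness` steps ago
    return np.abs(theta - 1.0).max()          # distance to the optimum at 1.0

for s in (0, 10, 20, 50):
    print(s, train_with_staleness(s))

Even in this toy dynamics, delays beyond roughly 15 steps already produce oscillation and blow-up, qualitatively matching the instability reported above.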
We had to employ the following tricks to prevent the results from blowing up:

Slowly increase the staleness over the first 3 epochs of training. This mimics increasing the number of asynchronous workers, and is also very important in practice for some of the models we experimented with (e.g. large word-level language models). The trick was not relevant with a simulated staleness of less than 15, but became crucial for larger values.
Use lower initial learning rates when the staleness is at least 20, which reduces the frequency of explosions (where the training error jumps to 90%). This observation is similar to what we found in other experiments: we were able to use much larger learning rates with synchronous training, and the results were also more stable.
Even with the above tricks, divergence occurs occasionally, and we found that restarting training from random weights can lead to more successful runs. The best results were then chosen based on validation set performance.
"}, {"section_index": "3", "section_name": "REVISITING SYNCHRONOUS STOCHASTIC OPTIMIZATION", "section_text": "Both Dean et al. (2012) and Chilimbi et al. (2014) use versions of Async-SGD where the main potential problem is that each worker computes gradients over a potentially old version of the model. In order to remove this discrepancy, we propose here to reconsider a synchronous version of distributed stochastic gradient descent (Sync-SGD), or more generally, Synchronous Stochastic Optimization (Sync-Opt), where the parameter servers wait for all workers to send their gradients, aggregate them, and send the updated parameters to all workers afterward. This ensures that the actual algorithm is a true mini-batch stochastic gradient descent, with an effective batch size equal to the sum of all the mini-batch sizes of the workers.

While this approach solves the staleness problem, it also introduces the potential problem that the actual update time now depends on the slowest worker. Although workers have equivalent computation and network communication workloads, slow stragglers may result from failing hardware, from contention on shared underlying hardware resources in data centers, or even from preemption by other jobs.

To alleviate the straggler problem, we introduce backup workers (Dean & Barroso, 2013) as follows: instead of having only N workers, we add b extra workers, but as soon as the parameter servers receive gradients from any N workers, they stop waiting and update their parameters using these N gradients. The gradients of the slowest b workers will be dropped when they arrive. Our method is presented in Algorithms 3 and 4.

Algorithm 3: Sync-SGD worker k, where k = 1, ..., N + b

Algorithm 4: Sync-SGD parameter server j
"}, {"section_index": "4", "section_name": "3.1 STRAGGLER EFFECTS", "section_text": "The use of backup workers is motivated by the need to mitigate slow stragglers while maximizing computation. We investigate the effect of stragglers on Sync-Opt model training here.

We ran Sync-Opt with N = 100 workers, b = 0 backups, and 19 parameter servers on the Inception model. Using one variable as a proxy, we collected for each iteration both the start time of the iteration and the time when the kth gradient of that variable arrived at the parameter server. These times are presented in Figure 3 for k = 1, 50, 90, 97, 98, 99, 100. Note that 80% of the 98th gradient arrives in under 2s, whereas only 30% of the final gradient does.
Furthermore, the time to collect the final few gradients grows exponentially, resulting in wasted idle resources and time spent waiting for the slowest gradients. This exponential increase is also seen in Figure 4.

Figure 3: CDF of the time taken to aggregate gradients from N machines. For clarity, we only show times of up to 6s; the maximum observed time is 310s.

Figure 4: Mean and median times, across all iterations, to collect k gradients on N = 100 workers and b = 0 backups. Most mean times fall between 1.4s and 1.8s, except for the final few gradients.

Thus, one might choose to drop slow stragglers to decrease the iteration time. However, using fewer machines implies a smaller effective mini-batch size and thus greater gradient variance, which in turn could require more iterations for convergence. We examine this relationship by running Sync-Opt with N = 50, 70, 80, 90, 100 and b = 6,² and note the number of iterations required for convergence in Figure 5. Additional details of this training are provided in Appendix A.2. As N is doubled from 50 to 100, the number of iterations to converge nearly halves, from 137.5e3 to 76.2e3.

²Since we are interested in the gradient quality and convergence behavior but not the running time in this experiment, the backups serve only to reduce our data collection time but do not affect our analysis.

Figure 5: Number of iterations to converge when aggregating gradients from N machines.

Figure 6: Estimated time to converge when aggregating gradients from N machines in an N + b = 100 machine configuration. Convergence is fastest when choosing N = 96, b = 4.

Hence, there is a trade-off between dropping more stragglers to reduce iteration time, and waiting for more gradients to improve the gradient quality.
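Recalling Algorithms 3 and 4, the aggregation rule is simply: wait for the first N gradients to arrive out of N + b, and drop the rest. Below is a hypothetical numpy sketch of how this shortens the critical path; the arrival-time distribution and tail probabilities are invented for illustration, loosely shaped like the measurements above.

import numpy as np

rng = np.random.default_rng(0)

def mean_step_time(n_workers, n_backup, trials=10000):
    # base time plus a light exponential spread, with occasional slow machines
    times = 1.4 + rng.exponential(0.1, size=(trials, n_workers + n_backup))
    times[rng.random(times.shape) < 0.01] += 5.0   # 1% straggler events
    # the server proceeds once the N-th fastest gradient has arrived
    return np.sort(times, axis=1)[:, n_workers - 1].mean()

for n, b in ((100, 0), (96, 4), (90, 10)):
    print(n, b, mean_step_time(n, b))

With b = 0 the step time is dominated by the slowest of 100 machines; with even a few backups, the rare stragglers almost never sit on the critical path.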
Consider a hypothetical setting where we have N + b = 100 machines, and we wish to choose the best configuration of N and b to minimize the running time to convergence. For each configuration, we can estimate the iterations required from Figure 5 (linearly interpolating for values of N for which we did not collect data). We can multiply this by the mean iteration times (Figure 4) to obtain the running time required to converge for each setting of N and b. These results are shown in Figure 6, indicating that N = 96, b = 4 converges fastest. Therefore, this motivates our choice to use a few backup workers for mitigating stragglers.
"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we present our empirical comparisons of synchronous and asynchronous distributed stochastic optimization algorithms as applied to models such as Inception and PixelCNN. All experiments in this paper use the TensorFlow system (Abadi et al., 2015).

We are interested in two metrics of comparison for our empirical validation: (1) test error or accuracy, and (2) speed of convergence.³ We point out that for non-convex deep learning models, it is possible to converge faster to a poorer local optimum. Here we show a simple example with Inception using different learning rates.

³Convergence is defined as the point where maximum test accuracy or lowest test error is reached.

We ran Sync-Opt on Inception with N = 100 and b = 6, but varied the initial learning rate γ0 between 1.125 and 9.0. (Learning rates are exponentially decreased with iterations.) Table 2 shows that smaller γ0 converge faster, but to poorer test precisions. Focusing on speed in an early phase of training could lead to misleading conclusions if we fail to account for eventual convergence. For example, Figure 7b shows that using γ0 = 1.125 reaches ε = 75% precision 1.5x faster than γ0 = 4.5, but is slower for ε = 77.75%, and fails to reach higher precisions.

Initial learning rate γ0   Test precision at convergence   Epochs to converge
1.125                      77.29%                          52628
2.25                       77.75%                          65811
4.5                        78.15%                          76209
9.0                        78.17%                          77235

Table 2: Test accuracies at convergence and number of epochs to converge for different initial learning rates γ0. Low initial learning rates result in faster convergence to a poorer local optimum.

Figure 7: Convergence of Sync-Opt on the Inception model using N = 100 workers and b = 6 backups, with varying initial learning rates γ0. To reach a lower ε test precision, small γ0's require fewer epochs than large γ0's. However, small γ0's either fail to attain high ε precision, or take more epochs than higher γ0's. (a) Convergence. (b) Epochs to ε test precision.
"}, {"section_index": "6", "section_name": "4.2 INCEPTION", "section_text": "We conducted experiments on the Inception model (Szegedy et al., 2016) trained on the ImageNet Challenge dataset (Russakovsky et al., 2015), where the task is to classify images into one of 1000 categories. We used several configurations, varying N + b from 53 to 212 workers. Additional details of the training are provided in Appendix A.3. An epoch is a synchronous iteration for Sync-Opt, or a full pass of N updates for Async-Opt, which represent similar amounts of computation. Results of this experiment are presented in Figure 8.

Figure 8b shows that Sync-Opt outperforms Async-Opt in test precision: Sync-Opt attains ~0.5% better test precision than Async-Opt for comparable N + b workers. Furthermore, Sync-Opt converges 6h and 18h faster than Async-Opt for 106 and 212 workers respectively, and is 3h slower when 53 workers are used, as seen in Figure 8d. This difference in speed is largely due to the fewer epochs (Figure 8c) needed by Sync-Opt, and comparable or better epoch time (Figure 8e).

Figure 8: Convergence of Sync-Opt and Async-Opt on the Inception model using varying numbers of machines. Sync-Opt with backup workers converges faster, with fewer epochs, to higher test accuracies. (a) Convergence. (b) Test precision @ 1. (c) Epochs to converge. (d) Time to converge. (e) Mean epoch time.
"}, {"section_index": "7", "section_name": "4.3 PIXELCNN EXPERIMENTS", "section_text": "The second model we experimented on is PixelCNN (Oord et al., 2016), a conditional image generation deep neural network, which we train on the CIFAR-10 (Krizhevsky & Hinton, 2009) dataset. Configurations of N + b = 1, 8, 16 workers were used; for Sync-Opt, we always used b = 1 backup worker. Additional details are provided in Appendix A.4.

Convergence of the test negative log likelihood (NLL) on PixelCNN is shown in Figure 9a, where lower is better. Observe that Sync-Opt obtains lower NLL than Async-Opt; in fact, Async-Opt is even outperformed by serial RMSProp with N = 1 worker, with performance degrading as N increases from 8 to 16. Figure 9b further shows the time taken to reach ε test NLL. Sync-Opt reduces the time to reach ε = 2.145 from 247h to 58.3h; this NLL is not even achieved by Async-Opt.

Figure 9: Convergence of synchronous and asynchronous training on the PixelCNN model. Sync-Opt achieves lower negative log likelihood in less time than Async-Opt. (a) Min NLL attained. (b) Time to reach ε test NLL.
"}, {"section_index": "8", "section_name": "5 RELATED WORK", "section_text": "An alternative solution, \"softsync\", was presented in Zhang et al. (2015b), which proposed batching gradients from multiple machines before performing an asynchronous SGD update, thereby reducing the effective staleness of gradients. Similar to our proposal, softsync avoids stragglers by not forcing updates to wait for the slowest worker. However, softsync allows the use of stale gradients, while we do not. The two solutions provide different explorations of the trade-off between high accuracy (by minimizing staleness) and fast throughput (by avoiding stragglers).

Watcharapichat et al. (2016) introduce a distributed deep learning system without parameter servers, by having workers interleave gradient computation and communication in a round-robin pattern. Like Async-Opt, this approach suffers from staleness. We also note that, in principle, workers in Sync-Opt can double as parameter servers, execute the update operations, and avoid the need to partition hardware resources between workers and servers.

Das et al. (2016) analyze distributed stochastic optimization and optimize the system by solving detailed system balance equations. We believe this approach is complementary to our work, and could potentially be applied to guide the choice of system configurations for Sync-Opt.

Keskar et al. (2016) suggest that large batch sizes for synchronous stochastic optimization lead to poorer generalization.
Our effective batch size increases linearly with the number of workers N. However, we did not observe this effect in our experiments; we believe we are not yet in the large-batch-size regime examined by Keskar et al. (2016).

Distributed training strategies for deep learning architectures will become ever more important as the size of datasets increases. In this work, we have shown how both synchronous and asynchronous distributed stochastic optimization suffer from their respective weaknesses of stragglers and staleness. This has motivated our development of synchronous stochastic optimization with backup workers, which we show to be a viable and scalable strategy.

We are currently experimenting with different kinds of datasets, including word-level language models where parts of the model (the embedding layers) are often very sparse, which involves very different communication constraints. We are also working on further improving the performance of synchronous training, for example by combining gradients from multiple workers sharing the same machine before sending them to the parameter servers, to reduce the communication overhead. An alternative of using time-outs instead of backup workers is also being explored.
"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981, 2016.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

John Duchi, Michael I Jordan, and Brendan McMahan. Estimation, optimization, and parallelism when data is sparse. In Advances in Neural Information Processing Systems, pp. 2832-2840, 2013.

G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29:82-97, 2012.

S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Mu Li, David G Andersen, Jun Woo Park, Alexander J Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J Shekita, and Bor-Yiing Su. Scaling distributed machine learning with the parameter server. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pp. 583-598, 2014.

Horia Mania, Xinghao Pan, Dimitris Papailiopoulos, Benjamin Recht, Kannan Ramchandran, and Michael I Jordan. Perturbed iterate analysis for asynchronous stochastic optimization. arXiv preprint arXiv:1507.06970, 2015.

Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016.

Remi Leblond, Fabian Pedregosa, and Simon Lacoste-Julien. Asaga: Asynchronous parallel saga. arXiv preprint arXiv:1606.04809, 2016.
Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 693-701, 2011.

Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pp. 685-693, 2015a.

Yuchen Zhang and Michael I Jordan. Splash: User-friendly programming interface for parallelizing stochastic algorithms. arXiv preprint arXiv:1506.07552, 2015.

Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 2595-2603, 2010.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
"}, {"section_index": "10", "section_name": "A.1 MNIST CNN, SECTION 2.1", "section_text": "The model used in our experiments is a 4-layer CNN with 3x3 filters, max-pooling and weight normalization in every layer. We trained the model with SGD for 25 epochs and evaluated performance on the exponential moving average θ̄ using a decay rate of α = 0.9999. The initial learning rate was set to 0.1 and linearly annealed to 0 in the last 10 epochs. We also used small image rotations and zooms as a data augmentation scheme."}, {"section_index": "11", "section_name": "A.2 INCEPTION, SECTION 3.1", "section_text": "For our straggler experiments, we trained the Inception (Szegedy et al., 2016) model on the ImageNet Challenge dataset (Russakovsky et al., 2015). 10 parameter servers were used, and each worker was equipped with a k40 GPU.

The underlying optimizer was RMSProp with momentum, with decay of 0.9 and momentum of 0.9. Mini-batch size B = 32 was used. Initial learning rates γ0 were set at 0.045N, which we found to provide good test precisions for Inception. Learning rates were also exponentially decreased with the number of iterations.

Test precisions were evaluated on the exponential moving average θ̄ using α = 0.9999."}, {"section_index": "12", "section_name": "A.3 INCEPTION, SECTION 4.2", "section_text": "For the experiments comparing Async-Opt and Sync-Opt on the Inception model in Section 4.2, each worker is equipped with a k40 GPU. For N + b = 53 workers, 17 parameter servers were used; for N + b = 106 workers, we used 27 parameter servers; and 37 parameter servers were used for N + b = 212.

In the asynchronous training mode, gradient clipping is also needed for stabilization, which requires each worker to collect the gradient across all layers of the deep model, compute the global norm of the gradient, and then clip all gradients accordingly. However, synchronous training turns out to be very stable, so gradient clipping is no longer needed, which means that we can pipeline the updates of parameters in different layers: the gradients of the top layers' parameters can be sent to the parameter servers while gradients for the lower layers are still being computed.

The underlying optimizer is RMSProp with momentum, with decay of 0.9 and momentum of 0.9. Mini-batch size B = 32 was used. Initial learning rates γ0 for Async-Opt were set to 0.045; for Sync-Opt, we found as a rule of thumb that a learning rate of 0.045N worked well for this model. Learning rates were then exponentially decayed with decay rate β = 0.94, as γ0 β^(t/(2T)) for Async-Opt, where T = |X|/B is the number of mini-batches in the dataset. For Sync-Opt, learning rates were exponentially decreased at a rate of γ0 β^(tN/(2T)), so that the learning rates after computing the same number of datapoints are comparable for Async-Opt and Sync-Opt.
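A small sketch of the two decay schedules just described, with γ0, β and T as defined above. The value of T here is illustrative, and the N-fold scaling of Sync-Opt's initial rate is omitted to highlight the alignment of the decay alone.

def async_lr(t, gamma0=0.045, beta=0.94, T=40000):
    # learning rate after t asynchronous updates (t mini-batches processed)
    return gamma0 * beta ** (t / (2.0 * T))

def sync_lr(t, N, gamma0=0.045, beta=0.94, T=40000):
    # one synchronous step consumes N mini-batches, hence the extra factor N
    return gamma0 * beta ** (t * N / (2.0 * T))

print(async_lr(10000))       # 10,000 async updates
print(sync_lr(200, N=50))    # 200 sync steps with N = 50 workers

Both calls print the same value: after processing the same number of datapoints, the two schedules give the same learning rate.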
Test precisions were evaluated on the exponential moving average θ̄ using α = 0.9999.

The PixelCNN (Oord et al., 2016) model was trained on the CIFAR-10 (Krizhevsky & Hinton, 2009) dataset. Configurations of N + b = 1, 8, 16 workers, each with a k80 GPU, and 10 parameter servers were used. For Sync-Opt, we always used b = 1 backup worker. The underlying optimizer is RMSProp with momentum, using decay of 0.95 and momentum of 0.9. Initial learning rates γ0 were set to 1e-4 and slowly decreased to 3e-6 after 200,000 iterations. Mini-batch size B = 4 was used."}]
H1eLE8qlx
[{"section_index": "0", "section_name": "OPTIONS DISCOVERY WITH BUDGETED REINFORCE MENT LEARNING", "section_text": "Aurelia Leon\nSorbonne Universites. UPMC Univ Paris 06, UMR 7606, LIP6, F-75005, Paris, France aurelia.leon@lip6.fr\nSorbonne Universites, UPMC Univ Paris 06, UMR 7606, LIP6, F-75005, Paris, France ludovic.denoyer@lip6.fr"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Research in cognitive science based on the study of human or animal behavior have long emphasize that the internal policy of such agents can be seen as a hierarchical process where solving a task i obtained by sequentially solving sub-tasks, each sub-task being treated by choosing a sequence o primitive actions (Botvinick et al., 2o09). In the computer science domain, these researches hav oeen echoed during the last decade with the apparition of the hierarchical reinforcement learnin paradigm (Dayan & Hinton, 1993; Dietterich, 1998; Parr & Russell, 1998) and its generalization t options (Sutton et al., 1999). The underlying idea is to define a policy at two different levels: a leve which goal is to choose between options, and a level which will select the actions to apply to th environment based on the current option. Informally, in a maze an option can correspond to an orde like go to the door, while the actions are primitive moves (up, down, left, right). In the literature, th catalog of available options is usually specified manually which is not satisfactory.\ncode available here: https://github.com/aureliale/BONN-model"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We consider the problem of learning hierarchical policies for Reinforcement Learning able to discover options, an option corresponding to a sub-policy over a set of primitive actions. Different models have been proposed during the last decade that usually rely on a predefined set of options. We specifically address the oroblem of automatically discovering options in decision processes. We describe a new learning model called Budgeted Option Neural Network (BONN) 1 able to discover options based on a budgeted learning objective. The BONN model is evaluated on different classical RL problems, demonstrating both quantitative and qualitative interesting results.\nReinforcement Learning (RL) is one of the key problem in machine learning , and the interest of. the research community has been recently renewed with the apparition of models mixing classical reinforcement learning techniques and deep neural networks. These new methods include for ex. ample the DQN algorithm (Mnih et al., 2015) and its variants (Van Hasselt et al., 2015), the use of recurrent architectures with policy gradient models (Wierstra et al., 2010), or even approaches. like Guided Policy Search (Levine & Koltun, 2013) or actor-critic algorithms (Konda & Tsitsiklis,. 1999).\nWe propose a new architecture called BONN (Budgeted Options Neural Network) able to simul- taneously discover options and to learn how and when to use them. It is based on the idea that a good policy is a trade-off between policy efficiency and cognitive effort: a system will learn relevant options if these options allow it to reduce the cognitive effort for solving the task, without decreasing the quality of the solution. 
This idea is implemented here through a budgeted learning problem that encourages the BONN model to learn to acquire as little information as possible.

The contributions of the paper are: (i) We propose the BONN model, able to discover options based on a budgeted reinforcement learning problem where information acquisition has a cost, each option being a continuous vector in a learned latent option space. (ii) We propose a discrete variant of BONN (D-BONN) where a discrete set of options is learned, each option corresponding to a particular embedding in the latent option space. (iii) The model is tested on different RL tasks and exhibits interesting properties and a strong ability to capture relevant options.

The paper is organized as follows: we present the background in RL and recurrent policy gradient methods in Section 2. The BONN model is presented in Section 3.1, while the budgeted learning problem is described in Section 3.2. The variant of BONN able to extract a discrete set of options is given in Section 3.3. Finally, experiments are presented in Section 4, while related works are discussed in Section 5.

Let us denote an MDP as a set of states S, a discrete set of possible actions A, a transition distribution P(s_{t+1} | s_t, a_t) and a reward function r(s, a) ∈ R⁺. We consider that each state s_t is associated with an observation x_t ∈ R^n, and that x_t is a partial view of s_t (i.e. a POMDP), n being the size of the observation space. Moreover, we denote P_1 the probability distribution over the possible initial states of the MDP.

Given a current trajectory x_1, a_1, x_2, a_2, ..., x_t, a policy is defined by a probability distribution such that π(x_1, a_1, x_2, a_2, ..., x_t, a) = P(a | x_1, a_1, x_2, a_2, ..., x_t), which is the probability of each possible action a at time t, knowing the history of the agent.

We can define the reinforcement learning problem as the optimization problem such that the optimal policy π* is computed by maximizing the expected discounted return J(π):

J(π) = E_{s_0 ∼ P_1; a_0, ..., a_{T−1} ∼ π} [R_0],

where s_0 is sampled following P_1, the actions are sampled based on π, and R_t = Σ_{t'=t}^{T−1} γ^{t'−t} r(s_{t'}, a_{t'}) denotes the discounted return at time t.³

³We describe finite-horizon problems where T is the size of the horizon and γ ≤ 1, but the approach can also be applied to infinite-horizon problems with discount factor γ < 1.

Different learning algorithms aim at maximizing J(π). In the case of policy gradient techniques, if we consider that, for the sake of simplicity, π also denotes the set of parameters of the policy, the gradient of the objective can be approximated with:

∇_π J(π) ≈ (1/M) Σ_{m=1}^{M} Σ_{t=0}^{T−1} ∇_π log π(a_t^m | x_1^m, a_1^m, ..., x_t^m) (R_t^m − b_t),

where M is the number of trajectories sampled for approximating the gradient using Monte Carlo sampling techniques, b_t is a variance-reduction term at time t estimated during learning, and we consider that future actions do not depend on past rewards (see Wierstra et al. (2010) for details on recurrent policy gradients).
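As an illustration of this estimator, the following minimal Python sketch assembles the scalar weights (R_t − b_t) from sampled rewards. The policy, its gradients, and the baseline estimation are left abstract, so this is a sketch of the bookkeeping only, not the paper's implementation.

def discounted_returns(rewards, gamma):
    # R_t = sum over t' >= t of gamma^(t' - t) * r_t'
    R, out = 0.0, []
    for r in reversed(rewards):
        R = r + gamma * R
        out.append(R)
    return out[::-1]

def reinforce_weights(reward_trajectories, gamma, baselines):
    # one scalar weight (R_t - b_t) per step of each trajectory; each weight
    # multiplies the gradient of log pi(a_t | x_1, a_1, ..., x_t)
    weights = []
    for rewards in reward_trajectories:
        returns = discounted_returns(rewards, gamma)
        weights.append([R - b for R, b in zip(returns, baselines)])
    return weights

# two toy trajectories of length 3, with a constant baseline
trajs = [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
print(reinforce_weights(trajs, gamma=0.9, baselines=[0.5, 0.5, 0.5]))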
We consider here a particular case of POMDP where the agent always observes x_t, but can also ask for the supplementary observation y_t that will help it decide which action to choose. This situation corresponds to many practical cases: for example, a robot that acquires information through its camera (x_t) but can sometimes decide to make a complete scan of the room (y_t); a user driving a car (using x_t) who decides to consult a map or GPS (y_t); a virtual agent taking decisions in a virtual world (based on x_t) but that can ask instructions from a human (y_t); etc. Note that x_t or y_t can be an empty observation.

We now describe the budgeted option neural network. This model is composed of three components. The underlying idea is that the first component will use the additional observations y_t to compute which option to use; the second component will use the basic observations x_t and the last chosen option to sample primitive actions; and the third component will decide when to switch between options. A new option will thus be computed each time y_t is acquired. To better understand this two-level architecture, we provide the inference pseudo-code in Algorithm 2 and the architecture of BONN in Figure 1. Let us now describe how each component works: (i) The first one (or option model) aims at choosing which option to apply, depending on the observations y_t collected over the states of the process. In our model, an option is represented by a vector denoted o_t ∈ R^O, O being the size of the option representation space. (ii) Choosing a new option o_t will then initialize the second component (or actor model). During the next time steps, the actor model will sequentially choose actions based on observations x_t and update its state until a new option is generated. (iii) The acquisition model, denoted σ_t ∈ {0; 1}, will decide if the model has to acquire y_t or not.

Figure 1: The BONN Architecture. Arrows correspond to dependencies; dashed arrows correspond to sampled values. Note that when σ_t = 1 the model observes y_t and computes a new option (in this example σ_3 = 1 and σ_6 = 1), and that when σ_t = 0 (everywhere else in this example) the model does not use y_t and keeps the same option.

Option model: The option model is denoted f, such that f(y_t) = o_t generates an option o_t as a latent vector in a space R^O, O being the dimension of this space. Note that the option model is deterministic: the option is computed based on the current observation using a neural network (see Appendix). Recurrent versions of the option model will be studied in future work.

Actor model: The state of the actor model is represented by a vector z_t ∈ R^Z, Z being the size of the latent space of the actor model. At each time step, the distribution over the possible set of actions is computed by the function d such that P(a_t | z_t) ∼ d(z_t, a_t). Note that d is typically based on a softmax function mapping action scores to action probabilities (see Appendix). If a new option o_t is computed by the option model, then the state z_t is re-initialized with z_t = p(o_t, x_t); p is a reset function which aims at choosing the 'initial' state of the actor for each new option. If a new option is not generated, the actor state is updated with a classical recurrent mechanism, i.e. z_{t+1} = g(z_t, a_t, x_{t+1}).

Acquisition model: The acquisition model aims at deciding if a new option has to be generated. It is a stochastic process such that σ_t = 1 (new option) or σ_t = 0 (keep the same option), computed over the state of the actor and the new observation: P(σ_{t+1} = 1) = h(z_t, a_t, x_{t+1}). In our case, this probability is based on a Bernoulli distribution over a sigmoid-based h function (see Appendix).
Appendix)."}, {"section_index": "3", "section_name": "3.2 BUDGETED LEARNING FOR OPTIONS DISCOVERY", "section_text": "The way options emerge in a hierarchical reinforcement learning system has been the topic of many. different works in both reinforcement learning and cognitive science. Most of these techniques as. sociate the problem of option discovery with the problem of sub-goals discovery where differen. strategies are used to discover the sub-goals - see Botvinick et al. (2009) for a review on links be tween cognitive research and hierarchical reinforcement learning. The BONN model is based on a. different approach, where we consider that the discovery of options will result in learning a gooc. trade-off between policy efficiency and the cognitive effort generated by such a policy. The underly. ing idea is that a system will learn relevant options if these options allow to reduce the cognitive effor. that is generated when solving the task, without decreasing the quality of the solution. Note that the. reduction of the cognitive effort has already been studied in cognitive science (Kool & Botvinick. 2014), and very recently in the RL context (Bacon & Precup, 2015b) but defined differently..\nHere, the cognitive effort is associated with the acquisition of the additional information yt, this. additional information (and its computation) being considered costly but crucial for discovering a. good policy: an agent only using the observations xt would be unable to solve the task, but using yt. at each time step would be 'too expensive\". The design choice of xt and yt is fundamental in the architecture, and is a distinct solution for bringing expert knowledge rather than explicitly defining. sub-tasks.\nencourage the agent to learn relevant options that will be used during many time steps, the model extracting relevant sub-policies. We propose to integrate the acquisition cost C (or cognitive effort) in the learning objective, relying on the budgeted learning paradigm already explored in different RL-based applications (Contardo et al., 2016; Dulac-Arnold et al., 2012). We define an augmented reward r* that includes the generated cost:\nwhere X controls the trade-off between the task efficiency and the cognitive charge. The resulting. discounted return denoted R* will be used as the objective to maximize instead of the classical\n1: procedure INFERENCE(S1) > S1 is the initial state. 2: initialize zo with the empty option= (0, 0, .., 0) E RO. 3: for t = 1 to T do 4: acquisition model: Draw Ot ~ h(Zt-1, t-1, xt) 5: if oz == 1 then 6: option level: Acquire yt and compute a new option ot = f(yt) 7: actor level: Initialize the actor zt = r(Ot, xt) 8: else 9: actor level: Update the actor state zt = h(zt-1, at-1, xt) 10: end if 11: actor level:Choose the action at w.r.t Zt. 12: Execute the chosen action 13: end for 14: end procedure\nr*(St,At,0t) =r(St,at)-XOt\ndiscounted return Ro, resulting in the following policy. gradient update rule\nwhere y is the learning rate. Note that this rule now updates both the probabilities of the chosen actions at, but also the probabilities of the o that can be seen as internal actions and that decide if a new option has to be computed or not, b* being the new resulting variance reduction term."}, {"section_index": "4", "section_name": "3. 3 DISCOVERING A DISCRETE SET OF OPTIONS", "section_text": "In the previous sections, we considered that the option ot generated by the option model is a vecto in a latent space RO. 
This is slightly different than the classical option definition which usually considers that an agent has a given \"'catalog\" of possible sub-routines i.e the set of options is a finit discrete set. We propose here a variant of the model where the model learns a finite discrete set o options.\nLet us denote K the (manually-fixed) number of options one wants to discover. Each option will be associated with a (learned) embedding denoted ok. The option model will store the different possible options and choose which one to use each time an option is required. In that case, the option model will be considered as a stochastic model able to sample one option index denoted it in {1, 2, ..., K} by using a multinomial distribution on top of a softmax computation. In that case, as the option model computes some stochastic choices, the policy gradient update rule will integrate these additional internal actions with:\nT-1 TTT -Y >`(V log P(at|zt) + V log P(ot|Zt-1,at-1,xt) + V log P(it|yt)) (R* - bt t=0\nBy considering that P(it[yt) is computed based on a softmax over a scoring function P(it[yt) l(oi+, yt) where l is a differentiable function, the learning will update both the l function and the options embedding Ok"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "The complete details of the architecture used for the experiments are provided in the Appendix. We have tested this architecture on 3 different types of environments and compared it to a Recurrent Policy Gradient algorithm using a GRU-based neural network (R-PG):\nCartPole: This is the classical cart-pole environment as implemented in the OpenAI Gym4 platform where observations are (position, angle, speed, angularspeed), and actions are right or le ft. The reward is +1 for every time step without failure. For BONN, the observation is only used by the option model i.e yt = (position, angle, speed, angularspeed), the actor model receiving an empty observation xt = at each time step.\nLunar Lander: This environment corresponds to the Lunar Lander environment proposed in. OpenAI Gym where observations describe the position, velocity, angle of the agent and if he is in contact with the ground or not, and actions are do nothing, fire left engine, fire main engine, fire right engine. The reward is +100 if landing, +10 for each leg on the ground, -100 if crashing and -0.3 each time the main engine is fired. As for the cart-pole, the observation is only acquired by the. option model, the actor model receiving an empty observation xt = 0 at each time step..\nMulti-room Maze: The Multi-room Maze corresponds to a maze composed of k k rooms (k = 2 or k = 3), with doors between them (see Figure 3). The agent always starts at the upper-left corner. while the goal position is chosen randomly at each episode: it can be in any room, in any position and its position changes at each episode. The reward function is -1 when moving and +20 when reaching the goal, while 4 different actions are possible: (up, down, left, right). We consider two variants: MAZE1 where xt = 0 is the empty observation and the agent must learn when to acquired the. 
more informative observation yt which contains the observed doors, the agent position and the goal\nT-1 (V log P(at|zt) + V log P(ot|Zt-1,at-1,xt)) (R* - b*) t=0\nV log P(at|zt) + V log P(ot|Zt-1,at-1,xt) + V log P(it|yt)) (R* - bt) (5)\n200 180 20 160 R BONN-=0.1 R -40 BONN-=0.1 140 BONN-E=0 BONN-E=0 BONN-=0.25 -60- BONN-=0.25 -R-PG-E=0 --R-PG-E=0 120 -R-PG-=0.1 -80- -R-PG-E=0.1 -R-PG-=0.25 -R-PG-E=0.25 100 -100 0.2 0.4 0.6 0.8 0.2 0.4 0.6 0.8 %obs %obs (a) (b)\nFigure 2: Cost/reward curves for cart-pole (a) and M AZ E2 (b) with different levels of stochasticity The X-axis corresponds to the ratio of options used in each episode (100% means that the agen observes yt and computes a new option at each time step). The Y-axis corresponds to the reward R obtained on the task. The dashed lines are the R-PG performance. Note that the R-PG performanc is obtained without options (i.e using all the information available at each time step)\n(a) (b) (c)\nFigure 3: (a) (b) Examples of trajectories generated by the agent. Each point corresponds to a position where the agent decides to acquire yt and generates a new option. (c) Trajectories generated. with the D-BONN model where K = 9.\nposition if the goal is in the same room than the agent (i.e the agent only observes the current room). In the M AZ E2 world, xt is the agent position in the room, while yt corresponds to the description. of the room and the goal position if the goal is in the same room (i.e contrary to MAZE1, the agent always observes his position). The R-PG baseline has access to all these information (doors. position, goal) at each time step. Note that this environment is much more difficult than other 4-. rooms problems (introduced by Sutton et al. (1999)) in others RL works, where there is only one or two goal position(s), and that, in a more realistic way, the agent only observes the room he is in..\nFor all the environments, we consider different levels of stochasticity e such that the movement o the agent can fail with probability e, in which case a random transition is applied..\nthe action chosen by the agent is applied with probability 1 - e while a random action is chosen with probability e. The higher epsilon is, the more the environment is stochastic.\nWe illustrate the quality of BONN with cost/reward curves (see Figure 2) where the X-axis corre sponds to the number of times an option is computed (normalized w.r.t the size of the episode) while the Y-axis corresponds to the overall reward R = r(st, at) obtained on each task, for different levels of stochasticity e. Note that cost/reward curves are generated by computing the Pareto fron over all the learned models at different cost levels X. These curves have been obtained by first learn\n20 0 .. -20 & 8 R -40 -60 -80 MAZE2-=0.25 ..MAZE1-=0.25 -100 0.1 0.2 0.3 0.4 0.5 %obs (a) (b)\nFigure 4: (a) Cost/reward curve for MAZE1 and M AZE, with a stochasticity level of e = 0.25 (b) The options latent vectors visualized through the t-SNE algorithm. 
Similar colors mean the goal is in similar areas in the room, except for the red points that corresponds to the options used to reach one of the four possible doors, or when the goal is just near a door..\ning our model with a zero cost X = 0 and then by progressively increasing this cost, forcing th model to acquire yt less frequently and to discover options5\nFirst, one can see that even at low cost values (with only a few options computed), the BONN model is able to keep the same performance than the R-PG model, even if R-PG uses all the information contained in both xt and yt at each time step. Some specific cost/reward values are given in Table. 1 for different environments and different values of X, confirming that BONN is able to keep a high performance level while discovering relevant options. Note that if the cost of computing a new. option is too expensive, the BONN model is not able to find a good policy since it is not allowed to switch between options.\nWe can also see that the obtained reward decreases when the environments are more stochastic. which seems intuitive since stochasticity makes the tasks harder. Figure 4a compares the results obtained on the M AZ E1 environment and the M AZ E2 environment when X = 0.25. We note that the drop of performance in M AZ E2 happens at a lower cost than the one in M AZ E1. Indeed, in M AZ E2, the agent has access to its position at each time step and is more able to \"compensate'' the stochasticiy of the environment than in the M AZ E1 case, where the position is only available through yt.\nFigures 3b and 3a illustrates trajectories generated by the agent in the M AZ E2 environment, and. the positions where the options are generated. We can see that the agent learns to observe yt only. once in each room and that the agent uses the resulting option until it reaches another room (thus. the agent deducts from yt if he must move to another room, or reach the goal if it is in the current. room). Note that the agent cannot find the shortest path to the goal's room because, having no infor- mation about the position of the goal in another room, it learns to explore the maze in a \"particular'. order until reaching the goal's room. We have visualized the options latent vectors using the t-SNE. algorithm (Figure 4b). Similar colors (for example all green points) mean that the options computed. correspond to observations where the goals are in similar areas. We can for example see that all green options are close, and it shows that the latent option space has captured a particular structure. Analyzing this latent structure will be the topic of a future research..\nThe D-BONN model has been experimented on the M AZ E1 2 2, and an example of generated trajectories is given in Figure 3c. Each color corresponds to one of the learned discrete options. One can see that the model is still able to learn a good policy, but the constraint over the fixed number of discrete options clearly decreases the quality of the obtained policy. It seems thus more interesting to use continuous options instead of discrete ones, the continuous options being regrouped in smooth clusters as illustrated in Figure 4b.\n5Learning separate models for many values is time-consuming and does not significantly improve the obtained results\n1.0\nTable 1: Cost/reward values for the different environments, at different cost levels X and different stochasticiy levels e\nHierarchical Reinforcement Learning (Dayan & Hinton, 1993; Dietterich, 1998; Parr & Russell. 
Hierarchical Reinforcement Learning (Dayan & Hinton, 1993; Dietterich, 1998; Parr & Russell, 1998) has seen a surge of many different works during the last decade, because it is considered as one solution for solving long-range planning tasks, and because it allows knowledge transfer between tasks. Many different models have been proposed where the subtasks are known a priori, like Dietterich (1998), which proposes the MAXQ method. The concept of option was introduced by Sutton et al. (1999). In this architecture, each option consists of an initiation set, its own policy (over primitive actions or other options), and a termination function which defines the probability of ending the option given a certain state. Other works define hierarchical policies based on different levels of observations: the Abstract Hidden Markov Model (Bui et al., 2002) is based on discrete options defined on each space region, while in Heess et al. (2016) the architecture uses a low-level controller that only has access to the proprioceptive information, and a high-level controller that has access to all observations.

The main difference between these works and the BONN architecture is that, in the case of BONN, options are latent vectors and the model is able to learn a manifold of possible options, even if a discrete version has also been proposed, with less convincing performance.

Outside reinforcement learning, our work is also related to Hierarchical Multiscale Recurrent Neural Networks (Chung et al., 2016), which discover hierarchical structure in sequences.

The concept of options is at the core of many recent articles. For example, in Kulkarni et al. (2016) the Deep Q-Learning framework is extended to integrate hierarchical value functions using intrinsic motivation to learn the option policies. But in these different models, the options have to be manually chosen a priori and are not discovered during the learning process. Still in the option framework, Daniel et al. (2016) learn options (both policies and termination probabilities) without supervision, using the Expectation-Maximization algorithm. More recently, Bacon & Precup (2015a) do the same with the option-critic architecture, close to an actor-critic algorithm where options are discrete. They used an experiment similar to ours with four rooms but only one (fixed) goal; learning both option policies and termination functions, the model converges to one option in the first room followed by a second one in rooms 2 and 3. The closest work to ours seems to be Bacon & Precup (2015b), but that model is also based on a discrete set of options in the POMDP framework. Note that this article also introduces the cognitive effort concept. Some models focus on the problem of learning macro-actions (Hauskrecht et al., 1998; Mnih et al., 2016). In that case a given state is mapped to a sequence of actions (i.e. macro-actions), similar to having x_t empty in the BONN model. But macro-actions are more restricted than options, since the sequence of actions is fixed.

We have proposed a new model for learning options in POMDPs where the agent can choose to acquire a more informative observation at each time step.
The model is learned in a budgeted learning setting where the acquisition of additional information, and thus the use of a new option, has a cost. The learned policy is a trade-off between the efficiency and the cognitive effort of the agent. In our setting, the options are handled through learned latent representations, and we have also proposed a discrete version of BONN where the number of options is kept constant. Experimental results show that the model is able to extract relevant options in complex environments. This work opens different research directions. One is to study whether BONN can be applied to multi-task reinforcement learning problems (the environment MAZE, since the goal position is randomly chosen at each episode, can be seen as a simple multi-task RL problem). Another question would be to study problems where many different observations can be acquired by the agent at different costs, e.g. many different sensors on a robot."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work has been supported within the Labex SMART, supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-LABX-65."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Pierre-Luc Bacon and Doina Precup. Learning with options: Just deliberate and relax. 2015b.

Matthew Botvinick, Yael Niv, and Andrew C. Barto. Hierarchically organized behavior and its neural foundations: A reinforcement-learning perspective. Cognition, 113(3), 2009.

Hung Hai Bui, Svetha Venkatesh, and Geoff West. Policy recognition in the abstract hidden markov model. Journal of Artificial Intelligence Research, 17:451-499, 2002.

Peter Dayan and Geoffrey E Hinton. Feudal reinforcement learning. In Advances in Neural Information Processing Systems, pp. 271-278. Morgan Kaufmann Publishers, 1993.

Thomas G Dietterich. The maxq method for hierarchical reinforcement learning. In ICML, pp. 118-126. Citeseer, 1998.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Vijay R. Konda and John N. Tsitsiklis. Actor-critic algorithms. In Advances in Neural Information Processing Systems 12 [NIPS Conference, Denver, Colorado, USA, November 29 - December 4, 1999], pp. 1008-1014, 1999.

Tejas D Kulkarni, Karthik R Narasimhan, Ardavan Saeedi, and Joshua B Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. arXiv preprint arXiv:1604.06057, 2016.

Sergey Levine and Vladlen Koltun. Guided policy search. In ICML (3), pp. 1-9, 2013.

Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, 1999.

Daan Wierstra, Alexander Förster, Jan Peters, and Jürgen Schmidhuber. Recurrent policy gradients. Logic Journal of the IGPL, 18(5):620-634, 2010.
"}, {"section_index": "8", "section_name": "APPENDIX", "section_text": "For all experiments, we used the ADAM optimizer (Kingma & Ba, 2014) with gradient clipping.

Option model: The option o_t ∈ R^O is generated by o_t = f(y_t) = relu(W_o y_t), where W_o is a matrix of parameters.

Actor model: The state of the actor is z_t ∈ R^Z. When a new option is computed, it is initialized by the reset function:

z_t = p(o_t, x_t) = tanh(W_zo concat(o_t, x_t)),

and otherwise it is updated through a GRU-inspired mechanism:

r = sigmoid(W_r x_{t+1} + Y_r a_t + U_r z_t)
u = sigmoid(W_u x_{t+1} + Y_u a_t + U_u z_t)
c = tanh(W_c x_{t+1} + Y_c a_t + U_c (r ⊙ z_t))
z_{t+1} = u ⊙ z_t + (1 − u) ⊙ c,

where ⊙ is an element-wise multiplication and the W., Y. and U. are matrices of parameters. In both cases, the distribution over the set of actions is computed by a softmax function on d(z_t) = W_d z_t, where W_d is a matrix of parameters.
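The following is a numpy transcription of the equations just given; shapes and initialization scales here are illustrative only (the paper's actual sizes are in Table 2).

import numpy as np

rng = np.random.default_rng(0)
O, X, Z, A = 5, 4, 5, 2
W_o  = rng.normal(scale=0.1, size=(O, O))         # option model f
W_zo = rng.normal(scale=0.1, size=(Z, O + X))     # reset function p
W = rng.normal(scale=0.1, size=(3, Z, X))         # GRU input weights (r, u, c)
Y = rng.normal(scale=0.1, size=(3, Z, A))         # GRU action weights
U = rng.normal(scale=0.1, size=(3, Z, Z))         # GRU recurrent weights
W_d  = rng.normal(scale=0.1, size=(A, Z))         # action scores d

sig = lambda v: 1.0 / (1.0 + np.exp(-v))

def f(y):
    return np.maximum(0.0, W_o @ y)               # o_t = relu(W_o y_t)

def p(o, x):
    return np.tanh(W_zo @ np.concatenate([o, x])) # actor reset

def g(z, a, x):                                   # GRU-style actor update
    r = sig(W[0] @ x + Y[0] @ a + U[0] @ z)
    u = sig(W[1] @ x + Y[1] @ a + U[1] @ z)
    c = np.tanh(W[2] @ x + Y[2] @ a + U[2] @ (r * z))
    return u * z + (1.0 - u) * c

def d(z):                                         # softmax over action scores
    s = np.exp(W_d @ z - (W_d @ z).max())
    return s / s.sum()

o = f(rng.normal(size=O)); z = p(o, rng.normal(size=X))
a = np.zeros(A); a[0] = 1.0                       # one-hot previous action
z = g(z, a, rng.normal(size=X))
print(d(z))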
MAZE environment details: The position of the agent is given by a vector of length 5 x 5, with zeros everywhere and a one corresponding to its position in the room. The doors are encoded with a vector of length 4 (0 if no door, 1 if a door), and the goal (if present in the same room as the agent) is also encoded with a vector of length 5 x 5 (all zeros if the goal is in another room).

Experiments: The only differences between experiments are the dimensions O and Z, and some hyper-parameters. The details of the tested values are given in Table 2.

(footnote) An open-source version of the model is available at https://github.com/aureliale/BONN-model.

                            cartpole   lunar lander   maze1   maze2
dim of option space O       5          10             20      10
dim of x_t representation   0          0              0       10
size of actor state Z       5          10             20      10

Table 2: Values of parameters for the BONN architecture. Note that the values for MAZE_1 and MAZE_2 are both for the 2 x 2 maze and the 3 x 3 maze."}]
SJkXfE5xx
[{"section_index": "0", "section_name": "REVISITING CLASSIFIER TWO-SAMPLE TESTS", "section_text": "David Lopez-Paz', Maxime Oquab1,\n1Facebook AI Research, 2wILLOW project team, Inria/ ENS / CNRS dlp@fb.com, maxime.oquab@inria.fr\nThe goal of two-sample tests is to assess whether two samples, Sp ~ Pn and SQ ~ Qm, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. Ir particular, construct a dataset by pairing the n examples in Sp with a positive label and by pairing the m examples in SQ with a negative label. If the null hypothesis 'P = Q\" is true, then the classification accuracy of a binary classifier on a held-ou subset of this dataset should remain near chance-level. As we will show, sucl Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data or the fly, return test statistics in interpretable units, have a simple null distribution and their predictive uncertainty allow to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST First, we analyze their main theoretical properties. Second, we compare their per formance against a variety of state-of-the-art alternatives. Third, we propose theii use to evaluate the sample quality of generative models with intractable likelihoods such as Generative Adversarial Networks (GANs). Fourth. we showcase the nove"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "One of the most fundamental problems in statistics is to assess whether two samples, Sp ~ Pn and. So ~ Qm, are drawn from the same probability distribution. To this end, two-sample tests (Lehmann. & Romano2006) summarize the differences between the two samples into a real-valued test statistic and then use the value of such statistic to accept'[or reject the null hypothesis \"P = Q\". The. development of powerful two-sample tests is instrumental in a myriad of applications, including the. evaluation and comparison of generative models. Over the last century, statisticians have nurtured a wide variety of two-sample tests. However, most of these tests are only applicable to one-dimensional. examples, require the prescription of a fixed representation of the data, return test statistics in units that are difficult to interpret, or do not explain how the two samples under comparison differ..\nIntriguingly, there exists a relatively unexplored strategy to build two-sample tests that overcome the aforementioned issues: training a binary classifier to distinguish between the examples in Sp and the examples in So. Intuitively, if P = Q, the test accuracy of such binary classifier should remain near chance-level. Otherwise, if P / Q and the binary classifier is able to unveil some of the distributional differences between Sp and SQ, its test accuracy should depart from chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have simple asymptotic distributions, and their learned features and predictive uncertainty provide interpretation on how P and Q differ. In such a way, this work brings together the communities of statistical testing and representation learning\nThe goal of this paper is to establish the theoretical properties and evaluate the practical uses of C2ST To this end, our contributions are:\n'For clarity, we abuse statistical langua. 
• We review the basics of two-sample tests in Section 2, as well as their common applications to measure statistical dependence and evaluate generative models.
• We analyze the attractive properties of C2ST (Section 3), including an analysis of their exact asymptotic distributions, testing power, and interpretability.
• We evaluate C2ST on a wide variety of synthetic and real data (Section 4), and compare their performance against multiple state-of-the-art alternatives. Furthermore, we provide examples to illustrate how C2ST can interpret the differences between pairs of samples.
• In Section 5, we propose the use of classifier two-sample tests to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (Goodfellow et al., 2014), also known as GANs.
• As a novel application of the synergy between C2ST and GANs, Section 6 proposes the use of these methods for causal discovery.

2 TWO-SAMPLE TESTING

The goal of two-sample tests is to assess whether two samples, denoted by S_P ~ P^n and S_Q ~ Q^m, are drawn from the same distribution (Lehmann & Romano, 2006). More specifically, two-sample tests either accept or reject the null hypothesis, often denoted by H0, which stands for "P = Q". When rejecting H0, we say that the two-sample test favors the alternative hypothesis, often denoted by H1, which stands for "P ≠ Q". To accept or reject H0, two-sample tests summarize the differences between the two samples (sets of identically and independently distributed examples),

S_P := {x_1, ..., x_n} ~ P^n(X) and S_Q := {y_1, ..., y_m} ~ Q^m(Y),   (1)

into a statistic t ∈ R. Without loss of generality, we assume that the two-sample test returns a small statistic when the null hypothesis "P = Q" is true, and a large statistic otherwise. Then, for a sufficiently small statistic, the two-sample test will accept H0. Conversely, for a sufficiently large statistic, the two-sample test will reject H0 in favour of H1.

More formally, the statistician performs a two-sample test in four steps. First, decide a significance level α ∈ (0, 1), which is an input to the two-sample test. Second, compute the two-sample test statistic t. Third, compute the p-value p = P(T ≥ t | H0), the probability of the two-sample test returning a statistic as large as t when H0 is true. Fourth, reject H0 if p < α, and accept it otherwise.

Inevitably, two-sample tests can fail in two different ways. First, to make a type-I error is to reject the null hypothesis when it is true (a "false positive"). By the definition of p-value, the probability of making a type-I error is upper-bounded by the significance level α. Second, to make a type-II error is to accept the null hypothesis when it is false (a "false negative"). We denote the probability of making a type-II error by β, and refer to the quantity π = 1 - β as the power of a test. Usually, the statistician uses domain-specific knowledge to evaluate the consequences of a type-I error, and thus prescribes an appropriate significance level α. Within the prescribed significance level α, the statistician prefers the two-sample test with maximum power π.

Among others, two-sample tests serve two other uses. First, two-sample tests can measure statistical dependence (Gretton et al., 2012a). In particular, testing the independence null hypothesis "the random variables X and Y are independent" is testing the two-sample null hypothesis "P(X, Y) = P(X)P(Y)". In practice, the two-sample test would compare the sample S = {(x_i, y_i)}_{i=1}^n ~ P(X, Y)^n to a sample S_σ = {(x_i, y_{σ(i)})}_{i=1}^n ~ (P(X)P(Y))^n, where σ is a random permutation of the set of indices {1, ..., n}. This approach is consistent when considering all possible random permutations. However, since independence testing is a subset of two-sample testing, specialized independence tests may exhibit higher power for this task (Gretton et al., 2005).

Second, two-sample tests can evaluate the sample quality of generative models with intractable likelihoods, but tractable sampling procedures.
Intuitively, a generative model produces good samples S̃ = {x̃_i}_{i=1}^n if these are indistinguishable from the real data S = {x_i}_{i=1}^n that they model. Thus, the two-sample test statistic between S̃ and S measures the fidelity of the samples S̃ produced by the generative model. The use of two-sample tests to evaluate the sample quality of generative models includes the pioneering work of Box (1980), the use of the Maximum Mean Discrepancy (MMD) criterion (Bengio et al., 2013; Dziugaite et al., 2015; Lloyd & Ghahramani, 2015; Bounliphone et al., 2015; Sutherland et al., 2016), and the connections to density-ratio estimation (Kanamori et al., 2010; Wornowizki & Fried, 2016; Menon & Ong, 2016; Mohamed & Lakshminarayanan, 2016).

Over the last century, statisticians have nurtured a wide variety of two-sample tests. Classical two-sample tests include the t-test (Student, 1908), which tests for the difference in means of two samples; the Wilcoxon-Mann-Whitney test (Wilcoxon, 1945; Mann & Whitney, 1947), which tests for the difference in rank means of two samples; and the Kolmogorov-Smirnov tests (Kolmogorov, 1933; Smirnov, 1939) and their variants (Kuiper, 1962), which test for the difference in the empirical cumulative distributions of two samples. However, these classical tests are only efficient when applied to one-dimensional data. Recently, the use of kernel methods (Smola & Schölkopf, 1998) enabled the development of two-sample tests applicable to multidimensional data. Examples of these tests include the MMD test (Gretton et al., 2012a), which looks for differences in the empirical kernel mean embeddings of two samples, and the Mean Embedding test or ME (Chwialkowski et al., 2015; Jitkrittum et al., 2016), which looks for differences in the empirical kernel mean embeddings of two samples at optimized locations. However, kernel two-sample tests require the prescription of a manually-engineered representation of the data under study, and return values in units that are difficult to interpret. Finally, only the ME test provides a mechanism to interpret how P and Q differ.

Next, we discuss a simple but relatively unexplored strategy to build two-sample tests that overcomes these issues: the use of binary classifiers.

3 CLASSIFIER TWO-SAMPLE TESTS

Without loss of generality, we assume access to the two samples S_P and S_Q defined in (1), where x_i, y_j ∈ X for all i = 1, ..., n and j = 1, ..., m, and m = n. To test whether the null hypothesis H0 : P = Q is true, we proceed in five steps. First, construct the dataset

D = {(x_i, 0)}_{i=1}^n ∪ {(y_i, 1)}_{i=1}^n =: {(z_i, l_i)}_{i=1}^{2n}.

Second, shuffle D at random, and split it into the disjoint training and testing subsets D_tr and D_te, where D = D_tr ∪ D_te and n_te := |D_te|. Third, train a binary classifier f : X → [0, 1] on D_tr; in the following, we assume that f(z_i) is an estimate of the conditional probability distribution p(l_i = 1 | z_i). Fourth, return the classification accuracy on D_te,

t̂ = (1/n_te) Σ_{(z_i, l_i) ∈ D_te} I[ I(f(z_i) > 1/2) = l_i ],   (2)

as our C2ST statistic, where I is the indicator function. The intuition here is that if P = Q, the test accuracy (2) should remain near chance-level. In opposition, if P ≠ Q and the binary classifier unveils distributional differences between the two samples, the test classification accuracy (2) should be greater than chance-level. Fifth, to accept or reject the null hypothesis, compute a p-value using the null distribution of the C2ST, as discussed next.

3.1 NULL AND ALTERNATIVE DISTRIBUTIONS

First, under the null hypothesis H0 : P = Q, the samples S_P ~ P^n and S_Q ~ Q^m follow the same distribution, leading to an impossible binary classification problem. In that case, n_te t̂ follows a Binomial(n_te, p = 1/2) distribution. Therefore, for large n_te, we can use the central limit theorem to approximate the null distribution of (2) by N(1/2, 1/(4 n_te)).
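To make the five-step procedure and its Gaussian null concrete, here is a minimal Python sketch. The helper name `c2st_p_value`, the scikit-learn classifier default, and the 50/50 train/test split are our assumptions, not the paper's reference code (which is linked in Section 4).

```python
# A minimal sketch of the five-step C2ST described above.
import numpy as np
from scipy.stats import norm
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def c2st_p_value(S_P, S_Q, classifier=None, test_size=0.5, seed=0):
    """Return (test accuracy, p-value) of a classifier two-sample test."""
    # Step 1: pair S_P with label 0 and S_Q with label 1.
    Z = np.vstack([S_P, S_Q])
    l = np.concatenate([np.zeros(len(S_P)), np.ones(len(S_Q))])
    # Step 2: shuffle and split into disjoint train/test subsets.
    Z_tr, Z_te, l_tr, l_te = train_test_split(
        Z, l, test_size=test_size, random_state=seed, shuffle=True)
    # Step 3: train a binary classifier on the training split.
    if classifier is None:
        classifier = MLPClassifier(hidden_layer_sizes=(20,), max_iter=200)
    classifier.fit(Z_tr, l_tr)
    # Step 4: the statistic is the held-out classification accuracy (Eq. 2).
    t_hat = classifier.score(Z_te, l_te)
    # Step 5: under H0, t_hat ~ N(1/2, 1/(4 n_te)); one-sided p-value.
    n_te = len(Z_te)
    p_value = 1.0 - norm.cdf(t_hat, loc=0.5, scale=np.sqrt(0.25 / n_te))
    return t_hat, p_value
```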
Second, under the alternative hypothesis H1 : P ≠ Q, the statistic n_te t̂ follows a Poisson Binomial distribution, since the constituent Bernoulli random variables may not be identically distributed. In the following, we will approximate such a Poisson Binomial distribution by the Binomial(n_te, p̄) distribution, where p̄ = (1/n_te) Σ_{i=1}^{n_te} p_i (Ehm, 1991). Therefore, we can use the central limit theorem to approximate the alternative distribution of (2) by N(p̄, p̄(1 - p̄)/n_te).

3.2 TESTING POWER

To analyze the power (the probability of correctly rejecting a false null hypothesis) of a C2ST, we assume that our classifier has an expected (unknown) test accuracy of t̄ = 1/2 under the null hypothesis H0 : P = Q, and of t̄ = 1/2 + ε under the alternative hypothesis H1 : P ≠ Q, where ε ∈ (0, 1/2) is the effect size distinguishing P from Q. Let Φ be the Normal cdf, n_te the number of samples available for testing, and α the significance level. Then:

Theorem 1. The approximate power of the C2ST is

π(α, n_te, ε) = 1 - Φ( (Φ^{-1}(1 - α)/2 - ε √n_te) / √(1/4 - ε²) ).

Remark 1. We leave for future work the study of quadratic-time C2ST with optimal power in high-dimensional problems (Ramdas et al., 2015). These are problems where the ratio n/d → c ∈ [0, 1] and the power bounds depend on d. One possible line of research in this direction is to investigate statistics built on classifiers f(z, z′) that predict whether the examples (z, z′) come from the same sample.

Theorem 1 also illustrates that maximizing the power of a C2ST is a trade-off between two competing objectives: choosing a classifier that maximizes the test accuracy (the effect size ε), and maximizing the size of the test set n_te. This relates to the well-known bias-variance trade-off in machine learning. Indeed, simple classifiers will miss more nonlinear patterns in the data (leading to smaller test accuracy), but call for less training data (leading to larger test set sizes). On the other hand, flexible classifiers will miss fewer nonlinear patterns in the data (leading to higher test accuracy), but call for more training data (leading to smaller test set sizes). Formally, the relationship between the test accuracy, the sample size, and the flexibility of a classifier depends on capacity measures such as the VC-Dimension (Vapnik, 1998). Note that there is no restriction on performing model selection (such as cross-validation) on D_tr.

Remark 2. We have focused on test statistics (2) built on top of the zero-one loss ℓ_{0-1}(y, y′) = I[y ≠ y′] ∈ {0, 1}. These statistics give rise to Bernoulli random variables, which can exhibit high variance. However, our arguments readily extend to real-valued binary classification losses. Then, the variance of such real-valued losses would describe the norm of the decision function of the classifier two-sample test, appear in the power expression from Theorem 1, and serve as a hyper-parameter to maximize power, as in (Gretton et al., 2012b, Section 3).
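As a numeric sketch of the power expression in Theorem 1 (as reconstructed above from the proof in the appendix), the function below is our own helper, not part of the paper's code.

```python
# Approximate power of a C2ST with effect size eps in (0, 1/2).
from scipy.stats import norm

def c2st_power(alpha, n_te, eps):
    num = norm.ppf(1.0 - alpha) / 2.0 - eps * n_te ** 0.5
    den = (0.25 - eps ** 2) ** 0.5
    return 1.0 - norm.cdf(num / den)

# Example: at alpha = 0.05 and eps = 0.05, n_te = 1000 gives power ~ 0.94.
print(c2st_power(0.05, 1000, 0.05))
```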
3.3 INTERPRETABILITY

There are three ways to interpret the result of a C2ST. First, recall that the classifier predictions f(z_i) are estimates of the conditional probabilities p(l_i = 1 | z_i) for each of the examples z_i in the test set. Inspecting these probabilities together with the true labels l_i determines which examples were correctly or wrongly labeled by the classifier, with the least or the most confidence. Therefore, the values f(z_i) explain where the two distributions differ. Second, C2ST inherit the interpretability of their classifiers to explain which features are most important to distinguish distributions, in the same way as the ME test (Jitkrittum et al., 2016). Examples of interpretable features include the filters of the first layer of a neural network, the feature importances of random forests, the weights of a generalized linear model, and so on. Third, C2ST return statistics in interpretable units: these relate to the percentage of samples correctly distinguishable between the two distributions. These interpretable numbers can complement the use of p-values.²

3.4 PRIOR USES

The reduction of two-sample testing to binary classification was introduced in Friedman (2003), studied within the context of information theory in (Perez-Cruz, 2009; Reid & Williamson, 2011), discussed in (Fukumizu et al., 2009; Gretton et al., 2012a), and analyzed (for the case of linear discriminant analysis) in (Ramdas et al., 2016). The use of binary classifiers for two-sample testing is increasingly common in neuroscience: see (Pereira et al., 2009; Olivetti et al., 2012) and the references therein. Implicitly, binary classifiers also perform two-sample tests in algorithms that discriminate data from noise, such as unsupervised-as-supervised learning (Friedman et al., 2001), noise contrastive estimation (Gutmann & Hyvärinen, 2012), negative sampling (Mikolov et al., 2013), and GANs (Goodfellow et al., 2014).

²For a related discussion on this issue, we recommend the insightful comment by Arthur Gretton and Wittawat Jitkrittum, available at https://openreview.net/forum?id=SJkXfE5xx

[Figure 1 (plots omitted): panels (a) two Gaussians; (b, c) Student-t versus Gaussian; (d-f) sinusoid. The y-axes show type-I or type-II error; the x-axes show sample size, degrees of freedom, noise variance, and frequency; the compared tests are C2ST-KNN, C2ST-NN, K-S, Kuiper, MMD, ME, and Wilcoxon.] Figure 1: Results (type-I and type-II errors) of our synthetic two-sample test experiments.

4 EXPERIMENTS ON TWO-SAMPLE TESTING

Throughout this section, we evaluate two C2STs: one based on neural networks (C2ST-NN), and one based on k-nearest neighbours (C2ST-KNN). C2ST-NN has one hidden layer of 20 ReLU neurons, and trains for 100 epochs using the Adam optimizer (Kingma & Ba, 2015); we did not observe a significant improvement in performance when increasing the flexibility of these classifiers (e.g., increasing the number of hidden neurons or decreasing the number of nearest neighbors). When analyzing one-dimensional data, we compare the performance of C2ST-NN and C2ST-KNN against the Wilcoxon-Mann-Whitney test (Wilcoxon, 1945; Mann & Whitney, 1947), the Kolmogorov-Smirnov test (Kolmogorov, 1933; Smirnov, 1939), and the Kuiper test (Kuiper, 1962). In all cases, we also compare the performance of C2ST-NN and C2ST-KNN against the linear-time estimate of the Maximum Mean Discrepancy (MMD) criterion (Gretton et al., 2012a), the ME test (Jitkrittum et al., 2016), and the SCF test (Jitkrittum et al., 2016). We use a significance level α = 0.05 across all experiments and tests, unless stated otherwise. We use Gaussian approximations to compute the null distributions of C2ST-NN and C2ST-KNN. We use the implementations of the MMD, ME, and SCF tests gracefully provided by Jitkrittum et al. (2016), the scikit-learn implementations of the Kolmogorov-Smirnov and Wilcoxon tests, and the implementation from https://github.com/aarchiba/kuiper of the Kuiper test. The implementation of our experiments is available at https://github.com/lopezpaz/classifier_tests.
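One plausible scikit-learn instantiation of these two classifiers is sketched below; the paper's own code lives at the repository linked above, and the choice of k for C2ST-KNN is our assumption, since the text does not specify it.

```python
# Hedged sketch of the two classifiers used in the experiments above.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def make_c2st_nn():
    # One hidden layer of 20 ReLU units, trained with Adam for 100 epochs.
    return MLPClassifier(hidden_layer_sizes=(20,), activation="relu",
                         solver="adam", max_iter=100)

def make_c2st_knn(n_samples):
    # Our assumption: a common heuristic sets k ~ sqrt(n).
    return KNeighborsClassifier(n_neighbors=max(1, int(n_samples ** 0.5)))
```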
4.1 EXPERIMENTS ON TWO-SAMPLE TESTING

Control of type-I errors. We start by evaluating the correctness of all the considered two-sample tests by examining whether the prescribed significance level α = 0.05 upper-bounds their type-I error. To do so, we draw x_1, ..., x_n, y_1, ..., y_n ~ N(0, 1), and run each two-sample test on the two samples {x_i}_{i=1}^n and {y_i}_{i=1}^n. In this setup, a type-I error would be to reject the true null hypothesis. Figure 1(a) shows that the type-I error of all tests is upper-bounded by the prescribed significance level, for all n ∈ {25, 50, 100, 500, 1000, 5000, 10000} and 100 random repetitions. Thus, all tests control their type-I error as expected, up to random variations due to finite experiments.
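A small simulation of this type-I check is sketched below; it reuses the hypothetical `c2st_p_value` helper from Section 3, and the constants are illustrative rather than the paper's exact settings.

```python
# Both samples come from N(0, 1), so H0 is true and the empirical
# rejection rate should stay below the significance level alpha.
import numpy as np

alpha, n, trials, rejections = 0.05, 500, 100, 0
rng = np.random.RandomState(0)
for trial in range(trials):
    S_P = rng.randn(n, 1)  # sample from P = N(0, 1)
    S_Q = rng.randn(n, 1)  # sample from Q = N(0, 1)
    _, p = c2st_p_value(S_P, S_Q, seed=trial)
    rejections += (p < alpha)
print("empirical type-I error:", rejections / trials)  # expect <= ~0.05
```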
Gaussian versus Student. We consider distinguishing between samples drawn from a Normal distribution and samples drawn from a Student's t distribution with ν degrees of freedom. We shift and scale both samples to exhibit zero-mean and unit-variance. Since the Student's t distribution approaches the Normal distribution as ν increases, a two-sample test must focus on the peaks of the distributions to distinguish one from another. Figure 1(b, c) shows the percentage of type-II errors made by all tests as we vary separately n and ν, over 100 trials (random samples). We set n = 2000 when ν varies, and let ν = 3 when n varies. The Wilcoxon-Mann-Whitney test exhibits the worst performance in this experiment, as expected, since the rank means of the Gaussian and Student's t distributions coincide. The best performing method is the one-dimensional Kuiper test, followed closely by the multi-dimensional tests C2ST-NN and ME.

Independence testing on sinusoids. For completeness, we showcase the use of two-sample tests to measure statistical dependence. This can be done, as described in Section 2, by performing a two-sample test between the observed data {(x_i, y_i)}_{i=1}^n and {(x_i, y_{σ(i)})}_{i=1}^n, where σ is a random permutation. Since the distributions P(X)P(Y) and P(X, Y) are bivariate, only the C2ST-NN, C2ST-KNN, MMD, and ME tests compete in this task. We draw (x_i, y_i) according to the generative model x_i ~ N(0, 1), ε_i ~ N(0, γ²), and y_i ← cos(δ x_i) + ε_i. Here, the x_i are iid examples from the random variable X, and the y_i are iid examples from the random variable Y. Thus, the statistical dependence between X and Y weakens as we increase the frequency δ of the sinusoid, or increase the variance γ² of the additive noise. Figure 1(d, e, f) shows the percentage of type-II errors made by C2ST-NN, C2ST-KNN, MMD, and ME as we vary separately n, δ, and γ, over 100 trials. We let n = 2000, δ = 1, and γ = 0.25 when fixed. Figure 1(d, e, f) reveals that, among all tests, C2ST-NN is the most efficient in terms of sample size, C2ST-KNN is the most robust with respect to high-frequency variations, and C2ST-NN and ME are the most robust with respect to additive noise.

Distinguishing between NIPS articles. We consider the problem of distinguishing between some of the categories of the 5903 articles published in the Neural Information Processing Systems (NIPS) conference from 1988 to 2015, as discussed in Jitkrittum et al. (2016). We consider articles on Bayesian inference (Bayes), neuroscience (Neuro), deep learning (Deep), and statistical learning theory (Learn). Table 1 shows the type-I errors (Bayes-Bayes row) and powers (rest of rows) for the tests reported in (Jitkrittum et al., 2016), together with C2ST-NN, at a significance level α = 0.01, when averaged over 500 trials. In these experiments, C2ST-NN achieves maximum power, while upper-bounding its type-I error by α.

Problem      nte  ME-full  ME-grid  SCF-full  SCF-grid  MMD-quad  MMD-lin  C2ST-NN
Bayes-Bayes  215  .012     .018     .012      .004      .022      .008     .002
Bayes-Deep   216  .954     .034     .688      .180      .906      .262     1.00
Bayes-Learn  138  .990     .774     .836      .534      1.00      .238     1.00
Bayes-Neuro  394  1.00     .300     .828      .500      .952      .972     1.00
Learn-Deep   149  .956     .052     .656      .138      .876      .500     1.00
Learn-Neuro  146  .960     .572     .590      .360      1.00      .538     1.00

Table 1: Type-I errors (first row) and powers (rest of rows) in distinguishing NIPS papers categories.

Distinguishing between facial expressions. Finally, we apply C2ST-NN to the problem of distinguishing between positive (happy, neutral, surprised) and negative (afraid, angry, disgusted) facial expressions from the Karolinska Directed Emotional Faces dataset, as discussed in (Jitkrittum et al., 2016). See the fourth plot of Figure 2, first two rows, for one example of each of these six emotions. Table 2 shows the type-I errors (± vs ± row) and the powers (+ vs - row) for the tests reported in (Jitkrittum et al., 2016), together with C2ST-NN, at α = 0.01, averaged over 500 trials. C2ST-NN achieves a near-optimal power, only marginally behind the perfect results of SCF-full and MMD-quad.

Problem  nte  ME-full  ME-grid  SCF-full  SCF-grid  MMD-quad  MMD-lin  C2ST-NN
± vs ±   201  .010     .012     .014      .002      .018      .008     .002
+ vs -   201  .998     .656     1.00      .750      1.00      .578     .997

Table 2: Type-I errors (first row) and powers (second row) in distinguishing facial expressions.

5 EXPERIMENTS ON GENERATIVE ADVERSARIAL NETWORK EVALUATION

Since effective generative models will produce examples barely distinguishable from real data, two-sample tests arise as a natural alternative to evaluate generative models. In particular, our interest is to evaluate the sample quality of generative models with intractable likelihoods, such as GANs (Goodfellow et al., 2014).
GANs implement the adversarial game

min_g max_d  E_{x~P(X)}[log(d(x))] + E_{z~P(Z)}[log(1 - d(g(z)))],   (3)

where d(x) depicts the probability of the example x following the data distribution P(X), versus being synthesized by the generator, according to a trainable discriminator function d. In the adversarial game, the generator g plays to fool the discriminator d by transforming noise vectors z ~ P(Z) into real-looking examples g(z). On the opposite side, the discriminator plays to distinguish between real examples x and synthesized examples g(z). To approximate the solution to (3), alternate the optimization of the two losses (Goodfellow et al., 2014) given by

L_d(d) = E_x[ℓ(d(x), 1)] + E_z[ℓ(d(g(z)), 0)],
L_g(g) = E_x[ℓ(d(x), 0)] + E_z[ℓ(d(g(z)), 1)].   (4)

Unfortunately, the evaluation of the log-likelihood of a GAN is intractable. Therefore, we will employ a two-sample test to evaluate the quality of the fake examples x̃ = g(z). In simple terms, evaluating a GAN in this manner amounts to withholding some real data from the training process, and using it later in a two-sample test against the same amount of synthesized data. When the two-sample test is a binary classifier (as discussed in Section 3), this procedure simply trains a fresh discriminator on a fresh set of data. Since we train and test this fresh discriminator on held-out examples, it may differ from the discriminator trained along with the GAN. In particular, the discriminator trained along with the GAN may have over-fitted to particular artifacts produced by the generator, thus becoming a poor C2ST.

We evaluate the use of two-sample tests for model selection in GANs. To this end, we train a number of DCGANs (Radford et al., 2016) on the bedroom class of LSUN (Yu et al., 2015) and the Labeled Faces in the Wild (LFW) dataset (Huang et al., 2007). We reused the Torch7 code of Radford et al. (2016) to train a set of DCGANs for {1, 10, 50, 100, 200} epochs, where the generator and discriminator networks are convolutional neural networks (LeCun et al., 1998) with {1, 2, 4, 8} gf and {1, 2, 4, 8} df filters per layer, respectively. We evaluate each DCGAN on 10,000 held-out examples using the fastest multi-dimensional two-sample tests: MMD, C2ST-NN, and C2ST-KNN.

Our first experiments revealed an interesting result. When performing two-sample tests directly on pixels, all tests obtain near-perfect test accuracy when distinguishing between real and synthesized (fake) examples. Such near-perfect accuracy happens consistently across DCGANs, regardless of the visual quality of their examples. This is because, albeit visually appealing, the fake examples contain checkerboard-like artifacts that are sufficient for the tests to consistently differentiate between real and fake examples. Odena et al. (2016) discovered this phenomenon concurrently with us.
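The evaluation protocol above (hold out real images, sample the same number of fake images, train a fresh discriminator as a C2ST) can be sketched as follows. The function name `evaluate_gan` and the `generator`, `featurize`, and `noise_sampler` callables are hypothetical placeholders, not the authors' pipeline.

```python
# Hedged sketch of GAN evaluation with a fresh classifier two-sample test.
import numpy as np

def evaluate_gan(generator, real_heldout, featurize, noise_sampler):
    n = len(real_heldout)
    fake = np.stack([generator(noise_sampler()) for _ in range(n)])
    S_P = featurize(real_heldout)   # features of held-out real images
    S_Q = featurize(fake)           # features of generated images
    t_hat, p = c2st_p_value(S_P, S_Q)  # helper sketched in Section 3
    # Near-chance accuracy (t_hat ~ 0.5) indicates good samples.
    return t_hat, p
```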
On a second series of experiments, we featurize all images (both real and fake) using a deep convolutional ResNet (He et al., 2015) pre-trained on ImageNet, a large dataset of natural images (Russakovsky et al., 2015). In particular, we use the resnet-34 model from Gross & Wilber (2016). Reusing a model pre-trained on natural images ensures that the test will distinguish between real and fake examples based only on natural image statistics, such as Gabor filters, edge detectors, and so on. Such a strategy is similar to perceptual losses (Johnson et al., 2016) and inception scores (Salimans et al., 2016). In short, in order to evaluate how natural the images synthesized by a DCGAN look, one must employ a "natural discriminator". Table 3 shows three GANs producing poor samples and three GANs producing good samples for the LSUN and LFW datasets, according to the MMD, C2ST-KNN, and C2ST-NN tests on top of ResNet features. See Appendix A for the full list of results. Although it is challenging to provide an objective evaluation of our results, we believe that the rankings provided by two-sample tests could serve for efficient early stopping and model selection.

[Table 3: sample images omitted; the extracted statistics, grouped by dataset and by sample quality as described in the text:]

              MMD     KNN     NN
LSUN (poor)   0.158   0.830   0.999
              0.154   0.994   1.000
              0.048   0.962   1.000
LSUN (good)   0.012   0.798   0.964
              0.024   0.748   0.949
              0.019   0.670   0.983
LFW (poor)    0.152   0.940   1.000
              0.222   0.978   1.000
              0.715   1.000   1.000
LFW (good)    0.015   0.817   0.987
              0.020   0.784   0.950
              0.024   0.697   0.971

Table 3: Results on GAN evaluation. Lower test statistics are best. Full results in Appendix A.

Remark 3 (How good is my GAN? Is it overfitting?). Evaluating generative models is a delicate issue (Theis et al., 2016), but two-sample tests may offer some guidance. In particular, good (non-overfitting) generative models should produce similar two-sample test statistics when comparing their generated samples to both the train-set and the test-set samples.³ As a general recipe, prefer generative models that achieve the same and small two-sample test statistic when comparing their generated samples to both the train-set and test-set samples.

³As discussed with Arthur Gretton, if the generative model memorizes the train-set samples, a sufficiently large set of generated samples would reveal such memorization to the two-sample test. This is because some unique samples would appear multiple times in the set of generated samples, but not in the test-set of samples.
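The featurization step above can be sketched as follows, assuming PyTorch/torchvision rather than the authors' Torch7 setup; the helper strips the classification head of a pretrained ResNet-34 and returns pooled activations.

```python
# Hedged sketch of a "natural" image featurizer for the C2ST.
import torch
import torchvision

def make_resnet_featurizer(device="cpu"):
    resnet = torchvision.models.resnet34(pretrained=True)
    # Drop the final fully-connected layer; keep conv body + average pool.
    body = torch.nn.Sequential(*list(resnet.children())[:-1]).to(device).eval()
    def featurize(images):  # images: (N, 3, H, W) tensor, ImageNet-normalized
        with torch.no_grad():
            return body(images.to(device)).flatten(1).cpu().numpy()
    return featurize
```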
5.1 EXPERIMENTS ON INTERPRETABILITY

We illustrate the interpretability power of C2ST. First, the predictive uncertainty of C2ST sheds light on where the two samples under consideration agree or differ. In the first plot of Figure 2, a C2ST-NN separates two bivariate Gaussian distributions with different means. When performing this separation, the C2ST-NN provides an explicit decision boundary that illustrates where the two distributions separate from each other. In the second plot of Figure 2, a C2ST-NN separates a Gaussian distribution from a Student's t distribution with ν = 3, after scaling both to zero-mean and unit-variance. The plot reveals that the peaks of the distributions are their most differentiating feature. Finally, the third plot of Figure 2 displays, for the LFW and LSUN datasets, five examples classified as real with high uncertainty (first row, better looking examples), and five examples classified as fake with high certainty (second row, worse looking examples).

Second, the features learnt by the classifier of a C2ST are also a mechanism to understand the differences between the two samples under study. The fourth plot of Figure 2 shows six examples from the Karolinska Directed Emotional Faces dataset, analyzed in Section 4.1. In that same figure, we arrange the weights of the first linear layer of C2ST-NN into the feature most activated at positive examples (bottom left, positive facial expressions), the feature most activated at negative examples (bottom middle, negative facial expressions), and the "discriminative feature", obtained by subtracting these two features (bottom right). The discriminative feature of C2ST-NN agrees with the one found by Jitkrittum et al. (2016): positive and negative facial expressions are best distinguished at the eyebrows, smile lines, and lips. A similar analysis (Jitkrittum et al., 2016) of the C2ST-NN features in the NIPS article classification problem (Section 4.1) reveals that the features most activated for the "statistical learning theory" category are those associated to the words inequ, tight, power, sign, hypothesi, norm, hilbert. The features most activated for the "Bayesian inference" category are those associated to the words infer, markov, graphic, conjug, carlo, automat, laplac.

[Figure 2 (plots omitted): four panels showing the decision boundary between two Gaussians, a Gaussian versus a Student's t distribution, real and fake examples from LFW and LSUN, and facial expressions with the learned features.] Figure 2: Interpretability of C2ST. The color map corresponds to the value of p(l = 1 | z).

6 EXPERIMENTS ON CONDITIONAL GANS FOR CAUSAL DISCOVERY

In causal discovery, we study the causal structure underlying a set of d random variables X_1, ..., X_d. In particular, we assume that the random variables X_1, ..., X_d share a causal structure described by a collection of Structural Equations, or SEs (Pearl, 2009). More specifically, we assume that the random variable X_i takes values as described by the SE X_i = g_i(Pa(X_i, G), N_i), for all i = 1, ..., d. In this equation, G is a Directed Acyclic Graph (DAG) with vertices associated to each of the random variables X_1, ..., X_d, Pa(X_i, G) denotes the set of random variables which are parents of X_i in the graph G, and N_i is an independent noise random variable that follows the probability distribution P(N_i). Then, we say that X_i → X_j if X_i ∈ Pa(X_j, G), since a change in X_i will cause a change in X_j, as described by the j-th SE.

The goal of causal discovery is to infer the causal graph G given a sample from P(X_1, ..., X_d). For the sake of simplicity, we focus on the discovery of causal relations between two random variables, denoted by X and Y. That is, given the sample D = {(x_i, y_i)}_{i=1}^n ~ P^n(X, Y), our goal is to conclude whether "X causes Y", or "Y causes X". We call this problem cause-effect discovery (Mooij et al., 2016). In the case where X → Y, we can write the cause-effect relationship as

x ~ P(X), n ~ P(N), y ← g(x, n).   (5)

The current state-of-the-art in cause-effect discovery is the family of Additive Noise Models, or ANM (Mooij et al., 2016). These methods assume that the SE (5) allows the expression y ← g(x) + n, and exploit the independence assumption between the cause random variable X and the noise random variable N to analyze the distribution of nonlinear regression residuals, in both causal directions.

Unfortunately, assuming independent additive noise is often too simplistic (for instance, the noise could be heteroskedastic or multiplicative). Because of this reason, we propose to use Conditional Generative Adversarial Networks, or CGANs (Mirza & Osindero, 2014), to address the problem of cause-effect discovery. Our motivation is the shocking resemblance between the generator of a CGAN and the SE (5): the random variable X is the conditioning variable input to the generator, the random variable N is the noise variable input to the generator, and the random variable Y is the
variable synthesized by the generator. Furthermore, CGANs respect the independence between the cause X and the noise N by construction, since n ~ P(N) is independent from all other variables. This way, CGANs bypass the additive noise assumption naturally, and allow arbitrary interactions g(X, N) between the cause variable X and the noise variable N.

To implement our cause-effect inference algorithm in practice, recall that training a CGAN from X to Y minimizes the two following objectives in alternation:

L_d(d) = E_{x,y}[ℓ(d(x, y), 1)] + E_{x,z}[ℓ(d(x, g(x, z)), 0)],
L_g(g) = E_{x,y}[ℓ(d(x, y), 0)] + E_{x,z}[ℓ(d(x, g(x, z)), 1)].   (6)

Our recipe for cause-effect discovery is to learn two CGANs: one with a generator g_y from X to Y to synthesize the dataset D_{X→Y} = {(x_i, g_y(x_i, z_i))}_{i=1}^n, and one with a generator g_x from Y to X to synthesize the dataset D_{Y→X} = {(g_x(y_i, z_i), y_i)}_{i=1}^n. Then, we prefer the causal direction X → Y if the two-sample test statistic between the real sample D and D_{X→Y} is smaller than the one between D and D_{Y→X}. Thus, our method is Occam's razor at play: declare the simplest direction (in terms of conditional generative modeling) as the true causal direction.

Table 4 summarizes the performance of this procedure when applied to the 99 Tübingen cause-effect pairs dataset, version August 2016 (Mooij et al., 2016). RCC is the Randomized Causation Coefficient of Lopez-Paz et al. (2015). The Ensemble-CGAN-C2ST trains 100 CGANs, and decides the causal direction by comparing the top generator obtained in each causal direction, as told by C2ST-KNN. The need to ensemble is a reminder of the unstable behaviour of generative adversarial training, but it also highlights the promise of such models for causal discovery.

Method     ANM-HSIC   IGCI   RCC   CGAN-C2ST   Ensemble   C2ST type
                                   73%         82%        KNN
Accuracy   67%        71%    76%   70%         73%        NN
                                   58%         65%        MMD

Table 4: Results on cause-effect discovery on the Tübingen pairs experiment.
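The decision rule above can be sketched as follows; `infer_direction`, the generator callables `g_xy` and `g_yx`, and the `statistic` argument are hypothetical placeholders standing in for trained CGAN generators and any two-sample statistic (e.g., the C2ST accuracy).

```python
# Hedged sketch of the cause-effect decision rule described above.
import numpy as np

def infer_direction(x, y, g_xy, g_yx, noise_sampler, statistic):
    z = np.stack([noise_sampler() for _ in x])
    D = np.column_stack([x, y])                # real sample
    D_xy = np.column_stack([x, g_xy(x, z)])    # synthetic under X -> Y
    D_yx = np.column_stack([g_yx(y, z), y])    # synthetic under Y -> X
    # Prefer the direction whose synthetic sample is hardest to tell apart.
    return "X->Y" if statistic(D, D_xy) < statistic(D, D_yx) else "Y->X"
```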
7 CONCLUSION

Our take-home message is that modern binary classifiers can be easily turned into powerful two-sample tests. We have shown that these classifier two-sample tests set a new state-of-the-art in performance, and enjoy unique attractive properties: they are easy to implement, learn a representation of the data on the fly, have simple asymptotic distributions, and allow different ways to interpret how the two samples under study differ. Looking into the future, the use of binary classifiers as two-sample tests provides a flexible and scalable approach for the evaluation and comparison of generative models (such as GANs), and opens the door to novel applications of these methods, such as causal discovery.

REFERENCES

Y. Bengio, L. Yao, and K. Cho. Bounding the test log-likelihood of generative models. arXiv, 2013.
J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning. Springer, 2001.
J. H. Friedman. On multivariate goodness of fit and two sample testing. eConf, 2003.
M. U. Gutmann and A. Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. JMLR, 2012.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CVPR, 2015.
G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical report, University of Massachusetts, Amherst, 2007.
W. Jitkrittum, Z. Szabó, K. Chwialkowski, and A. Gretton. Interpretable distribution features with maximum testing power. NIPS, 2016.
J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. ECCV, 2016.
T. Kanamori, T. Suzuki, and M. Sugiyama. f-divergence estimation and two-sample homogeneity test under semiparametric density-ratio models. arXiv, 2010.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
N. H. Kuiper. Tests concerning random points on a circle. Nederl. Akad. Wetensch. Proc., 63, 1962.
E. L. Lehmann and J. P. Romano. Testing statistical hypotheses. Springer, 2006.
M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv, 2014.
S. Mohamed and B. Lakshminarayanan. Learning in implicit generative models. arXiv, 2016.
J. M. Mooij, J. Peters, D. Janzing, J. Zscheischler, and B. Schölkopf. Distinguishing cause from effect using observational data: methods and benchmarks. JMLR, 2016.
S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. NIPS, 2016.
A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. http://distill.pub/2016/deconv-checkerboard/, 2016.
E. Olivetti, S. Greiner, and P. Avesani. Induction in neuroscience with classification: issues and solutions. In Machine Learning and Interpretation in Neuroimaging, 2012.
S. J. Reddi, A. Ramdas, B. Póczos, A. Singh, and L. A. Wasserman. On the high dimensional power of a linear-time two sample test under mean-shift alternatives. AISTATS, 2015.
M. D. Reid and R. C. Williamson. Information, divergence and risk for binary experiments. JMLR, 2011.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2015.
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. NIPS, 2016.
N. V. Smirnov. On the estimation of the discrepancy between empirical curves of distribution for two independent samples. Bull. Math. Univ. Moscou, 1939.
Student. The probable error of a mean. Biometrika, 1908.
F. Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1945.
Table 5: GAN evaluation results on the LSUN dataset, for all epochs (ep), filters in discriminator (df), filters in generator (gf), and test statistics (for MMD, C2ST-KNN, C2ST-NN). A lower test statistic estimates that the GAN produces better samples. Best viewed with zoom. [The full numeric grid of (gf, df, ep) configurations was garbled during extraction and is omitted here.]

Table 6: GAN evaluation results on the LFW dataset, for all epochs (ep), filters in discriminator (df), filters in generator (gf), and test statistics (for MMD, C2ST-KNN, C2ST-NN). A lower test statistic estimates that the GAN produces better samples. Best viewed with zoom. [The full numeric grid was garbled during extraction and is omitted here.]

Proof of Theorem 1. Under H0, the statistic t̂ approximately follows N(1/2, 1/(4 n_te)), so a test of significance level α rejects H0 when t̂ > 1/2 + Φ^{-1}(1 - α)/(2√n_te). Under H1, t̂ approximately follows N(1/2 + ε, (1/4 - ε²)/n_te). Therefore, the power of the test is

π(α, n_te, ε) = P( t̂ > 1/2 + Φ^{-1}(1 - α)/(2√n_te) | H1 ) = 1 - Φ( (Φ^{-1}(1 - α)/2 - ε √n_te) / √(1/4 - ε²) ),

which concludes the proof.

ACKNOWLEDGEMENTS

We are thankful to L. Bottou, B. Graham, D. Kiela, M. Rojas-Carulla, I. Tolstikhin, and M. Tygert for their help in improving the quality of this manuscript. This work was partly supported by ERC grant LEAP (no. 336845) and the CIFAR Learning in Machines & Brains program."}]
BJYwwY9ll
[{"section_index": "0", "section_name": "SNAPSHOT ENSEMBLES: TRAIN 1. GET M FOR FREE", "section_text": "Gao Huang*, Yixuan Li* Geoff Pleiss\n{gh349, y12363}@cornell.edu, geoff@cs.cornell.edu\nTsinghua University\nliuzhuangthu@gmail.com\nEnsembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model. averaging is computationally expensive. In this paper, we propose a method to. obtain the seemingly contradictory goal of ensembling multiple neural networks. at no additional training cost. We achieve this goal by training a single neural net-. work, converging to several local minima along its optimization path and saving. the model parameters. To obtain repeated rapid convergence, we leverage recent. work on cyclic learning rate schedules. The resulting technique, which we refer to. as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series. of experiments that our approach is compatible with diverse network architectures. and learning tasks. It consistently yields lower error rates than state-of-the-art. single models at no additional training cost, and compares favorably with tradi-. tional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot. Ensembles obtain error rates of 3.4% and 17.4% respectively.\nStochastic Gradient Descent (SGD) (Bottou]2010) and its accelerated variants (Kingma & Ba]2014 Duchi et al.2011) have become the de-facto approaches for optimizing deep neural networks. The. popularity of SGD can be attributed to its ability to avoid and even escape spurious saddle-points and local minima (Dauphin et al.|2014). Although avoiding these spurious solutions is generally considered positive, in this paper we argue that these local minima contain useful information that may in fact improve model performance..\nAlthough deep networks typically never converge to a global minimum, there is a notion of \"good and \"bad\" local minima with respect to generalization.Keskar et al.(2016) argue that local minim with flat basins tend to generalize better. SGD tends to avoid sharper local minima because gradient are computed from small mini-batches and are therefore inexact (Keskar et al.2016). If the learning rate is sufficiently large, the intrinsic random motion across gradient steps prevents the optimize from reaching any of the sharp basins along its optimization path. However, if the learning rat is small, the model tends to converge into the closest local minimum. These two very differen behaviors of SGD are typically exploited in different phases of optimization (He et al.]2016a) Initially the learning rate is kept high to move into the general vicinity of a flat local minimum. Onc this search has reached a stage in which no further progress is made, the learning rate is droppec (once or twice), triggering a descent, and ultimately convergence, to the final local minimum.\nIt is well established (Kawaguchi2016) that the number of possible local minima grows expo-. nentially with the number of parameters-of which modern neural networks can have millions. It. is therefore not surprising that two identical architectures optimized with different initializations or minibatch orderings will converge to different solutions. Although different local minima often have very similar error rates, the corresponding neural networks tend to make different mistakes. This.\nAuthors contribute equally"}, {"section_index": "1", "section_name": "John E. Hopcroft, Kilian Q. 
Weinberger", "section_text": "jeh@cs.cornell.edu, kqw4@cornell.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "0.5 Single Model. 0.5 Snapshot Ensemble. Standard LR Schedule 0.4 Cyclic LR Schedule. 0.4 0.3 0.3 - 0.2 0.2 - 0.1 0.1 0 0. -0.1 0.1 -0.2 -0.2 0.3 0.3 -0.4 0.4 50 50 50 50 40 40 40 40 30 30 30 30 20 20 20 20\n0.5 Single Model 0.5 Snapshot Ensemble Standard LR Schedule 0.4 Cyclic LR Schedule 0.4 0.3 - 0.3 0.2 - 0.2 0.1 0.1 0 0 0.1 -0.1 -0.2 -0.2 0.3 0.3 -0.4 -0.4 50 50 50 50 40 40 40 40 30 30 30 30 20 20 20 20\nFigure 1: Left: Illustration of SGD optimization with a typical learning rate schedule. The model converges to a minimum at the end of training. Right: Illustration of Snapshot Ensembling. The model undergoes severa learning rate annealing cycles, converging to and escaping from multiple local minima. We take a snapshot a each minimum for test-time ensembling.\nDespite its obvious advantages, the use of ensembling for deep networks is not nearly as wide spread as it is for other algorithms. One likely reason for this lack of adaptation may be the cost of learning multiple neural networks. Training deep networks can last for weeks, even on high performance hardware with GPU acceleration. As the training cost for ensembles increases linearly ensembles can quickly becomes uneconomical for most researchers without access to industrial scale computational resources.\nIn this paper we focus on the seemingly-contradictory goal of learning an ensemble of multiple. neural networks without incurring any additional training costs. We achieve this goal with a training. method that is simple and straight-forward to implement. Our approach leverages the non-convex. nature of neural networks and the ability of SGD to converge to and escape from local minima or demand. Instead of training M neural networks independently from scratch, we let SGD converge. M times to local minima along its optimization path. Each time the model converges, we save the. weights and add the corresponding network to our ensemble. We then restart the optimization witl. a large learning rate to escape the current local minimum. More specifically, we adopt the cycling. procedure suggested byLoshchilov & Hutter (2016), in which the learning rate is abruptly raised anc. then quickly lowered to follow a cosine function. Because our final ensemble consists of snapshot. of the optimization path, we refer to our approach as Snapshot Ensembling.Figure 1presents a. high-level overview of this method.\nIn contrast to traditional ensembles, the training time for the entire ensemble is identical to the time required to train a single traditional model. During testing time, one can evaluate and average the. last (and therefore most accurate) m out of M models. Our approach is naturally compatible with. other methods to improve the accuracy, such as data augmentation, stochastic depth (Huang et al.. 2016b), or batch normalization (Ioffe & Szegedy2015). In fact, Snapshot Ensembles can even be ensembled, if for example parallel resources are available during training. In this case, an ensemble. of K Snapshot Ensembles yields K M models at K times the training cost..\nWe evaluate the efficacy of Snapshot Ensembles on three state-of-the-art deep learning architectures for object recognition: ResNet (He et al.2016b), Wide-ResNet (Zagoruyko & Komodakis] 2016) and DenseNet (Huang et al.||2016a). We show across four different data sets that Snapshot Ensem-. 
bles almost always reduce error without increasing training costs. For example, on CIFAR-10 and CIFAR-100, Snapshot Ensembles obtain error rates of 3.44% and 17.41%, respectively.

As an alternative to traditional ensembles, so-called "implicit" ensembles have high efficiency during both training and testing (Srivastava et al., 2014; Wan et al., 2013; Huang et al., 2016b; Singh et al., 2016; Krueger et al., 2016). The Dropout (Srivastava et al., 2014) technique creates an ensemble out of a single model by "dropping", or zeroing, random sets of hidden nodes during each mini-batch. At test time, no nodes are dropped, and each node is scaled by the probability of surviving during training. Srivastava et al. claim that Dropout reduces overfitting by preventing the co-adaptation of nodes. An alternative explanation is that this mechanism creates an exponential number of networks with shared weights during training, which are then implicitly ensembled at test time. DropConnect (Wan et al., 2013) uses a similar trick to create ensembles at test time by dropping connections (weights) during training instead of nodes. The recently proposed Stochastic Depth technique (Huang et al., 2016b) randomly drops layers during training to create an implicit ensemble of networks with varying depth at test time. Finally, Swapout (Singh et al., 2016) is a stochastic training method that generalizes Dropout and Stochastic Depth. From the perspective of model ensembling, Swapout creates diversified network structures for model averaging. Our proposed method similarly trains only a single model; however, the resulting ensemble is "explicit" in that the models do not share weights. Furthermore, our method can be used in conjunction with any of these implicit ensembling techniques.

Several recent publications focus on reducing the test time cost of ensembles, by transferring the "knowledge" of cumbersome ensembles into a single model (Bucilu et al., 2006; Hinton et al., 2015). Hinton et al. (2015) propose to use an ensemble of multiple networks as the target of a single (smaller) network. Our proposed method is complementary to these works, as we aim to reduce the training cost of ensembles rather than the test-time cost.

Perhaps most similar to our work is that of Swann & Allinson (1998) and Xie et al. (2013), who explore creating ensembles from slices of the learning trajectory. Xie et al. introduce the horizontal and vertical ensembling method, which combines the output of networks within a range of training epochs. More recently, Jean et al. (2014) and Sennrich et al. (2016) show improvements by ensembling the intermediate stages of model training. Laine & Aila (2016) propose a temporal ensembling method for semi-supervised learning, which achieves consensus among models trained with different regularization and augmentation conditions for better generalization performance. Finally, Moghimi et al. (2016) show that boosting can be applied to convolutional neural networks to create strong ensembles.
Our work differs from these prior works in that we force the model to visit multiple local minima, and we take snapshots only when the model reaches a minimum. We believe this key insight allows us to leverage more power from our ensembles.

Snapshot Ensembling produces an ensemble of accurate and diverse models from a single training process. At the heart of Snapshot Ensembling is an optimization process which visits several local minima before converging to a final solution. We take model snapshots at these various minima, and average their predictions at test time.

Our work is inspired by the recent findings of Loshchilov & Hutter (2016) and Smith (2016), who show that cyclic learning rates can be effective for training convolutional neural networks. The authors show that each cycle produces models which are (almost) competitive to those learned with traditional learning rate schedules while requiring a fraction of training iterations. Although model performance temporarily suffers when the learning rate cycle is restarted, the performance eventually surpasses the previous cycle after annealing the learning rate. The authors suggest that cycling perturbs the parameters of a converged model, which allows the model to find a better local minimum. We build upon these recent findings by (1) showing that there is significant diversity in the local minima visited during each cycle and (2) exploiting this diversity using ensembles. We are not concerned with speeding up or improving the training of a single model; rather, our goal is to extract an ensemble of classifiers while following the optimization path of the final model.

Ensembles work best if the individual models (1) have low test error and (2) do not overlap in the set of examples they misclassify. Along most of the optimization path, the weight assignments of a neural network tend not to correspond to low test error. In fact, it is commonly observed that the validation error drops significantly only after the learning rate has been reduced, which is typically done after several hundred epochs. Our approach is inspired by the observation that training neural networks for fewer epochs and dropping the learning rate earlier has minor impact on the final test error (Loshchilov & Hutter, 2016). This seems to suggest that local minima along the optimization path become promising (in terms of generalization error) after only a few epochs.

Cyclic Cosine Annealing. To converge to multiple local minima, we follow a cyclic annealing schedule as proposed by Loshchilov & Hutter (2016). We lower the learning rate at a very fast pace, encouraging the model to converge towards its first local minimum after as few as 50 epochs. The optimization is then continued at a larger learning rate, which perturbs the model and dislodges it from the minimum. We repeat this process several times to obtain multiple convergences. Formally, the learning rate α has the form

α(t) = f(mod(t - 1, ⌈T/M⌉)),   (1)

where t is the iteration number, T is the total number of training iterations, and f is a monotonically decreasing function. In other words, we split the training process into M cycles, each of which starts with a large learning rate that is annealed to a smaller learning rate. The large learning rate α = f(0) provides the model enough energy to escape from a critical point, while the small learning rate α = f(⌈T/M⌉) drives the model to a well-behaved local minimum. In our experiments, we set f to be the shifted cosine function proposed by Loshchilov & Hutter (2016):

α(t) = (α₀/2) ( cos( π mod(t - 1, ⌈T/M⌉) / ⌈T/M⌉ ) + 1 ),   (2)

where α₀ is the initial learning rate. Intuitively, this function anneals the learning rate from its initial value α₀ to f(⌈T/M⌉) ≈ 0 over the course of a cycle. Following Loshchilov & Hutter (2016), we update the learning rate at each iteration rather than at every epoch. This improves the convergence of short cycles, even when a large initial learning rate is used.
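The schedule in Eqs. (1)-(2) is a few lines of code; the sketch below is our own transcription (function name `snapshot_lr` included), not the authors' implementation.

```python
# Cyclic cosine-annealing learning rate of Eqs. (1)-(2).
# t: 1-based iteration index; T: total iterations; M: number of cycles.
import math

def snapshot_lr(t, T, M, alpha_0):
    cycle_len = math.ceil(T / M)
    x = (t - 1) % cycle_len              # position within the current cycle
    return alpha_0 / 2.0 * (math.cos(math.pi * x / cycle_len) + 1.0)

# A snapshot is taken whenever t % cycle_len == 0, i.e., at each cycle end.
```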
$\alpha = f(\lceil T/M \rceil)$ drives the model to a well-behaved local minimum. In this work, we set $f$ to be the shifted cosine function proposed by Loshchilov & Hutter (2016):

$$\alpha(t) = \frac{\alpha_0}{2}\left(\cos\left(\frac{\pi \, \mathrm{mod}(t-1, \lceil T/M \rceil)}{\lceil T/M \rceil}\right) + 1\right) \quad (2)$$

where $\alpha_0$ is the initial learning rate. Intuitively, this function anneals the learning rate from its initial value $\alpha_0$ to $f(\lceil T/M \rceil) \approx 0$ over the course of a cycle. Following Loshchilov & Hutter (2016), we update the learning rate at each iteration rather than at every epoch. This improves the convergence of short cycles, even when a large initial learning rate is used.

Ensembling at Test Time. The ensemble prediction at test time is the average of the last $m$ ($m \le M$) models' softmax outputs. Let $x$ be a test sample and let $h_i(x)$ be the softmax score of snapshot $i$. The output of the ensemble is a simple average of the last $m$ models: $h_{\text{Ensemble}}(x) = \frac{1}{m}\sum_{i=0}^{m-1} h_{M-i}(x)$. We always ensemble the last $m$ models, as these models tend to have the lowest test error.

Figure 2: Training loss of 100-layer DenseNet on CIFAR-10 using a standard learning rate (blue) and M = 6 cosine annealing cycles (red). The intermediate models, denoted by the dotted lines, form an ensemble at the end of training.

Table 1: Error rates (%) on CIFAR-10 and CIFAR-100 datasets. All methods in the same group are trained for the same number of iterations. Results of our method are colored in blue, and the best result for each network/dataset pair is bolded in the original. * indicates numbers taken directly from Huang et al. (2016a).

Method (C10 / C100 / SVHN / Tiny ImageNet)

ResNet-110:
  Single model                  5.52 / 28.02 / 1.96 / 46.50
  NoCycle Snapshot Ensemble     5.49 / 26.97 / 1.78 / 43.69
  SingleCycle Ensembles         6.66 / 24.54 / 1.74 / 42.60
  Snapshot Ensemble (α0 = 0.1)  5.73 / 25.55 / 1.63 / 40.54
  Snapshot Ensemble (α0 = 0.2)  5.32 / 24.19 / 1.66 / 39.40

Wide-ResNet-32:
  Single model                  5.43 / 23.55 / 1.90 / 39.63
  Dropout                       4.68 / 22.82 / 1.81 / 36.58
  NoCycle Snapshot Ensemble     5.18 / 22.81 / 1.81 / 38.64
  SingleCycle Ensembles         5.95 / 21.38 / 1.65 / 35.53
  Snapshot Ensemble (α0 = 0.1)  4.41 / 21.26 / 1.64 / 35.45
  Snapshot Ensemble (α0 = 0.2)  4.73 / 21.56 / 1.51 / 32.90

DenseNet-40:
  Single model                  5.24* / 24.42* / 1.77 / 39.09
  Dropout                       6.08 / 25.79 / 1.79* / 39.68
  NoCycle Snapshot Ensemble     5.20 / 24.63 / 1.80 / 38.51
  SingleCycle Ensembles         5.43 / 22.51 / 1.87 / 38.00
  Snapshot Ensemble (α0 = 0.1)  4.99 / 23.34 / 1.64 / 37.25
  Snapshot Ensemble (α0 = 0.2)  4.84 / 21.93 / 1.73 / 36.61

DenseNet-100:
  Single model                  3.74* / 19.25* / - / -
  Dropout                       3.65 / 18.77 / - / -
  NoCycle Snapshot Ensemble     3.80 / 19.30 / - / -
  SingleCycle Ensembles         4.52 / 18.38 / - / -
  Snapshot Ensemble (α0 = 0.1)  3.57 / 18.12 / - / -
  Snapshot Ensemble (α0 = 0.2)  3.44 / 17.41 / - / -

Snapshot Ensembling. Figure 2 depicts the training process using cyclic and traditional learning rate schedules. At the end of each training cycle, it is apparent that the model reaches a local minimum with respect to the training loss. Thus, before raising the learning rate, we take a "snapshot" of the model weights (indicated as vertical dashed black lines). After training M cycles, we have M model snapshots, f1 ... fM, each of which will be used in the final ensemble. It is important to highlight that the total training time of the M snapshots is the same as training a model with a standard schedule (indicated in blue). In some cases, the standard learning rate schedule achieves lower training loss than the cyclic schedule; however, as we will show in the next section, the benefits of ensembling outweigh this difference.
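To make the schedule concrete, the following is a minimal Python sketch of Eq. (1)-(2) together with the snapshot-taking loop described above. The helpers `train_one_iteration` and `save_snapshot` are hypothetical placeholders (one SGD step at the given rate, and serializing the current weights), not functions from the paper.

```python
import math

def snapshot_lr(t, T, M, alpha0=0.1):
    """Cyclic cosine annealing from Eq. (2): t is the 1-indexed iteration,
    T the total number of iterations, M the number of cycles, and alpha0
    the (restart) learning rate at the start of each cycle."""
    cycle_len = math.ceil(T / M)
    t_cur = (t - 1) % cycle_len  # progress within the current cycle
    return alpha0 / 2.0 * (math.cos(math.pi * t_cur / cycle_len) + 1.0)

def train_with_snapshots(model, T, M, alpha0=0.1):
    """Sketch of the training loop: one snapshot per annealing cycle."""
    cycle_len = math.ceil(T / M)
    snapshots = []
    for t in range(1, T + 1):
        lr = snapshot_lr(t, T, M, alpha0)
        train_one_iteration(model, lr)   # hypothetical: one SGD step at rate lr
        if t % cycle_len == 0:           # end of a cycle: near a local minimum
            snapshots.append(save_snapshot(model))  # hypothetical serializer
    return snapshots
```

Note that the rate is recomputed at every iteration rather than per epoch, matching the schedule used above.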
{"section_index": "3", "section_name": "4.1 DATASETS", "section_text": "CIFAR. The two CIFAR datasets (Krizhevsky & Hinton, 2009) consist of colored natural images sized at 32x32 pixels. CIFAR-10 (C10) and CIFAR-100 (C100) images are drawn from 10 and 100 classes, respectively. For each dataset, there are 50,000 training images and 10,000 images reserved for testing. We use a standard data augmentation scheme (Lin et al., 2013; Romero et al., 2014; Lee et al., 2015; Springenberg et al., 2014; Srivastava et al., 2015; Huang et al., 2016b; Larsson et al., 2016), in which the images are zero-padded with 4 pixels on each side, randomly cropped to produce 32x32 images, and horizontally mirrored with probability 0.5.

SVHN. The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) contains 32x32 colored digit images from Google Street View, with one class for each digit. There are 73,257 images in the training set and 26,032 images in the test set. Following common practice (Sermanet et al., 2012; Goodfellow et al., 2013; Huang et al., 2016a), we withhold 6,000 training images for validation, and train on the remaining images without data augmentation.

Tiny ImageNet. The Tiny ImageNet dataset consists of a subset of ImageNet images (Deng et al., 2009). There are 200 classes, each of which has 500 training images and 50 validation images. Each image is resized to 64x64 and augmented with random crops, horizontal mirroring, and RGB intensity scaling (Krizhevsky et al., 2012).

ImageNet. The ILSVRC 2012 classification dataset (Deng et al., 2009) consists of 1000 image classes, with a total of 1.2 million training images and 50,000 validation images. We adopt the same data augmentation scheme as in He et al. (2016a).
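As an illustration, the standard CIFAR augmentation described above can be written as a short torchvision pipeline. This is our sketch, not the authors' code, and the normalization statistics are commonly used CIFAR-10 values that we assume here rather than values stated in the paper.

```python
import torchvision.transforms as transforms

# Assumed per-channel statistics; substitute those of your dataset.
CIFAR_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR_STD = (0.2470, 0.2435, 0.2616)

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),  # zero-pad 4 px, random 32x32 crop
    transforms.RandomHorizontalFlip(),     # mirror with probability 0.5
    transforms.ToTensor(),
    transforms.Normalize(CIFAR_MEAN, CIFAR_STD),
])
```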
{"section_index": "4", "section_name": "4.2 TRAINING SETTING", "section_text": "Architectures. We test several state-of-the-art architectures, including residual networks (ResNet) (He et al., 2016a), Wide ResNet (Zagoruyko & Komodakis, 2016) and DenseNet (Huang et al., 2016a). For ResNet, we use the original 110-layer network introduced by He et al. (2016a). Wide-ResNet is a 32-layer ResNet with 4 times as many convolutional features per layer as a standard ResNet. For DenseNet, our large model follows the same setup as Huang et al. (2016a), with depth L = 100 and growth rate k = 24. In addition, we also evaluate our method on a small DenseNet, with depth L = 40 and k = 12. To adapt all these networks to Tiny ImageNet, we add a stride of 2 to the first layer of the models, which downsamples the images to 32x32. For ImageNet, we test the 50-layer ResNet proposed in He et al. (2016a). We use a mini-batch size of 64.(4)

Baselines. Snapshot Ensembles incur the training cost of a single model; therefore, we compare with baselines that require the same amount of training. First, we compare against a Single Model trained with a standard learning rate schedule, dropping the learning rate from 0.1 to 0.01 halfway through training, and then to 0.001 when training is at 75%. Additionally, to compare against implicit ensembling methods, we test against a single model trained with Dropout. This baseline uses the same learning rate as above, and drops nodes during training with a probability of 0.2.

We then test the Snapshot Ensemble algorithm trained with the cyclic cosine learning rate as described in (2). We test models with the max learning rate α0 set to 0.1 and 0.2. In both cases, we divide the training process into learning rate cycles. Model snapshots are taken after each learning rate cycle. Additionally, we train a Snapshot Ensemble with a non-cyclic learning rate schedule. This NoCycle Snapshot Ensemble, which uses the same schedule as the Single Model and Dropout baselines, is meant to highlight the impact of cyclic learning rates for our method. To accurately compare with the cyclic Snapshot Ensembles, we take the same number of snapshots, equally spaced throughout the training process. Finally, we compare against SingleCycle Ensembles, a Snapshot Ensemble variant in which the network is re-initialized at the beginning of every cosine learning rate cycle, rather than using the parameters from the previous optimization cycle. This baseline essentially creates a traditional ensemble, yet each network only has 1/M of the typical training time. This variant is meant to highlight the tradeoff between model diversity and model convergence. Though SingleCycle Ensembles should in theory explore more of the parameter space, the models do not benefit from the optimization of previous cycles.

Training Budget. On CIFAR datasets, the training budget is B = 300 epochs for DenseNet-40 and DenseNet-100, and B = 200 for ResNet and Wide ResNet models. Snapshot variants are trained with M = 6 cycles of B/M = 50 epochs for DenseNets, and M = 5 cycles of B/M = 40 epochs for ResNets/Wide ResNets. SVHN models are trained with a budget of B = 40 epochs (5 cycles of 8 epochs). For Tiny ImageNet, we use a training budget of B = 150 (6 cycles of 25 epochs). Finally, ImageNet is trained with a budget of B = 90 epochs, and we trained 2 Snapshot variants: one with M = 2 cycles and one with M = 3.

(4) Exceptions: ResNet-110 and Wide-ResNet are trained with batch size 128 on Tiny ImageNet. The ImageNet model is trained with batch size 256.

Figure 3: DenseNet-100 Snapshot Ensemble performance on CIFAR-10 and CIFAR-100 with restart learning rate α0 = 0.1 (left two panels) and α0 = 0.2 (right two panels). Each ensemble is trained with M = 6 annealing cycles (50 epochs each).

{"section_index": "5", "section_name": "4.3 SNAPSHOT ENSEMBLE RESULTS", "section_text": "Accuracy. The main results are summarized in Table 1. In most cases, Snapshot Ensembles achieve lower error than any of the baseline methods. Most notably, Snapshot Ensembles yield an error rate of 17.41% on CIFAR-100 using large DenseNets, far outperforming the record of 19.25% under the same training cost and architecture (Huang et al., 2016a). Our method has the most success on CIFAR-100 and Tiny ImageNet, which is likely due to the complexity of these datasets.
The softmax outputs for these datasets are high dimensional due to the large number of classes, making it unlikely that any two models make the same predictions. Snapshot Ensembling is also capable of improving the competitive baselines on CIFAR-10 and SVHN as well, reducing error by 1% and 0.4% respectively with the Wide ResNet architecture.

The NoCycle Snapshot Ensemble generally has little effect on performance, and in some instances even increases the test error. This highlights the need for a cyclic learning rate for useful ensembling. The SingleCycle Ensemble has similarly mixed performance. In some cases, e.g., DenseNet-40 on CIFAR-100, the SingleCycle Ensemble is competitive with Snapshot Ensembles. However, as the model size increases to 100 layers, it does not perform as well. This is because it is difficult to train a large model from scratch in only a few epochs. These results demonstrate that Snapshot Ensembles tend to work best when utilizing information from previous cycles. Effectively, Snapshot Ensembles strike a balance between model diversity and optimization.

Table 2 shows Snapshot Ensemble results on ImageNet. The Snapshot Ensemble with M = 2 achieves 23.33% validation error, outperforming the single model baseline with 24.01% validation error. It appears that 2 cycles is the optimal choice for the ImageNet dataset. Provided with the limited total training budget B = 90 epochs, we hypothesize that allocating fewer than B/2 = 45 epochs per training cycle is insufficient for the model to converge on such a large dataset.

Method                        Val. Error (%)
Single model                  24.01
Snapshot Ensemble (M = 2)     23.33
Snapshot Ensemble (M = 3)     23.96

Table 2: Top-1 error rates (%) on the ImageNet validation set using ResNet-50 with varying number of cycles.

Ensemble Size. In some applications, it may be beneficial to vary the size of the ensemble dynamically at test time depending on available resources. Figure 3 displays the performance of DenseNet-40 on the CIFAR-100 dataset as the effective ensemble size, m, is varied. Each ensemble consists of snapshots from later cycles, as these snapshots have received the most training and therefore have likely converged to better minima. Although ensembling more models generally gives better performance, we observe significant drops in error when the second and third models are added to the ensemble. In most cases, an ensemble of two models outperforms the baseline model.

Restart Learning Rate. The effect of the restart learning rate can be observed in Figure 3. The left two plots show performance when using a restart learning rate of α0 = 0.1 at the beginning of each cycle, and the right two plots show α0 = 0.2. In most cases, ensembles with the larger restart learning rate perform better, presumably because the strong perturbation in between cycles increases the diversity of local minima.

Varying Number of Cycles. Given a fixed training budget, there is a trade-off between the number of learning rate cycles and their length. Therefore, we investigate how the number of cycles M affects the ensemble performance, given a fixed training budget. We train a 40-layer DenseNet on the CIFAR-100 dataset with an initial learning rate of α0 = 0.2. We fix the total training budget at B = 300 epochs, and vary the value of M ∈ {2, 4, 6, 8, 10}. As shown in Table 3, our method is relatively robust with respect to different values of M. At the extremes, M = 2 and M = 10, we find a slight degradation in performance, as the cycles are either too few or too short. In practice, we find that setting M to be 4 to 8 works reasonably well.

M     Test Error (%)
2     22.92
4     22.07
6     21.93
8     21.89
10    22.16

Table 3: Error rates of a DenseNet-40 Snapshot Ensemble on CIFAR-100, varying M, the number of models (cycles) used in the ensemble.
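For concreteness, the test-time rule evaluated in the ensemble-size experiments above, averaging the softmax outputs of the last m snapshots (Section 3), can be sketched as follows. `predict_softmax` is a hypothetical helper that runs one snapshot on a batch and returns class probabilities.

```python
import numpy as np

def ensemble_predict(snapshots, x, m=None):
    """Average the softmax outputs of the last m snapshots.
    `snapshots` is the list produced by the training loop sketched earlier,
    ordered from first to last cycle; if m is None, all M snapshots are used."""
    models = snapshots if m is None else snapshots[-m:]
    probs = np.stack([predict_softmax(f, x) for f in models])  # (m, N, classes)
    return probs.mean(axis=0)  # h_Ensemble: simple average of softmax scores
```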
Varying Training Budget. The left and middle panels of Figure 4 show the performance of Snapshot Ensembles and SingleCycle Ensembles as a function of training budget (where the number of cycles is fixed at M = 6). We train a 40-layer DenseNet on CIFAR-10 and CIFAR-100, with an initial learning rate of α0 = 0.1, varying the total number of training epochs from 60 to 300. We observe that both Snapshot Ensembles and SingleCycle Ensembles become more accurate as the training budget increases. However, we note that as the training budget decreases, Snapshot Ensembles still yield competitive results, while the performance of the SingleCycle Ensembles degrades rapidly. These results highlight the improvements that Snapshot Ensembles obtain when the budget is low. If the budget is high, the SingleCycle baseline approaches true ensembles and eventually outperforms Snapshot Ensembles.

Figure 4: Snapshot Ensembles under different training budgets on (Left) CIFAR-10 and (Middle) CIFAR-100. Right: Comparison of Snapshot Ensembles with true ensembles.

Comparison with True Ensembles. We compare Snapshot Ensembles with the traditional ensembling method. The right panel of Figure 4 shows the test error rates of DenseNet-40 on CIFAR-100. The true ensemble method averages models that are trained with 300 full epochs, each with different weight initializations. Given the same number of models at test time, the error rate of the true ensemble can be seen as a lower bound of our method. Our method achieves performance that is comparable with ensembling of 2 independent models, but with the training cost of one model.

{"section_index": "6", "section_name": "4.4 DIVERSITY OF MODEL ENSEMBLES", "section_text": "Parameter Space. We hypothesize that the cyclic learning rate schedule creates snapshots which are not only accurate but also diverse with respect to model predictions.
We qualitatively measure this diversity by visualizing the local minima that models converge to. To do so, we linearly interpolate snapshot models, as described by Goodfellow et al. (2014). Let $J(\theta)$ be the test error of a model using parameters $\theta$. Given $\theta_1$ and $\theta_2$, the parameters from models 1 and 2 respectively, we can compute the loss for a convex combination of model parameters: $J(\lambda \theta_1 + (1-\lambda)\theta_2)$, where $\lambda$ is a mixing coefficient. Setting $\lambda$ to 1 results in parameters that are entirely $\theta_1$, while setting $\lambda$ to 0 gives the parameters $\theta_2$. By sweeping the values of $\lambda$, we can examine a linear slice of the parameter space. Two models that converge to a similar minimum will have smooth parameter interpolations, whereas models that converge to different minima will likely have a non-convex interpolation, with a spike in error when $\lambda$ is between 0 and 1.

Figure 5 displays interpolations between the final model of DenseNet-40 (sixth snapshot) and all intermediate snapshots. The left two plots show Snapshot Ensemble models trained with a cyclic learning rate, while the right two plots show NoCycle Snapshot models. $\lambda = 0$ represents a model which is entirely snapshot parameters, while $\lambda = 1$ represents a model which is entirely the parameters of the final model. From this figure, it is clear that there are differences between cyclic and non-cyclic learning rate schedules. Firstly, all of the cyclic snapshots achieve roughly the same error as the final cyclical model, as the error is similar for $\lambda = 0$ and $\lambda = 1$. Additionally, it appears that most snapshots do not lie in the same minimum as the final model. Thus the snapshots are likely to misclassify different samples. Conversely, the first three non-cyclic snapshots achieve much higher error than the final model. This can be observed by the sharp minima around $\lambda = 1$, which suggests that mixing in any amount of the snapshot parameters will worsen performance. While the final two snapshots achieve low error, the figure suggests that they lie in the same minimum as the final model, and therefore likely add limited diversity to the ensemble.

Figure 5: Interpolations in parameter space between the final model (sixth snapshot) and all intermediate snapshots. $\lambda = 0$ represents an intermediate snapshot model, while $\lambda = 1$ represents the final model. Left: A Snapshot Ensemble, with cosine annealing cycles ($\alpha_0 = 0.2$ every B/M = 50 epochs). Right: A NoCycle Snapshot Ensemble (two learning rate drops, snapshots every 50 epochs).
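A minimal sketch of the interpolation probe described above, assuming a hypothetical `eval_error` helper that loads a flat parameter vector into the network and returns its test error $J(\theta)$:

```python
import numpy as np

def interpolation_errors(theta1, theta2, eval_error,
                         lambdas=np.linspace(0.0, 1.0, 21)):
    """Sweep J(lambda * theta1 + (1 - lambda) * theta2) along the line
    segment between two flat parameter vectors theta1 and theta2."""
    return [eval_error(lam * theta1 + (1.0 - lam) * theta2)
            for lam in lambdas]
```

A smooth error curve over the sweep indicates the two models share a minimum; a spike at intermediate lambda indicates they converged to different minima.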
Activation space. To further explore the diversity of models, we compute the pairwise correlation of softmax outputs for every pair of snapshots. Figure 6 displays the average correlation for both cyclic and non-cyclic snapshots. Firstly, there are large correlations between the last 3 snapshots of the non-cyclic training schedule (right). These snapshots are taken after dropping the learning rate, suggesting that each snapshot has converged to the same minimum. Though there is more diversity amongst the earlier snapshots, these snapshots have much higher error rates and are therefore not ideal for ensembling. Conversely, there is less correlation between all cyclic snapshots (left). Because all snapshots have similar accuracy (as can be seen in Figure 5), these differences in predictions can be exploited to create effective ensembles.

Figure 6: Pairwise correlation of softmax outputs between any two snapshots for DenseNet-100. Left: A Snapshot Ensemble, with cosine annealing cycles (restart with $\alpha_0 = 0.2$ every 50 epochs). Right: A NoCycle Snapshot Ensemble (two learning rate drops, snapshots every 50 epochs).

{"section_index": "7", "section_name": "5 DISCUSSION", "section_text": "We introduce Snapshot Ensembling, a simple method to obtain ensembles of neural networks without any additional training cost. Our method exploits the ability of SGD to converge to and escape from local minima as the learning rate is lowered, which allows the model to visit several weight assignments that lead to increasingly accurate predictions over the course of training. We harness this power with the cyclical learning rate schedule proposed by Loshchilov & Hutter (2016), saving model snapshots at each point of convergence. We show in several experiments that all snapshots are accurate, yet produce different predictions from one another, and therefore are well suited for test-time ensembles. Ensembles of these snapshots significantly improve the state-of-the-art on CIFAR-10, CIFAR-100 and SVHN. Future work will explore combining Snapshot Ensembles with traditional ensembles. In particular, we will investigate how to balance growing an ensemble with new models (with random initializations) and refining existing models with further training cycles under a fixed training budget.

{"section_index": "8", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Ilya Loshchilov and Frank Hutter for their insightful comments on the cyclic cosine-shaped learning rate. The authors are supported in part by grants III-1618134, III-1526012, and IIS-1149882 from the National Science Foundation, US Army Research Office grant W911NF-14-1-0477, and the Bill and Melinda Gates Foundation.

{"section_index": "9", "section_name": "REFERENCES", "section_text": "Leon Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT, 2010.

Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD, 2006.

Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. Ensemble selection from libraries of models. In ICML, 2004.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.

Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML, 2013.

Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:993-1001, 1990.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016b.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. In ECCV, 2016b.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007, 2014.

Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.

Anders Krogh, Jesper Vedelsby, et al. Neural network ensembles, cross validation, and active learning. In NIPS, volume 7, 1995.

Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with restarts. arXiv preprint arXiv:1608.03983, 2016.

Mohammad Moghimi, Mohammad Saberian, Jian Yang, Li-Jia Li, Nuno Vasconcelos, and Serge Belongie. Boosted convolutional neural networks. 2016.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Edinburgh neural machine translation systems for WMT 16. arXiv preprint arXiv:1606.02891, 2016.

Pierre Sermanet, Soumith Chintala, and Yann LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

A. Swann and N. Allinson. Fast committee learning: Preliminary results. Electronics Letters, 34(14):1408-1410, 1998.

Jingjing Xie, Bing Xu, and Zhang Chuang. Horizontal and vertical ensemble with deep representation for classification. arXiv preprint arXiv:1306.2759, 2013.

{"section_index": "10", "section_name": "A. Single model and Snapshot Ensemble performance over time", "section_text": "In Figures 7-9, we compare the test error of Snapshot Ensembles with the error of individual model snapshots. The blue curve shows the test error of a single model snapshot using a cyclic cosine learning rate. The green curve shows the test error when ensembling model snapshots over time. (Note that, unlike Figure 3, we construct these ensembles beginning with the earliest snapshots.)
As a reference, the red dashed line in each panel represents the test error of a single model trained for 300 epochs using a standard learning rate schedule. Without Snapshot Ensembles, in about half of the cases, the test error of the final model using a cyclic learning rate (the right-most point in the blue curve) is no better than using a standard learning rate schedule.

One can observe that under almost all settings, complete Snapshot Ensembles (the right-most points of the green curves) outperform the single model baselines. In many cases, ensembles of just 2 or 3 model snapshots are able to match the performance of the single model trained with a standard learning rate. Not surprisingly, the ensembles of model snapshots consistently outperform any of their members, yielding a smooth curve of test error over time.

Figure 7: Single model and Snapshot Ensemble performance over time (part 1). Panels show test error vs. number of snapshots for ResNet-110 on C10, C100, SVHN and Tiny ImageNet, and Wide-ResNet-32 on C10, each with α0 = 0.1 and α0 = 0.2. Legend: single model snapshot, Snapshot Ensemble, single model with standard learning rate.

Figure 8: Single model and Snapshot Ensemble performance over time (part 2). Panels show Wide-ResNet-32 on C100, SVHN and Tiny ImageNet, and DenseNet-40 on C10 and C100, each with α0 = 0.1 and α0 = 0.2.

Figure 9: Single model and Snapshot Ensemble performance over time (part 3). Panels show DenseNet-40 on SVHN and Tiny ImageNet, and DenseNet-100 on C10 and C100, each with α0 = 0.1 and α0 = 0.2."}]
SyxeqhP9ll
[{"section_index": "0", "section_name": "ABSTRACT", "section_text": "In this paper we propose equipping Generative Adversarial Networks with the ability to produce direct energy estimates for samples. Specifically, we develop a flexible adversarial training framework, and prove this framework not only en sures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimum. We derive the analytic form of the induced solution, and analyze its properties. In order to make the proposed framework trainable in practice, we introduce two effective approximation techniques. Empirically, the experiment results closely match ou1 theoretical analysis, verifying that the discriminator is able to recover the energy of data distribution."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) represent an important mile. stone on the path towards more effective generative models. GANs cast generative model trainin. as a minimax game between a generative network (generator), which maps a random vector into the. data space, and a discriminative network (discriminator), whose objective is to distinguish gener. ated samples from real samples. Multiple researchers Radford et al. (2015); Salimans et al. (2016). Zhao et al. (2016) have shown that the adversarial interaction with the discriminator can result in generator that produces compelling samples. The empirical successes of the GAN framework were. also supported by the theoretical analysis of Goodfellow et al., who showed that, under certain con. ditions, the distribution produced by the generator converges to the true data distribution, while the. discriminator converges to a degenerate uniform solution.\nIt is tempting to consider the GAN discriminator as a candidate for providing this sort of scoring. function. Conceptually, it is a trainable sample evaluation mechanism that - owing to GAN train. ing paradigm - could be closely calibrated to the distribution modeled by the generator. If the. discriminator could retain fine-grained information of the relative quality of samples, measured for. instance by probability density or unnormalized energy, it could be used as an evaluation metric. Such data-driven evaluators would be highly desirable for problems where it is difficult to define evaluation criteria that correlate well with human judgment. Indeed, the real-valued discriminator of the recently introduced energy-based GANs Zhao et al. (2016) might seem like an ideal candidate. energy function. Unfortunately, as we will show, the degenerate fate of the GAN discriminator a the optimum equally afflicts the energy-based GAN of Zhao et al...\n*Part of this work was com pleted while author was at Maluuba Research\nWhile GANs have excelled as compelling sample generators, their use as general purpose probabilis. tic generative models has been limited by the difficulty in using them to provide density estimates or even unnormalized energy values for sample evaluation..\nIn this paper we consider the questions: (i) does there exists an adversarial framework that induces a non-degenerate discriminator, and (ii) if so, what form will the resulting discriminator take? We introduce a novel adversarial learning formulation, which leads to a non-degenerate discriminator while ensuring the generator distribution matches the data distribution at the global optimum. We. 
derive a general analytic form of the optimal discriminator, and discuss its properties and their relationship to the specific form of the training objective. We also discuss the connection between the proposed formulation and existing alternatives such as the approach of Kim & Bengio (2016). Finally, for a specific instantiation of the general formulation, we investigate two approximation techniques to optimize the training objective, and verify our results empirically.

{"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Following a similar motivation, the field of Inverse Reinforcement Learning (IRL) (Ng & Russell, 2000) has been exploring ways to recover the "intrinsic" reward function (analogous to the discriminator) from observed expert trajectories (real samples). Taking this idea one step further, apprenticeship learning or imitation learning (Abbeel & Ng, 2004; Ziebart et al., 2008) aims at learning a policy (analogous to the generator) using the reward signals recovered by IRL. Notably, Ho & Ermon (2016) draw a connection between imitation learning and GANs by showing that the GAN formulation can be derived by imposing a specific regularization on the reward function. Also, under a special case of their formulation, Ho & Ermon provide a duality-based interpretation of the problem, which inspires our theoretical analysis. However, as the focus of Ho & Ermon (2016) is only on the policy, the authors explicitly propose to bypass the intermediate IRL step, and thus provide no analysis of the learned reward function.

The GAN models most closely related to our proposed framework are the energy-based GAN models of Zhao et al. (2016) and Kim & Bengio (2016). In the next section, we show how one can derive both of these approaches from different assumptions regarding regularization of the generative model.

Before presenting the proposed formulation, we first state some basic assumptions required by the analysis, and introduce the notation used throughout the paper.

Following the original work on GANs (Goodfellow et al., 2014), our analysis focuses on the non-parametric case, where all models are assumed to have infinite capacity. While many of the non-parametric intuitions transfer directly to the parametric case, we will point out cases where this transfer fails. We assume a finite data space throughout the analysis, to avoid technical machinery out of the scope of this paper. Our results, however, can be extended to continuous data spaces, and our experiments are indeed performed on continuous data.

Let $\mathcal{X}$ be the data space under consideration, and $\mathcal{P} = \{p \mid p(x) \ge 0, \forall x \in \mathcal{X}, \sum_{x \in \mathcal{X}} p(x) = 1\}$ be the set of all proper distributions defined on $\mathcal{X}$. Then, $p_{\text{data}} \in \mathcal{P}$ and $p_{\text{gen}} \in \mathcal{P}$ will denote the true data distribution and the generator distribution. $\mathbb{E}_{x \sim p} f(x)$ denotes the expectation of the quantity $f(x)$ w.r.t. $x$ drawn from $p$. Finally, the term "discriminator" will refer to any structure that provides training signals to the generator based on some measure of difference between the generator distribution and the real data distribution, which includes but is not limited to $f$-divergence.

{"section_index": "3", "section_name": "3.2 PROPOSED FORMULATION", "section_text": "When the generator distribution matches the data distribution, the training signal (gradient) w.r.t. the discriminator vanishes. At this point, assume the discriminator still retains density information, and views some samples as more real and others as less.
This discriminator will produce a training signal (gradient) w.r.t. the generator, pushing the generator to generate samples that appear more real to the discriminator. Critically, this training signal is the sole driver of the generator's training. Hence, the generator distribution will diverge from the data distribution. In other words, as long as the discriminator retains relative density information, the generator distribution cannot stably match the data distribution. Thus, in order to keep the generator stationary at the data distribution, the discriminator must assign flat (exactly the same) density to all samples at the optimum.

From the analysis above, the fundamental difficulty is that the generator only receives a single training signal (gradient) from the discriminator, which it has to follow. To keep the generator stationary, this single training signal (gradient) must vanish, which requires a degenerate discriminator. In this work, we propose to tackle this single-training-signal constraint directly. Specifically, we introduce a novel adversarial learning formulation which incorporates an additional training signal to the generator, such that this additional signal can

1. balance (cancel out) the discriminator signal at the optimum, so that the generator can stay stationary even if the discriminator assigns non-flat density to samples;
2. cooperate with the discriminator signal to make sure the generator converges to the data distribution, and the discriminator retains the correct relative density information.

The proposed formulation can be written as the following minimax training objective:

$$\max_{c} \; \min_{p_{\text{gen}} \in \mathcal{P}} \; \mathbb{E}_{x \sim p_{\text{gen}}}[c(x)] - \mathbb{E}_{x \sim p_{\text{data}}}[c(x)] + K(p_{\text{gen}}), \quad (1)$$

where $c(x) : \mathcal{X} \mapsto \mathbb{R}$ is the discriminator that assigns each data point an unbounded scalar cost, and $K(p_{\text{gen}}) : \mathcal{P} \mapsto \mathbb{R}$ is some (functionally) differentiable, convex function of $p_{\text{gen}}$. Compared to the original GAN, despite the similar minimax surface form, the proposed formulation has two crucial distinctions.

Firstly, while the GAN discriminator tries to distinguish "fake" samples from real ones using binary classification, the proposed discriminator achieves that by assigning lower cost to real samples and higher cost to "fake" ones. This distinction can be seen from the first two terms of Eqn. (1), where the discriminator $c(x)$ is trained to widen the expected cost gap between "fake" and real samples, while the generator is adversarially trained to minimize it. In addition to the different adversarial mechanism, a calibrating term $K(p_{\text{gen}})$ is introduced to provide a countervailing source of training signal for $p_{\text{gen}}$, as we motivated above. For now, the form of $K(p_{\text{gen}})$ has not been specified. But as we will see later, its choice will directly decide the form of the optimal discriminator $c^*(x)$.
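Before turning to the characterization of the optimum, it may help to see how objective (1) translates into alternating stochastic updates. The following PyTorch sketch is our illustration under the max-entropy choice $K(p_{\text{gen}}) = -H(p_{\text{gen}})$; the modules G and C and the hook `entropy_grad_fn` (standing in for one of the entropy approximations developed in Section 4) are hypothetical, not the authors' code.

```python
import torch

def egan_step(G, C, x_real, z, opt_c, opt_g, entropy_grad_fn):
    """One alternating update for objective (1). C(x) is the scalar cost c(x)."""
    # Discriminator: ascend E_gen[c(x)] - E_data[c(x)], i.e. descend its negation.
    x_fake = G(z).detach()
    loss_c = C(x_real).mean() - C(x_fake).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Generator: descend E_gen[c(x)] + K(p_gen) = E_gen[c(x)] - H(p_gen).
    x_fake = G(z)
    loss_g = C(x_fake).mean()
    opt_g.zero_grad()
    loss_g.backward(retain_graph=True)
    # Hypothetical hook: back-propagates an entropy-gradient seed through
    # x_fake's graph, accumulating the d(-H)/d(theta) contribution.
    entropy_grad_fn(x_fake)
    opt_g.step()
    return loss_c.item(), loss_g.item()
```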
With this specific optimization objective, we next provide a theoretical characterization of both the generator and the discriminator at the global optimum.

Define $L(p_{\text{gen}}, c) = \mathbb{E}_{x \sim p_{\text{gen}}}[c(x)] - \mathbb{E}_{x \sim p_{\text{data}}}[c(x)] + K(p_{\text{gen}})$; then $L(p_{\text{gen}}, c)$ is the Lagrange dual function of the following optimization problem:

$$\min_{p_{\text{gen}} \in \mathcal{P}} \; K(p_{\text{gen}}) \quad \text{s.t.} \quad p_{\text{gen}}(x) - p_{\text{data}}(x) = 0, \; \forall x \in \mathcal{X}, \quad (2)$$

where $c(x), \forall x$, appear in $L(p_{\text{gen}}, c)$ as the dual variables introduced for the equality constraints. This duality relationship has been observed previously in (Ho & Ermon, 2016, equation (7)) under the adversarial imitation learning setting. However, in their case, the focus was fully on the generator side (the induced policy), and no analysis was provided for the discriminator (the reward function).

In order to characterize $c^*$, we first expand the set constraint on $p_{\text{gen}}$ into explicit equality and inequality constraints:

$$\min_{p_{\text{gen}}} \; K(p_{\text{gen}}) \quad \text{s.t.} \quad p_{\text{gen}}(x) - p_{\text{data}}(x) = 0, \; \forall x; \qquad p_{\text{gen}}(x) \ge 0, \; \forall x; \qquad \sum_{x \in \mathcal{X}} p_{\text{gen}}(x) - 1 = 0. \quad (3)$$

Notice that $K(p_{\text{gen}})$ is a convex function of $p_{\text{gen}}(x)$ by definition, and both the equality and inequality constraints are affine functions of $p_{\text{gen}}(x)$. Thus, problem (3) is a convex optimization problem. What's more, since (i) $\operatorname{dom} K$ is open, and (ii) there exists a feasible solution $p_{\text{gen}} = p_{\text{data}}$ to (3), by the refined Slater's condition (Boyd & Vandenberghe, 2004, page 226), we can further verify that strong duality holds for (3). With strong duality, a typical approach to characterizing the optimal solution is to apply the Karush-Kuhn-Tucker (KKT) conditions, which gives rise to the following result.

Proposition 3.1. By the KKT conditions of the convex problem (3), at the global optimum, the optimal generator distribution $p^*_{\text{gen}}$ matches the true data distribution $p_{\text{data}}$, and the optimal discriminator $c^*(x)$ has the following form:

$$c^*(x) = -\frac{\partial K(p_{\text{gen}})}{\partial p_{\text{gen}}(x)}\bigg|_{p_{\text{gen}} = p_{\text{data}}} - \lambda^* + \mu^*(x), \quad \forall x \in \mathcal{X}, \quad (4)$$

$$\text{where} \quad \mu^*(x) \begin{cases} = 0, & p_{\text{data}}(x) > 0 \\ \ge 0, & p_{\text{data}}(x) = 0 \end{cases}$$

The detailed proof of Proposition 3.1 is provided in appendix A.1. From (4), we can see that the exact form of the optimal discriminator depends on the term $K(p_{\text{gen}})$, or more specifically its gradient. But, before we instantiate $K(p_{\text{gen}})$ with specific choices and show the corresponding forms of $c^*(x)$, we first discuss some general properties of $c^*(x)$ that do not depend on the choice of $K$.

Weak Support Discriminator. As part of the optimal discriminator function, the term $\mu^*(x)$ plays the role of a support discriminator. That is, it tries to distinguish the support of the data distribution, i.e. $\operatorname{supp}(p_{\text{data}}) = \{x \in \mathcal{X} \mid p_{\text{data}}(x) > 0\}$, from its complement set with zero probability. For any $x \in \operatorname{supp}(p_{\text{data}})$ and $x' \in \operatorname{supp}(p_{\text{data}})^c$, it is guaranteed that $\mu^*(x) \le \mu^*(x')$. However, because $\mu^*(\cdot)$ is under-determined, there is nothing preventing the inequality from degenerating into an equality. Therefore, we name it the weak support discriminator. But, in all cases, $\mu^*(\cdot)$ assigns zero cost to all data points within the support. As a result, it does not possess any fine-grained density information inside the data support. It is worth pointing out that, in the parametric case, because of the smoothness and generalization properties of the parametric model, the learned discriminator may generalize beyond the data support.

Global Bias. In (4), the term $\lambda^*$ is a scalar value shared for all $x$. As a result, it does not affect the relative cost among data points, and only serves as a global bias for the discriminator function.

Having discussed these general properties, we now consider some specific cases of the convex function $K$, and analyze the resulting optimal discriminator $c^*(x)$ in detail.

1. First, let us consider the case where $K$ is the negative entropy of the generator distribution, i.e. $K(p_{\text{gen}}) = -H(p_{\text{gen}})$. Taking the derivative of the negative entropy w.r.t. $p_{\text{gen}}(x)$, we have

$$c^*_{\text{ent}}(x) = -\log p_{\text{data}}(x) - 1 - \lambda^* + \mu^*(x), \quad \forall x \in \mathcal{X}, \quad (5)$$

where $\mu^*(x)$ and $\lambda^*$ have the same definitions as in (4). Up to a constant, this form of $c^*_{\text{ent}}(x)$ is exactly the energy function of the data distribution $p_{\text{data}}(x)$. This elegant result has deep connections to several existing formulations, which include
max-entropy imitation learning (Ziebart et al., 2008) and the directed-generator-trained energy-based model of Kim & Bengio (2016). The core difference is that these previous formulations are originally derived from maximum-likelihood estimation, and thus the minimax optimization is only implicit. In contrast, with an explicit minimax formulation we can develop a better understanding of the induced solution. For example, the global bias $\lambda^*$ suggests that there exists more than one stable equilibrium the optimal discriminator can actually reach. Further, $\mu^*(x)$ can be understood as a support discriminator that poses extra cost on generator samples which fall in zero-probability regions of the data space.

2. When $K(p_{\text{gen}}) = \frac{1}{2}\sum_{x \in \mathcal{X}} p_{\text{gen}}(x)^2 = \frac{1}{2}\|p_{\text{gen}}\|_2^2$, which can be understood as posing $\ell_2$ regularization on $p_{\text{gen}}$, we have $\frac{\partial K(p_{\text{gen}})}{\partial p_{\text{gen}}(x)}\big|_{p_{\text{gen}} = p_{\text{data}}} = p_{\text{data}}(x)$, and it follows that

$$c^*_{\ell_2}(x) = -p_{\text{data}}(x) - \lambda^* + \mu^*(x), \quad \forall x \in \mathcal{X}, \quad (6)$$

with $\mu^*(x), \lambda^*$ similarly defined as in (4). Surprisingly, this result suggests that the optimal discriminator $c^*_{\ell_2}(x)$ directly recovers the negative probability $-p_{\text{data}}(x)$, shifted by a constant. Thus, similar to the entropy solution (5), it fully retains the relative density information of data points within the support. However, because of the under-determined term $\mu^*(x)$, we cannot recover the distribution density $p_{\text{data}}$ exactly from either $c^*_{\ell_2}$ or $c^*_{\text{ent}}$ if the data support is finite. Whether this ambiguity can be resolved is beyond the scope of this paper, but it poses an interesting research problem.

3. Finally, let us consider a degenerate case, where $K(p_{\text{gen}})$ is a constant. That is, we do not provide any additional training signal for $p_{\text{gen}}$ at all. With $K(p_{\text{gen}}) = \text{const}$, we simply have

$$c^*_{\text{cst}}(x) = -\lambda^* + \mu^*(x), \quad \forall x \in \mathcal{X}, \quad (7)$$

whose discriminative power is fully controlled by the weak support discriminator $\mu^*(x)$. Thus, it follows that $c^*_{\text{cst}}(x)$ won't be able to discriminate data points within the support of $p_{\text{data}}$, and even its power to distinguish data in $\operatorname{supp}(p_{\text{data}})$ from $\operatorname{supp}(p_{\text{data}})^c$ is weak. This closely matches the intuitive argument in the beginning of this section. Also, note that with a constant $K(p_{\text{gen}})$, the objective (1) simplifies to

$$\max_{c} \; \min_{p_{\text{gen}} \in \mathcal{P}} \; \mathbb{E}_{x \sim p_{\text{gen}}}[c(x)] - \mathbb{E}_{x \sim p_{\text{data}}}[c(x)], \quad (8)$$

which is very similar to the EBGAN objective (Zhao et al., 2016, equations (2) and (4)). As we show in appendix A.2, compared to the objective in (8), the EBGAN objective puts extra constraints on the allowed discriminator function. In spite of that, the EBGAN objective suffers from the single-training-signal problem and does not guarantee that the discriminator will recover the real energy function (see appendix A.2 for detailed analysis).

As we finish the theoretical analysis of the proposed formulation, we want to point out that simply adding the same term $K(p_{\text{gen}})$ to the original GAN formulation will not lead to both a generator that matches the data distribution and a discriminator that retains the density information (see appendix A.3 for a detailed analysis).

{"section_index": "4", "section_name": "4 PARAMETRIC INSTANTIATION WITH ENTROPY APPROXIMATION", "section_text": "While the discussion in previous sections focused on the non-parametric case, in practice we are limited to a finite amount of data, and the actual problem involves high-dimensional continuous spaces. Thus, we resort to parametric representations for both the generator and the discriminator. In order to train the generator using standard back-propagation, we do not parametrize the generator distribution directly. Instead, we parametrize a directed generator network that transforms random noise $z \sim p_z(z)$ to samples from a continuous data space. Consequently, we don't have analytical access to the generator distribution, which is defined implicitly by the generator network's noise-to-data mapping. However, the regularization term $K(p_{\text{gen}})$ in the training objective (1) requires the generator distribution. Faced with this problem, we focus on the max-entropy formulation, and exploit two different approximations of the regularization term $K(p_{\text{gen}}) = -H(p_{\text{gen}})$.

{"section_index": "5", "section_name": "4.1 NEAREST-NEIGHBOR ENTROPY GRADIENT APPROXIMATION", "section_text": "The first proposed solution is built upon an intuitive interpretation of the entropy gradient. Firstly, since we construct $p_{\text{gen}}$ by applying a deterministic, differentiable transform $g_\theta$ to samples $z$ from a fixed distribution $p_z$, we can write the gradient of $H(p_{\text{gen}})$ with respect to the generator parameters $\theta$ as follows:

$$\nabla_\theta H(p_{\text{gen}}) = -\mathbb{E}_{z \sim p_z}\left[\nabla_\theta \log p_{\text{gen}}(g_\theta(z))\right] = -\mathbb{E}_{z \sim p_z}\left[\frac{\partial \log p_{\text{gen}}(x)}{\partial x}\bigg|_{x = g_\theta(z)} \frac{\partial g_\theta(z)}{\partial \theta}\right], \quad (9)$$

where the first equality relies on the "reparametrization trick". Equation (9) implies that, if we can compute the gradient of the generator log-density $\log p_{\text{gen}}(x)$ w.r.t. any $x = g_\theta(z)$, then we can directly construct a Monte-Carlo estimate of the entropy gradient $\nabla_\theta H(p_{\text{gen}})$ using samples from the generator.

Intuitively, for any generated data $x = g_\theta(z)$, the term $\frac{\partial \log p_{\text{gen}}(x)}{\partial x}$ essentially describes the direction of local change in the sample space that will increase the log-density.
Motivated by this intuition, we propose to form a local Gaussian approximation $\tilde{p}^i_{\text{gen}}$ of $p_{\text{gen}}$ around each point $x_i$ in a batch of samples $\{x_1, \ldots, x_n\}$ from the generator, and then compute the gradient $\frac{\partial \log \tilde{p}^i_{\text{gen}}(x_i)}{\partial x_i}$ based on the local Gaussian approximation. Specifically, each local Gaussian approximation $\tilde{p}^i_{\text{gen}}$ is formed by finding the $k$ nearest neighbors of $x_i$ in the batch $\{x_1, \ldots, x_n\}$, and then placing an isotropic Gaussian distribution at their mean (i.e. maximum likelihood). Based on the isotropic Gaussian approximation, the resulting gradient has the following form:

$$\frac{\partial \log \tilde{p}^i_{\text{gen}}(x_i)}{\partial x_i} \approx \mu_i - x_i, \quad \text{where } \mu_i = \frac{1}{k}\sum_{x' \in \mathrm{KNN}(x_i)} x' \text{ is the mean of the Gaussian.} \quad (10)$$

Finally, note that the scale of this gradient approximation may not be reliable. To fix this problem, we normalize the approximated gradient into unit norm, and use a single hyper-parameter to model the scale for all $x$, leading to the following entropy gradient approximation:

$$\nabla_\theta H(p_{\text{gen}}) \approx -\frac{1}{n}\sum_{i=1}^{n} \alpha \, \frac{\mu_i - x_i}{\|\mu_i - x_i\|_2} \, \frac{\partial g_\theta(z_i)}{\partial \theta}\bigg|_{x_i = g_\theta(z_i)}, \quad (11)$$

where $\alpha$ is the hyper-parameter and $\mu_i$ is defined as in equation (10).

An obvious weakness of this approximation is that it relies on Euclidean distance to find the $k$ nearest neighbors. However, Euclidean distance is usually not the proper metric to use when the effective dimension is very high. As the problem is highly challenging, we leave it for future work.
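A minimal NumPy sketch of the nearest-neighbor construction in Eq. (10)-(11); `batch_entropy_direction` is our hypothetical helper name. In an autodiff framework, one would back-propagate the negated returned direction through $x_i = g_\theta(z_i)$ to accumulate the approximate $\nabla_\theta H(p_{\text{gen}})$.

```python
import numpy as np

def batch_entropy_direction(x_batch, k=5, alpha=1.0):
    """Per-sample direction alpha * (mu_i - x_i) / ||mu_i - x_i||_2 from
    Eq. (10)-(11), where mu_i is the mean of the k nearest neighbors of
    x_i within the batch. Assumes k < n; pairwise distances are O(n^2 d)."""
    n = x_batch.shape[0]
    flat = x_batch.reshape(n, -1)
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)  # (n, n) sq. dists
    np.fill_diagonal(d2, np.inf)                               # exclude self
    knn = np.argsort(d2, axis=1)[:, :k]                        # (n, k) indices
    mu = flat[knn].mean(axis=1)                                # neighbor means
    diff = mu - flat
    norm = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
    return (alpha * diff / norm).reshape(x_batch.shape)
```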
Another approach we consider relies on defining and maximizing a variational lower bound on the entropy $H(p_{\text{gen}}(x))$ of the generator distribution. We can define the joint distribution over observed data and the noise variables as $p_{\text{gen}}(x, z) = p_{\text{gen}}(x \mid z)\,p_{\text{gen}}(z)$, where simply $p_{\text{gen}}(z) = p_z(z)$ is a fixed prior. Using the joint, we can also define the marginal $p_{\text{gen}}(x)$ and the posterior $p_{\text{gen}}(z \mid x)$. We can also write the mutual information between the observed data and noise variables as:

$$I(p_{\text{gen}}(x); p_{\text{gen}}(z)) = H(p_{\text{gen}}(x)) - H(p_{\text{gen}}(x \mid z)) = H(p_{\text{gen}}(z)) - H(p_{\text{gen}}(z \mid x)), \quad (12)$$

which can be rearranged as

$$H(p_{\text{gen}}(x)) = H(p_{\text{gen}}(z)) - H(p_{\text{gen}}(z \mid x)) + H(p_{\text{gen}}(x \mid z)). \quad (13)$$

We can think of $p_{\text{gen}}(x \mid z)$ as a peaked Gaussian with a fixed, diagonal covariance, and hence its conditional entropy is constant and can be dropped. Furthermore, $H(p_{\text{gen}}(z))$ is also assumed to be fixed a priori. Hence, we can maximize $H(p_{\text{gen}}(x))$ by minimizing the conditional entropy:

$$H(p_{\text{gen}}(z \mid x)) = \mathbb{E}_{x \sim p_{\text{gen}}(x)}\left[\mathbb{E}_{z \sim p_{\text{gen}}(z \mid x)}\left[-\log p_{\text{gen}}(z \mid x)\right]\right]. \quad (14)$$

Optimizing this term is still problematic, because (i) we do not have access to the posterior $p_{\text{gen}}(z \mid x)$, and (ii) we cannot sample from it. Therefore, we instead minimize a variational upper bound defined by an approximate posterior $q_{\text{gen}}(z \mid x)$:

$$H(p_{\text{gen}}(z \mid x)) = \mathbb{E}_{x \sim p_{\text{gen}}(x)}\left[\mathbb{E}_{z \sim p_{\text{gen}}(z \mid x)}\left[-\log q_{\text{gen}}(z \mid x)\right] - \mathrm{KL}\big(p_{\text{gen}}(z \mid x)\,\|\,q_{\text{gen}}(z \mid x)\big)\right] \le \mathbb{E}_{x \sim p_{\text{gen}}(x)}\left[\mathbb{E}_{z \sim p_{\text{gen}}(z \mid x)}\left[-\log q_{\text{gen}}(z \mid x)\right]\right] = U(q_{\text{gen}}), \quad (15)$$

which can be optimized efficiently with standard back-propagation and Monte-Carlo integration of the relevant expectations based on independent samples drawn from the joint $p_{\text{gen}}(x, z)$. By minimizing this upper bound on the conditional entropy $H(p_{\text{gen}}(z \mid x))$, we are effectively maximizing a variational lower bound on the entropy $H(p_{\text{gen}}(x))$.
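The bound $U(q_{\text{gen}})$ of Eq. (15) is straightforward to estimate with samples from the joint. Below is a minimal PyTorch sketch assuming a hypothetical inference network Q that outputs the mean and log-variance of a diagonal Gaussian $q_{\text{gen}}(z \mid x)$; minimizing the returned value over both Q and the generator tightens and lowers the bound.

```python
import math
import torch

def variational_entropy_bound(G, Q, z):
    """Monte-Carlo estimate of U(q_gen) = E_{p_gen(x,z)}[-log q_gen(z|x)].
    G is the generator g_theta; Q is a hypothetical inference network
    returning (mu, logvar) of a diagonal Gaussian q(z|x)."""
    x = G(z)                                   # (x, z) ~ p_gen(x, z)
    mu, logvar = Q(x)
    # Negative log-density of z under N(mu, diag(exp(logvar))), per dimension.
    nll = 0.5 * ((z - mu) ** 2 / logvar.exp() + logvar + math.log(2 * math.pi))
    return nll.sum(dim=1).mean()               # average over the batch
```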
Figure 1: True energy functions and samples from synthetic distributions. Green dots in the sample plots indicate the mean of each Gaussian component.

In this section, we verify our theoretical results empirically on several synthetic and real datasets. In particular, we evaluate whether the discriminator obtained from entropy-regularized adversarial training can capture the density information (in the form of energy), while making sure the generator distribution matches the data distribution. For convenience, we refer to the obtained models as EGAN-Ent. Our experimental setting closely follows the recommendations of Radford et al. (2015), except in Sec. 5.1, where we use fully-connected models (see appendix B.1 for details).(1)

(1) For more details, please refer to https://github.com/zihangdai/cegan_iclr2017

{"section_index": "6", "section_name": "5.1 SYNTHETIC LOW-DIMENSIONAL DATA", "section_text": "First, we consider three synthetic datasets in 2-dimensional space, which are drawn from the following distributions: (i) a mixture of 4 Gaussians with equal mixture weights, (ii) a mixture of 200 Gaussians arranged as two spirals (100 components per spiral), and (iii) a mixture of 2 Gaussians with highly biased mixture weights, P(c1) = 0.9, P(c2) = 0.1. We visualize the ground-truth energy of these distributions along with 100K training samples in Figure 1. Since the data lies in 2-dimensional space, we can easily visualize both the learned generator (by drawing samples) and the discriminator for direct comparison and evaluation. We evaluate here our EGAN-Ent model using both approximations: the nearest-neighbor based approximation (EGAN-Ent-NN) and the variational-inference based approximation (EGAN-Ent-VI), and compare them with two baselines: the original GAN and the energy-based GAN with no regularization (EGAN-Const).

Experiment results are summarized in Figure 2 for the baseline models, and Figure 3 for the proposed models. As we can see, all four models can generate perfect samples. However, for the discriminator, both GAN and EGAN-Const lead to degenerate solutions, assigning flat energy inside the empirical data support. In comparison, EGAN-Ent-VI and EGAN-Ent-NN clearly capture the density information, though to different degrees. Specifically, on the equally weighted Gaussian mixture and the two-spiral mixture datasets, EGAN-Ent-NN tends to give more accurate and fine-grained solutions compared to EGAN-Ent-VI. However, on the biased Gaussian mixture dataset, EGAN-Ent-VI actually fails to capture the correct mixture weights of the two modes, incorrectly assigning lower energy to the mode with lower probability (smaller weight). In contrast, EGAN-Ent-NN perfectly captures the bias in mixture weight, and obtains a contour very close to the ground truth.

(a) Standard GAN
Figure 2: Learned energies and samples from baseline models, whose discriminator cannot retain density information at the optimum. In the sample plots, blue dots indicate generated samples, and red dots indicate real ones.

(a) Entropy regularized Energy GAN with variational inference approximation (EGAN-Ent-VI)
(b) Entropy regularized Energy GAN with nearest neighbor approximation (EGAN-Ent-NN)
Figure 3: Learned energies and samples from proposed models, whose discriminator can retain density information at the optimum. Blue dots are generated samples, and red dots are real ones.

To better quantify these differences, we present a detailed comparison based on KL divergence in appendix B.2. What's more, the performance difference between EGAN-Ent-VI and EGAN-Ent-NN on the biased Gaussian mixture reveals the limitations of the variational-inference based approximation, i.e. providing inaccurate gradients. Due to space considerations, we refer interested readers to appendix B.3 for a detailed discussion.

{"section_index": "7", "section_name": "5.2 RANKING NIST DIGITS", "section_text": "In this experiment, we verify that the results on synthetic datasets translate to data with higher dimensions. While visualizing the learned energy function is not feasible in high-dimensional space, we can verify whether the learned energy function captures relative densities by inspecting the ranking of samples according to their assigned energies. We train on 28x28 images of a single handwritten digit from the NIST dataset. We compare the ability of EGAN-Ent-NN with both EGAN-Const and GAN on ranking a set of 1,000 images, half of which are generated samples and the rest are real test images. Figures 4 and 5 show the top-100 and bottom-100 ranked images respectively for each model, after training them on digit 1. We also show in Figure 7 the mean of all training samples, so we can get a sense of what is the most common style (highest density) of digit 1 in NIST. We can notice that all of the top-ranked images by EGAN-Ent-NN look similar to the mean sample. In addition, the lowest-ranked images are clearly different from the mean image, with either high (clockwise or counter-clockwise) rotation degrees from the mean, or an extreme thickness level. We do not see such a clear distinction in the other models. We provide in appendix B.4 the ranking of the full set of images.

In this last set of experiments, we evaluate the visual quality of samples generated by our model on two datasets of natural images, namely CIFAR-10 and CelebA. We employ here the variational-based approximation for entropy regularization, which can scale well to high-dimensional data. Figure 6 shows samples generated by EGAN-Ent-VI. We can see that despite the noisy gradients
provided by the variational approximation, our model is able to generate high-quality samples.

"}, {"section_index": "8", "section_name": "(a) EGAN-Ent-NN (b) EGAN-Const (c) GAN", "section_text": "Figure 4: 100 highest-ranked images out of 1000 generated and real (bounding box) samples

Figure 5: 100 lowest-ranked images out of 1000 generated and real (bounding box) samples

We further validate the quality of our model's samples on CIFAR-10 using the Inception score proposed by Salimans et al. (2016).³ Table 1 shows the scores of our EGAN-Ent-VI, the best GAN model from Salimans et al. (2016) which uses only unlabeled data, and an EGAN-Const model which has the same architecture as our model. We notice that even without employing the techniques suggested in Salimans et al. (2016), energy-based models perform quite similarly to the GAN model. Furthermore, the fact that our model scores higher than EGAN-Const highlights the importance of entropy regularization in obtaining good-quality samples.

(a) CIFAR-10 (b) CelebA
Figure 6: Samples generated from our model.

Table 1: Inception scores on CIFAR-10. † As reported in Salimans et al. (2016) without using labeled data.
Model: Our model | Improved GAN† | EGAN-Const

In this paper we have addressed a fundamental limitation in adversarial learning approaches, which is their inability to provide sensible energy estimates for samples. We proposed a novel adversarial learning formulation which results in a discriminator function that recovers the true data energy. We provided a rigorous characterization of the learned discriminator in the non-parametric setting, and proposed two methods for instantiating it in the typical parametric setting. Our experimental results verify our theoretical analysis about the discriminator properties, and show that we can also obtain samples of state-of-the-art quality.

"}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank the developers of Theano (Theano Development Team, 2016) for developing such a powerful tool for scientific computing. Amjad Almahairi was supported by funding from Maluuba Research.

"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 1. ACM, 2004.

Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.

Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.

A. Ng and S. Russell. Algorithms for inverse reinforcement learning. In ICML, pp. 663-670, 2000.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey.
Maximum entropy inverse reinforcement learning. In AAAI, pp. 1433-1438, 2008.

Proof of proposition 3.1. Refining the Lagrangian L(p_gen, c) by introducing additional dual variables for the probability constraints (the second and third), the new Lagrangian has the form

$$\mathcal{L}(p_{\text{gen}}, c, \mu, \lambda) = K(p_{\text{gen}}) + \sum_{x \in \mathcal{X}} c(x)\left(p_{\text{gen}}(x) - p_{\text{data}}(x)\right) - \sum_{x \in \mathcal{X}} \mu(x)\, p_{\text{gen}}(x) + \lambda \Big(\sum_{x \in \mathcal{X}} p_{\text{gen}}(x) - 1\Big),$$

where c(x) ∈ ℝ ∀x, μ(x) ∈ ℝ₊ ∀x, and λ ∈ ℝ are the dual variables. The KKT conditions for the optimal primal and dual variables are as follows:

$$\begin{aligned}
&\left.\frac{\partial K(p_{\text{gen}})}{\partial p_{\text{gen}}(x)}\right|_{p_{\text{gen}} = p_{\text{data}}} + c^*(x) - \mu^*(x) + \lambda^* = 0, \ \forall x && \text{(stationarity)}\\
&\mu^*(x)\, p^*_{\text{gen}}(x) = 0, \ \forall x && \text{(complementary slackness)}\\
&\mu^*(x) \ge 0, \ \forall x && \text{(dual feasibility)}\\
&p^*_{\text{gen}}(x) \ge 0,\quad p^*_{\text{gen}}(x) = p_{\text{data}}(x), \ \forall x && \text{(primal feasibility)}\\
&\textstyle\sum_{x \in \mathcal{X}} p^*_{\text{gen}}(x) = 1 && \text{(primal feasibility)}
\end{aligned}$$

Rearranging the conditions above, we get p*_gen(x) = p_data(x), ∀x ∈ X, as well as equation (4), which concludes the proof.

"}, {"section_index": "11", "section_name": "A.2 OPTIMAL CONDITIONS OF EBGAN", "section_text": "In (Zhao et al., 2016), the training objectives of the generator and the discriminator cannot be written as a single minimax optimization problem, since the margin structure is only applied to the objective of the discriminator. In addition, the discriminator is designed to produce the mean squared reconstruction error of an auto-encoder structure. This restricts the range of the discriminator output to be non-negative, which is equivalent to posing a set constraint on the discriminator under the non-parametric setting.

Thus, to characterize the optimal generator and discriminator, we adopt the same analyzing logic used in the proof sketch of the original GAN (Goodfellow et al., 2014). Specifically, given a specific generator distribution p_gen, the optimal discriminator function given the generator distribution, c*(x; p_gen), can be derived by examining the objective of the discriminator. Then, the conditionally optimal discriminator function is substituted into the training objective of p_gen, simplifying the "adversarial" training into a minimization problem only w.r.t. p_gen, which can be well analyzed.

Firstly, given any generator distribution p_gen, the EBGAN training objective for the discriminator can be written in the following form

$$\begin{aligned}
c^*(x; p_{\text{gen}}) &= \arg\max_{c \in \mathcal{C}} \ -\mathbb{E}_{x \sim p_{\text{gen}}}\left[\max(0,\, m - c(x))\right] - \mathbb{E}_{x \sim p_{\text{data}}}\left[c(x)\right] \\
&= \arg\max_{c \in \mathcal{C}} \ \mathbb{E}_{x \sim p_{\text{gen}}}\left[\min(0,\, c(x) - m)\right] - \mathbb{E}_{x \sim p_{\text{data}}}\left[c(x)\right]
\end{aligned} \tag{19}$$

where C = {c : c(x) ≥ 0, ∀x ∈ X} is the set of allowed non-negative discriminator functions. Note this set constraint comes from the mean squared reconstruction error, as discussed above.
Since the problem (19) is independent w.r.t. each x, the optimal solution can be easily derived as

$$c^*(x; p_{\text{gen}}) = \begin{cases} 0, & p_{\text{gen}}(x) < p_{\text{data}}(x) \\ m, & p_{\text{gen}}(x) > p_{\text{data}}(x) \\ \alpha_x, & p_{\text{gen}}(x) = p_{\text{data}}(x) > 0 \\ \beta_x, & p_{\text{gen}}(x) = p_{\text{data}}(x) = 0 \end{cases} \tag{20}$$

where α_x ∈ [0, m] is an under-determined number, β_x ∈ [0, ∞) is another under-determined non-negative real number, and the subscripts in α_x, β_x reflect the fact that these under-determined values can be distinct for different x. Substituting the conditionally optimal discriminator, the generator objective becomes

$$\begin{aligned}
p^*_{\text{gen}} &= \arg\min_{p_{\text{gen}} \in \mathcal{P}} \ \mathbb{E}_{x \sim p_{\text{gen}}}\left[c^*(x; p_{\text{gen}})\right] - \mathbb{E}_{x \sim p_{\text{data}}}\left[c^*(x; p_{\text{gen}})\right] \\
&= \arg\min_{p_{\text{gen}} \in \mathcal{P}} \ \sum_{x \in \mathcal{X}} \left(p_{\text{gen}}(x) - p_{\text{data}}(x)\right) c^*(x; p_{\text{gen}})
\end{aligned} \tag{21}$$

Proposition A.1. The global optimum of the EBGAN training objective is achieved if and only if p_gen = p_data. At that point, c*(x) is fully under-determined.

Proof. The proof is established by showing a contradiction. Assume p*_gen ≠ p_data at the global optimum, and let X_< = {x : p*_gen(x) < p_data(x)} and X_> = {x : p*_gen(x) > p_data(x)}, both of which are non-empty since both distributions sum to one. Substituting equation (20) into equation (21), the optimal value satisfies

$$\sum_{x \in \mathcal{X}} \left[p^*_{\text{gen}}(x) - p_{\text{data}}(x)\right] c^*(x; p^*_{\text{gen}}) = \sum_{x \in \mathcal{X}_<} \left[p^*_{\text{gen}}(x) - p_{\text{data}}(x)\right] \cdot 0 + \sum_{x \in \mathcal{X}_>} \left[p^*_{\text{gen}}(x) - p_{\text{data}}(x)\right] \cdot m = m \sum_{x \in \mathcal{X}_>} \left(p^*_{\text{gen}}(x) - p_{\text{data}}(x)\right) > 0. \tag{22}$$

However, choosing p̃_gen = p_data gives

$$L(\tilde{p}_{\text{gen}}) = 0 < L(p^*_{\text{gen}}), \tag{23}$$

which contradicts the optimal (minimum) assumption of p*_gen. Hence, the contradiction concludes that at the global optimum, p*_gen = p_data. By equation (20), it directly follows that c*(x; p*_gen) = α_x, which completes the proof.

To show that simply adding the same training signal to GAN will not lead to the same result, it is more convenient to directly work with the formulation of the f-GAN (Nowozin et al., 2016, equation (6)) family, which includes the original GAN formulation as a special case. Specifically, the general f-GAN formulation takes the following form

$$\max_{c} \min_{p_{\text{gen}} \in \mathcal{P}} \ \mathbb{E}_{x \sim p_{\text{gen}}}\left[f^*(c(x))\right] - \mathbb{E}_{x \sim p_{\text{data}}}\left[c(x)\right] \tag{24}$$

where the second term of the first line is implicitly defined as the problem is an adversarial game between p_gen and c, and f*(·) denotes the convex conjugate (Boyd & Vandenberghe, 2004) of the f-divergence function. The optimal condition of the discriminator can be found by taking the variation w.r.t. c, which gives the optimal discriminator

$$c^*(x) = f'\!\left(\frac{p_{\text{data}}(x)}{p_{\text{gen}}(x)}\right) \tag{25}$$

where f'(·) is the first-order derivative of f(·). Note that, even when we add an extra term L(p_gen) to equation (24), since the term K(p_gen) is a constant w.r.t. the discriminator, it does not change the result given by equation (25) about the optimal discriminator. As a consequence, for the optimal discriminator to retain the density information, it effectively means p_gen ≠ p_data. Hence, there will be a contradiction if both c*(x) retains the density information, and the generator matches the data distribution.

Intuitively, this problem roots in the fact that f-divergence is quite "rigid" in the sense that given p_gen(x) it only allows one fixed point for the discriminator. In comparison, the divergence used in our proposed formulation, which is the expected cost gap, is much more flexible.
By the expected cost gap itself, i.e., without the K(p_gen) term, the optimal discriminator is actually under-determined.

"}, {"section_index": "12", "section_name": "B.1 EXPERIMENT SETTING", "section_text": "Here, we specify the neural architectures used for the experiments presented in Section 5.

Firstly, for the EGAN-Ent-VI model, we parameterize the approximate posterior distribution q_gen(z | x) with a diagonal Gaussian distribution, whose mean and covariance matrix are the output of a trainable inference network, i.e.

$$q_{\text{gen}}(z \mid x) = \mathcal{N}(\mu, \sigma^2 I), \qquad [\mu, \log \sigma^2] = f_{\text{infer}}(x),$$

where f_infer denotes the inference network, and I is the identity matrix. Note that the inference network only appears in the EGAN-Ent-VI model.

For experiments with the synthetic datasets, the following fully-connected feed-forward neural networks are employed:

Generator: FC(4, 128)-BN-ReLU-FC(128, 128)-BN-ReLU-FC(128, 2)
Discriminator: FC(2, 128)-ReLU-FC(128, 128)-ReLU-FC(128, 1)
Inference Net: FC(2, 128)-ReLU-FC(128, 128)-ReLU-FC(128, 4*2)

where FC and BN denote fully-connected layer and batch normalization layer respectively. Note that since the input noise to the generator has dimension 4, the inference net output has dimension 4*2, where the first 4 elements correspond to the inferred mean, and the last 4 elements correspond to the inferred diagonal covariance matrix in log scale.

For the handwritten digit experiment, we closely follow the DCGAN (Radford et al., 2015) architecture with the following configuration:

Generator: FC(10, 512*7*7)-BN-ReLU-DC(512,256;4c2s)-BN-ReLU-DC(256,128;4c2s)-BN-ReLU-DC(128,1;3c1s)-Sigmoid
Discriminator: CV(1,64;3c1s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-FC(256*7*7,1)
Inference Net: CV(1,64;3c1s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-FC(256*7*7,10*2)

Here, LRec is the leaky rectified non-linearity recommended by Radford et al. (2015). In addition, CV(128,256;4c2s) denotes a convolutional layer with 128 input channels, 256 output channels, and kernel size 4 with stride 2. Similarly, DC(256,128;4c2s) denotes the corresponding transposed convolutional operation. Compared to the original DCGAN architecture, the discriminator under our formulation does not have the last sigmoid layer which squashes a scalar value into a probability in [0, 1].

For the CelebA experiment with 64 × 64 color images, we use the following architecture:

Generator: FC(10,512*4*4)-BN-ReLU-DC(512,256;4c2s)-BN-ReLU-DC(256,128;4c2s)-BN-ReLU-DC(256,128;4c2s)-BN-ReLU-DC(128,3;4c2s)-Tanh
Discriminator: CV(3,64;4c2s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-CV(256,256;4c2s)-BN-LRec-FC(256*4*4,1)
Inference Net: CV(3,64;4c2s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-CV(256,256;4c2s)-BN-LRec-FC(256*4*4,10*2)

For the CIFAR-10 experiment, where the image size is 32 × 32, a similar architecture is used:

Generator: FC(10,512*4*4)-BN-ReLU-DC(512,256;4c2s)-BN-ReLU-DC(256,128;3c1s)-BN-ReLU-DC(256,128;4c2s)-BN-ReLU-DC(128,3;4c2s)-Tanh
Discriminator: CV(3,64;3c1s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-CV(256,256;4c2s)-BN-LRec-FC(256*4*4,1)
Inference Net: CV(3,64;3c1s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-CV(256,256;4c2s)-BN-LRec-FC(256*4*4,10*2)

Given the chosen architectures, we follow Radford et al. (2015) and use Adam as the optimization algorithm. For more detailed hyper-parameters, please refer to the code.

B.2 QUANTITATIVE COMPARISON OF DIFFERENT MODELS

Since the synthetic datasets are two dimensional, we approximate both the empirical data distribution and the generator distribution using simple histogram estimation. Specifically, we divide the canvas into a 100-by-100 grid, and assign each sample to its nearest grid cell based on Euclidean distance. Then, we normalize the number of samples in each cell into a proper distribution. When recovering the discriminator distribution from the learned energy, we assume that μ*(x) = 0 (i.e.
infinite data support), and discretize the distribution into the same grid cells:

$$p_{\text{disc}}(x) = \frac{\exp(-c^*(x))}{\sum_{x' \in \text{Grid}} \exp(-c^*(x'))}, \quad \forall x \in \text{Grid}.$$

In order to quantify the quality of the recovered distributions, we compute the pairwise KL divergence of the following four distributions:

The real data distribution with analytic form, denoted as Pdata.
The empirical data distribution approximated from the 100K training data, denoted as Pemp.
The generator distribution approximated from 100K generated data, denoted as Pgen.
The discriminator distribution re-normalized from the learned energy, denoted as Pdisc.

Based on these approximations, Table 2 summarizes the results. For all measures related to the discriminator distribution, EGAN-Ent-VI and EGAN-Ent-NN significantly outperform the two baseline models, which matches our visual assessment in Figures 2 and 3. Meanwhile, the generator distributions learned from our proposed framework also achieve relatively lower divergence to both the empirical data distribution and the true data distribution.

Table 2: Pairwise KL divergence between the four distributions on the three synthetic datasets.

Gaussian Mixture: KL(Pdata‖Pemp) = 0.0291, KL(Pemp‖Pdata) = 0.0159
KL Divergence | Pgen‖Pemp | Pemp‖Pgen | Pgen‖Pdata | Pdata‖Pgen | Pdisc‖Pemp | Pemp‖Pdisc | Pdisc‖Pdata | Pdata‖Pdisc | Pgen‖Pdisc | Pdisc‖Pgen
GAN | 0.3034 | 0.5024 | 0.2498 | 0.4807 | 6.7587 | 2.0648 | 6.2020 | 2.0553 | 2.4596 | 7.0895
EGAN-Const | 0.2711 | 0.4888 | 0.2239 | 0.4735 | 6.7916 | 2.1243 | 6.2159 | 2.1149 | 2.5062 | 7.0553
EGAN-Ent-VI | 0.1422 | 0.1367 | 0.0896 | 0.1214 | 0.8866 | 0.6532 | 0.7215 | 0.6442 | 0.7711 | 1.0638
EGAN-Ent-NN | 0.1131 | 0.1006 | 0.0621 | 0.0862 | 0.0993 | 0.1356 | 0.0901 | 0.1187 | 0.1905 | 0.1208

Biased Gaussian Mixture: KL(Pdata‖Pemp) = 0.0273, KL(Pemp‖Pdata) = 0.0144
KL Divergence | Pgen‖Pemp | Pemp‖Pgen | Pgen‖Pdata | Pdata‖Pgen | Pdisc‖Pemp | Pemp‖Pdisc | Pdisc‖Pdata | Pdata‖Pdisc | Pgen‖Pdisc | Pdisc‖Pgen
GAN | 0.0788 | 0.0705 | 0.0413 | 0.0547 | 7.1539 | 2.5230 | 6.4927 | 2.5018 | 2.5205 | 7.1140
EGAN-Const | 0.1545 | 0.1649 | 0.1211 | 0.1519 | 7.1568 | 2.5269 | 6.4969 | 2.5057 | 2.5860 | 7.1995
EGAN-Ent-VI | 0.0576 | 0.0668 | 0.0303 | 0.0518 | 3.9151 | 1.3574 | 2.9894 | 1.3365 | 1.4052 | 4.0632
EGAN-Ent-NN | 0.0784 | 0.0574 | 0.0334 | 0.0422 | 0.8505 | 0.3480 | 0.5199 | 0.3299 | 0.3250 | 0.7835

Two-spiral Gaussian Mixture: KL(Pdata‖Pemp) = 0.3892, KL(Pemp‖Pdata) = 1.2349
KL Divergence | Pgen‖Pemp | Pemp‖Pgen | Pgen‖Pdata | Pdata‖Pgen | Pdisc‖Pemp | Pemp‖Pdisc | Pdisc‖Pdata | Pdata‖Pdisc | Pgen‖Pdisc | Pdisc‖Pgen
GAN | 0.5297 | 0.2701 | 0.3758 | 0.7240 | 6.3507 | 1.7180 | 4.3818 | 1.0866 | 1.6519 | 5.7694
EGAN-Const | 0.7473 | 1.0325 | 0.7152 | 1.6703 | 5.9930 | 1.5732 | 3.9749 | 0.9703 | 1.8380 | 6.0471
EGAN-Ent-VI | 0.2014 | 0.1260 | 0.4283 | 0.8399 | 1.1099 | 0.3508 | 0.3061 | 0.4037 | 0.4324 | 0.9917
EGAN-Ent-NN | 0.1246 | 0.1147 | 0.4475 | 1.2435 | 0.1036 | 0.0857 | 0.4086 | 0.7917 | 0.1365 | 0.1686

In order to understand the performance difference between EGAN-Ent-VI and EGAN-Ent-NN, we analyze the quality of the entropy gradient approximation during training. To do that, we visualize some detailed training information in Figure 8.
[Figure 8 (plots omitted): training details under (a) the variational inference entropy approximation and (b) the nearest neighbor entropy approximation. Panels include the current energy plot, frequency maps of generated and real samples and their difference, 2000 generated (red) vs. 2000 real (blue) samples, and the gradient fields of the discriminator cost, the entropy approximation, and the total cost w.r.t. the generated samples.]

Figure 8: For convenience, we will use Fig. (i,j) to refer to the subplot in row i, column j. Fig. (1,1): current energy plot. Fig. (1,2): frequency map of generated samples in the current batch. Fig. (1,3): frequency map of real samples in the current batch. Fig. (1,4): frequency difference between real and generated samples. Fig. (2,1): comparison between samples generated from the current model and real samples. Fig. (2,2): the discriminator gradient w.r.t. each training sample. Fig. (2,3): the entropy gradient w.r.t. each training sample. Fig. (2,4): all gradients (discriminator + entropy) w.r.t. each training sample.

As we can see in Figure 8a, the variational entropy gradient approximation w.r.t. samples is not accurate:

It is inaccurate in terms of gradient direction. Ideally, the direction of the entropy gradient should be pointing from the center of its closest mode towards the surroundings, with the direction orthogonal to the implicit contour in Fig. (1,2). However, the direction of the gradients in Fig. (2,3) does not match this.

It is inaccurate in magnitude. As we can see, the entropy approximation gradient (Fig. (2,3)) has a much larger norm than the discriminator gradient (Fig. (2,2)). As a result, the total gradient (Fig. (2,4)) is fully dominated by the entropy approximation gradient. Thus, it usually takes much longer for the generator to learn to generate rare samples, and the training also proceeds much more slowly compared to the nearest neighbor based approximation.

In comparison, the nearest neighbor based gradient approximation is much more accurate, as shown in Figure 8b. As a result, it leads to a more accurate energy contour, as well as faster training. What's more, from Fig. (2,4) of Figure 8b, we can see the entropy gradient does have the cancel-out effect on the discriminator gradient, which again matches our theory.

"}, {"section_index": "13", "section_name": "B.4 RANKING NIST DIGITS", "section_text": "Figure 9 shows the ranking of all 1000 generated and real images (from the test set) for three models: EGAN-Ent-NN, EGAN-Const, and GAN. We can clearly notice that in EGAN-Ent-NN the top-ranked digits look very similar to the mean digit. From the upper-left corner to the lower-right corner, the transition trend is: the rotation degree increases, and the digits become increasingly thick or thin compared to the mean. In addition, samples in the last few rows do diverge away from the mean image: either highly diagonal to the right or left, or having a different shape: very thin or thick, or typewriter script. The other models are not able to achieve a similarly clear distinction between high- and low-probability images. Finally, we consistently observe the same trend in modeling other digits, which are not shown in this paper due to space constraints.
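The ranking protocol used in B.4 (and in Figures 4 and 5) reduces to sorting a mixed pool of samples by their assigned energies. The sketch below is our own illustration, not the authors' code; `energy_fn` stands in for the trained discriminator c*(x).

```python
import numpy as np

def rank_by_energy(energy_fn, real_images, generated_images):
    """Rank a mixed pool of real and generated samples by assigned energy.

    Low energy corresponds to high model density, so the lowest-energy
    images come first in the returned ordering.
    """
    pool = np.concatenate([real_images, generated_images], axis=0)
    is_real = np.concatenate([
        np.ones(len(real_images), dtype=bool),
        np.zeros(len(generated_images), dtype=bool),
    ])
    energies = np.array([energy_fn(x) for x in pool])
    order = np.argsort(energies)   # ascending: most plausible samples first
    return pool[order], is_real[order], energies[order]
```

The head and tail of the returned ordering can then be inspected against the mean training digit, as described above.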
"}, {"section_index": "14", "section_name": "B.5 CLASSIFIER PERFORMANCE AS A PROXY MEASURE", "section_text": "As mentioned in Section 5, evaluating the proposed formulation quantitatively on high-dimensional data is extremely challenging. Here, in order to provide more quantitative intuition about the learned discriminator at convergence, we adopt a proxy measure. Specifically, we take the last-layer activation of the converged discriminator network as a fixed pretrained feature, and build a linear classifier upon it. Hypothetically, if the discriminator does not degenerate, the extracted last-layer feature should maintain more information about the data points, especially compared to features from degenerated discriminators. Following this idea, we first train EGAN-Ent-NN, EGAN-Const, and GAN on MNIST till convergence, and then extract the last-layer activation from their discriminator networks as fixed feature input. Based on the fixed features, a randomly initialized linear classifier is trained to do classification on MNIST. Based on 10 runs (with different initializations) of each of the three models, the test classification performance is summarized in Table 3. For comparison purposes, we also include a baseline where the input features are extracted from a discriminator network with random weights.

Table 3: Test performance of linear classifiers based on last-layer discriminator features

Test error (%) | EGAN-Ent-NN | EGAN-Const | GAN | Random
Min | 1.160 | 1.280 | 1.220 | 3.260
Mean | 1.190 | 1.338 | 1.259 | 3.409
Std. | 0.024 | 0.044 | 0.032 | 0.124

Based on the proxy measure, EGAN-Ent-NN seems to maintain more information about the data, which suggests that the discriminator from our proposed formulation is more informative. Despite the positive result, it is important to point out that maintaining information about categories does not necessarily mean maintaining information about the energy (density). Thus, this proxy measure should be understood cautiously.

[Figure 9 (image grids omitted): panels (a) EGAN-Ent-NN, (b) EGAN-Const, (c) GAN.]
Figure 9: 1000 generated and test images (bounding box) ranked according to their assigned energies"}]
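As a companion to the linear-probe proxy measure of B.5 above, here is a minimal sketch of that evaluation; the array names and the plain full-batch training loop are our own assumptions, not the paper's implementation.

```python
import numpy as np

def linear_probe_error(features_train, y_train, features_test, y_test,
                       epochs=20, lr=0.1, seed=0):
    """Train a linear softmax classifier on frozen discriminator features.

    `features_*` are last-layer activations extracted from a converged
    discriminator (hypothetical arrays here); only the linear head is learned.
    `y_*` are integer class labels.
    """
    rng = np.random.default_rng(seed)
    n_feat = features_train.shape[1]
    n_cls = int(y_train.max()) + 1
    W = 0.01 * rng.standard_normal((n_feat, n_cls))
    b = np.zeros(n_cls)
    for _ in range(epochs):
        logits = features_train @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(len(y_train)), y_train] -= 1.0  # softmax - one-hot
        W -= lr * (features_train.T @ probs) / len(y_train)
        b -= lr * probs.mean(axis=0)
    pred = (features_test @ W + b).argmax(axis=1)
    return float((pred != y_test).mean())
```

Averaging this test error over several random initializations reproduces the kind of comparison reported in Table 3.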
S1_pAu9xl
[{"section_index": "0", "section_name": "TRAINED TERNARY OUANTIZATION", "section_text": "Chenzhuo Zhu\nTsinghua University\nzhuczl3@mails.tsinqhua.edu.cn\nDeep neural networks are widely used in machine learning applications. However. the deployment of large neural networks models can be difficult to deploy on mobile. devices with limited power budgets. To solve this problem, we propose Trained. Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values. This method has very little accuracy degradation. and can even improve the accuracy of some models (32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. And our AlexNet model is trained from. scratch, which means it's as easy as to train normal full precision model. We. highlight our trained quantization method that can learn both ternary values and ternary assignment. During inference, only ternary values (2-bit weights) and scaling factors are needed, therefore our models are nearly 16 smaller than full. precision models. Our ternary models can also be viewed as sparse binary weight networks, which can potentially be accelerated with custom circuit. Experiments on CIFAR-10 show that the ternary models obtained by trained quantization method. outperform full-precision models of ResNet-32,44,56 by 0.04%, 0.16%, 0.36% respectively. On ImageNet, our model outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and outperforms previous ternary models by 3%"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In this paper, we propose Trained Ternary Quantization which uses two full-precision scaling coefficients WP, Wn for each layer l, and quantize the weights to {-Wn, O, +WP} instead of traditional {-1, 0, +1} or {-E, 0, +E} where E is the mean of the absolute weight value, which is not learned. Our positive and negative weights have different absolute values WP and Wn that are trainable parameters. We also maintain latent full-precision weights at training time, and discard them at test time. We back propagate the gradient to both WP, Wr and to the latent full-precision weights. This makes it possible to adjust the ternary assignment (i.e. which of the three values a weight is assigned).\nOur quantization method, achieves higher accuracy on the CIFAR-10 and ImageNet datasets. For AlexNet on ImageNet dataset, our method outperforms previously state-of-art ternary network(Li &\nSong Han\nStanford University"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep neural networks are becoming the preferred approach for many machine learning applications. However, as networks get deeper, deploying a network with a large number of parameters on a small device becomes increasingly difficult. Much work has been done to reduce the size of networks. Half-. precision networks (Amodei et al.]2015) cut sizes of neural networks in half. XNOR-Net (Rastegari. et al.2016), DoReFa-Net (Zhou et al.2016) and network binarization (Courbariaux et al.f 2015 Lin et al.]2015) use aggressively quantized weights, activations and gradients to further reduce. computation during training. While weight binarization benefits from 32 smaller model size, the. extreme compression rate comes with a loss of accuracy.Hubara et al.(2016) and Li & Liu(2016 propose ternary weight networks to trade off between model size and accuracy..\nLiu2016) by 3.0% of Top-1 accuracy and the full-precision model by 1.6%. 
By converting most of the parameters to 2-bit values, we also compress the network by about 16x. Moreover, the advantage. of few multiplications still remains, because WP and Wn are fixed for each layer during inference On custom hardware, multiplications can be pre-computed on activations, so only two multiplications per activation are required."}, {"section_index": "3", "section_name": "2 MOTIVATIONS", "section_text": "First, a smaller model means less overhead when exporting models to clients. Take autonomous driving for example; Tesla periodically copies new models from their servers to customers' cars Smaller models require less communication in such over-the-air updates, making frequent updates more feasible. Another example is on Apple Store; apps above 100 MB will not download until you connect to Wi-Fi. It's infeasible to put a large DNN model in an app. The second issue is energy consumption. Deep learning is energy consuming, which is problematic for battery-constrained mobile devices. As a result, iOS 10 requires iPhone to be plugged with charger while performing photo analysis. Fetching DNN models from memory takes more than two orders of magnitude more energy than arithmetic operations. Smaller neural networks require less memory bandwidth to fetch the model, saving the energy and extending battery life. The third issue is area cost. When deploying DNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model can be stored directly on-chip, and smaller models enable a smaller ASIC die.\nSeveral previous works aimed to improve energy and spatial efficiency of deep networks. One common strategy proven useful is to quantize 32-bit weights to one or two bits, which greatly reduces model size and saves memory reference. However, experimental results show that compressed weights usually come with degraded performance, which is a great loss for some performance sensitive applications. The contradiction between compression and performance motivates us to work on trained ternary quantization, minimizing performance degradation of deep neural networks while saving as much energy and space as possible."}, {"section_index": "4", "section_name": "3.1 BINARY NEURAL NETWORK (BNN)", "section_text": "Lin et al.[(2015) proposed binary and ternary connections to compress neural networks and speed u computation during inference. They used similar probabilistic methods to convert 32-bit weights int binary values or ternary values, defined as:\nHere wb and wt denote binary and ternary weights after quantization. w denotes the latent full precision weight.\nDuring back-propagation, as the above quantization equations are not differentiable, derivatives of expectations of the Bernoulli distribution are computed instead, yielding the identity function.\nHere L is the loss to optimize\nFor BNN with binary connections, only quantized binary values are needed for inference. Therefor a 32 smaller model can be deployed into applications.\nThe potential of deep neural networks, once deployed to mobile devices, has the advantage of lower latency, no reliance on the network, and better user privacy. However, energy efficiency becomes the bottleneck for deploying deep neural networks on mobile devices because mobile devices are battery constrained. 
Zhou et al. (2016) proposed DoReFa-Net, which quantizes the weights, activations and gradients of neural networks using different bit widths. Therefore, with a specifically designed low-bit multiplication algorithm or hardware, both the training and inference stages can be accelerated.

They also introduced a much simpler method to quantize 32-bit weights to binary values, defined as:

$$w^b = \mathbb{E}(|\tilde{w}|) \times \text{sign}(\tilde{w}) \tag{3}$$

Li & Liu (2016) proposed TWN (Ternary Weight Networks), which reduce the accuracy loss of binary networks by introducing zero as a third quantized value. They use two symmetric thresholds ±Δ_l and a scaling factor W_l for each layer l to quantize the weights into {-W_l, 0, +W_l}:

$$w_l^t = \begin{cases} W_l, & \tilde{w}_l > \Delta_l \\ 0, & |\tilde{w}_l| \le \Delta_l \\ -W_l, & \tilde{w}_l < -\Delta_l \end{cases} \tag{4}$$

They then solve an optimization problem of minimizing the L2 distance between the full-precision and ternary weights to obtain layer-wise values of W_l and Δ_l:

$$\Delta_l = 0.7 \times \mathbb{E}(|\tilde{w}_l|), \qquad W_l = \underset{i \in \{i \,\mid\, |\tilde{w}_l(i)| > \Delta_l\}}{\mathbb{E}}\left(|\tilde{w}_l(i)|\right) \tag{5}$$

And again Equation (2) is used to calculate gradients. While an additional bit is required for ternary weights, TWN achieves a validation accuracy that is very close to that of full-precision networks according to their paper.

"}, {"section_index": "5", "section_name": "3.4 DEEP COMPRESSION", "section_text": "Han et al. (2015) proposed deep compression to prune away trivial connections and reduce the precision of weights. Unlike the above models, which use zero or symmetric thresholds to quantize high-precision weights, Deep Compression uses clusters to categorize weights into groups. In Deep Compression, low-precision weights are fine-tuned from a pre-trained full-precision network, and the assignment of each weight is established at the beginning and stays unchanged, while the representative value of each cluster is updated throughout fine-tuning.
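Before moving to the proposed method, the TWN baseline of Eqs. (4)-(5) can be sketched in a few lines; the function name and the guard for an empty mask are our own choices, not the TWN implementation.

```python
import numpy as np

def twn_quantize(w):
    """TWN layer-wise quantization following Eqs. (4)-(5).

    delta = 0.7 * E|w|; W_l is the mean |w| over weights above the threshold.
    """
    delta = 0.7 * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    W_l = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.where(mask, W_l * np.sign(w), 0.0), W_l, delta

w = np.random.default_rng(0).standard_normal((3, 3))
w_t, W_l, delta = twn_quantize(w)
```

Note that here both the threshold and the scale are deterministic functions of the latent weights, which is exactly the restriction the trained quantization below removes.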
"}, {"section_index": "6", "section_name": "4 METHOD", "section_text": "At inference time, we throw away the full-resolution weights and only use the ternary weights. During gradient descent we learn both the quantized ternary weights (the codebook), and choose which of these values is assigned to each weight (choosing the codebook index).

Our method is illustrated in Figure 1. First, we normalize the full-precision weights to the range [-1, +1] by dividing each weight by the maximum weight. Next, we quantize the intermediate full-resolution weights to {-1, 0, +1} by thresholding. The threshold factor t is a hyper-parameter that is the same across all the layers in order to reduce the search space. Finally, we perform trained quantization by back-propagating two gradients, as shown by the dashed lines in Figure 1. We back-propagate gradient1 to the full-resolution weights and gradient2 to the scaling coefficients. The former enables learning the ternary assignments, and the latter enables learning the ternary values.

[Figure 1 (diagram omitted): pipeline from the normalized full-precision weights, through intermediate {-1, 0, +1} quantization with threshold ±t, to the final ternary weights {-W^n, 0, +W^p}; gradient1 flows back to the full-precision weights and gradient2 to the scaling coefficients; only the ternary weights are used at inference time.]
Figure 1: Overview of the trained ternary quantization procedure

To learn the ternary values (codebook), we introduce two quantization factors W_l^p and W_l^n for the positive and negative weights in each layer l. During the feed-forward pass, the quantized ternary weights w_l^t are calculated as:

$$w_l^t = \begin{cases} W_l^p, & \tilde{w}_l > \Delta_l \\ 0, & |\tilde{w}_l| \le \Delta_l \\ -W_l^n, & \tilde{w}_l < -\Delta_l \end{cases} \tag{6}$$

Unlike previous work where the quantized weights are calculated from the 32-bit weights, the scaling coefficients W_l^p and W_l^n are two independent parameters and are trained together with the other parameters. Following the rule of gradient descent, the derivatives of W_l^p and W_l^n are calculated as:

$$\frac{\partial L}{\partial W_l^p} = \sum_{i \in I_l^p} \frac{\partial L}{\partial w_l^t(i)}, \qquad \frac{\partial L}{\partial W_l^n} = \sum_{i \in I_l^n} \frac{\partial L}{\partial w_l^t(i)} \tag{7}$$

Here I_l^p = {i | w̃_l(i) > Δ_l} and I_l^n = {i | w̃_l(i) < -Δ_l}. Furthermore, because of the existence of the two scaling factors, the gradients of the latent full-precision weights can no longer be calculated by Equation (2). We use scaled gradients for the 32-bit weights:

$$\frac{\partial L}{\partial \tilde{w}_l} = \begin{cases} W_l^p \times \dfrac{\partial L}{\partial w_l^t}, & \tilde{w}_l > \Delta_l \\[4pt] 1 \times \dfrac{\partial L}{\partial w_l^t}, & |\tilde{w}_l| \le \Delta_l \\[4pt] W_l^n \times \dfrac{\partial L}{\partial w_l^t}, & \tilde{w}_l < -\Delta_l \end{cases} \tag{8}$$

Note we use the scalar 1 as the factor for gradients of zero weights. The overall quantization process is illustrated in Figure 1. The evolution of the ternary weights from different layers during training is shown in Figure 2. We observe that as training proceeds, different layers behave differently: for the first quantized conv layer, the absolute values of W^p and W^n get smaller and sparsity gets lower, while for the last conv layer and the fully-connected layer, the absolute values of W^p and W^n get larger and sparsity gets higher.

We learn the ternary assignments (the index into the codebook) by updating the latent full-resolution weights during training. This may cause the assignments to change between iterations. Note that the thresholds are not constants, as the maximal absolute values change over time. Once an updated weight crosses the threshold, the ternary assignment is changed.

The benefits of using trained quantization factors are: i) the asymmetry W_l^p ≠ W_l^n enables neural networks to have more model capacity; ii) quantized weights play the role of "learning rate multipliers" during back-propagation.

[Figure 2 (plots omitted): per-layer W^n/W^p values over training iterations (above) and the distribution of negative, zero and positive weights (below) for res1.0/conv1, res3.2/conv2 and the linear layer.]
Figure 2: Ternary weight values (above) and distribution (below) over training iterations for different layers of ResNet-20 on CIFAR-10

In previous work on ternary weight networks, Li & Liu (2016) proposed TWN using ±Δ_l as thresholds to reduce 32-bit weights to ternary values, where ±Δ_l is defined as in Equation (5). They optimized the value of ±Δ_l by minimizing the expected L2 distance between the full-precision weights and the ternary weights. Instead of using a strictly optimized threshold, we adopt different heuristics: 1) use the maximum absolute value of the weights as a reference for the layer's threshold and maintain a constant factor t for all layers:

$$\Delta_l = t \times \max(|\tilde{w}_l|) \tag{9}$$

and 2) maintain a constant sparsity r for all layers throughout training. By adjusting the hyper-parameter r we are able to obtain ternary weight networks with various sparsities.
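Putting Eqs. (6)-(9) together, one forward/backward step of the quantizer can be sketched as below. This is our own minimal illustration under assumed names (`ttq_forward`, `ttq_backward`), not the authors' implementation, and it assumes the latent weights have already been normalized to [-1, 1].

```python
import numpy as np

def ttq_forward(w, Wp, Wn, t=0.05):
    """Quantize normalized latent weights using Eqs. (6) and (9)."""
    delta = t * np.max(np.abs(w))                    # Eq. (9)
    pos, neg = w > delta, w < -delta
    w_t = np.where(pos, Wp, np.where(neg, -Wn, 0.0))  # Eq. (6)
    return w_t, pos, neg

def ttq_backward(grad_wt, Wp, Wn, pos, neg):
    """Gradients for the scaling factors (Eq. (7)) and latent weights (Eq. (8))."""
    grad_Wp = grad_wt[pos].sum()
    grad_Wn = grad_wt[neg].sum()
    scale = np.where(pos, Wp, np.where(neg, Wn, 1.0))  # 1 for zero weights
    grad_w = scale * grad_wt
    return grad_w, grad_Wp, grad_Wn

rng = np.random.default_rng(0)
w = np.tanh(rng.standard_normal((8, 8)))
w_t, pos, neg = ttq_forward(w, Wp=1.1, Wn=0.9)
grad_w, gWp, gWn = ttq_backward(rng.standard_normal(w.shape), 1.1, 0.9, pos, neg)
```

Because `grad_w` rescales the incoming gradient by W^p or W^n, the quantized weights indeed act as per-weight learning-rate multipliers, as noted above.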
We use the first method and set t to 0.05 in the experiments on the CIFAR-10 and ImageNet datasets, and use the second one to explore a wider range of sparsities in Section 5.1.1.

We compare our model with the full-precision model and a binary-weight model. We train a full-precision ResNet (He et al., 2016) on CIFAR-10 as the baseline (blue line in Figure 3). We fine-tune the trained baseline network as a 1-32-32 DoReFa-Net, where weights are 1 bit and both activations and gradients are 32 bits, giving a significant loss of accuracy (green line). Finally, we fine-tune the baseline with trained ternary weights (red line). Our model has a substantial accuracy improvement over the binary-weight model, and our loss of accuracy over the full-precision model is small. We also compare our model to Ternary Weight Networks (TWN) on ResNet-20. Results show our model improves the accuracy by ~0.25% on CIFAR-10.

We use parameters pre-trained from a full-precision ResNet to initialize our model. The learning rate is set to 0.1 at the beginning and scaled by 0.1 at epochs 80, 120 and 300. An L2-normalized weight decay of 0.0002 is used as a regularizer. Most of our models converge after 160 epochs. We take a moving average over the errors of all epochs to filter off fluctuations when reporting error rates.

[Figure 3 (plot omitted): validation error on CIFAR-10 over 160 epochs for the full-precision, binary-weight and trained ternary models.]
Figure 3: ResNet-20 on CIFAR-10 with different weight precision

We expand our experiments to ternarize ResNet with 32, 44 and 56 layers. All ternary models are fine-tuned from full-precision models. Our results show that we improve the accuracy of ResNet-32, ResNet-44 and ResNet-56 by 0.04%, 0.16% and 0.36%, respectively. The deeper the model, the larger the improvement. We conjecture that this is due to ternary weights providing the right model capacity and preventing overfitting for deeper networks.

Table 1: Error rates of full-precision and ternary ResNets on CIFAR-10

Model | Full resolution | Ternary (Ours) | Improvement
ResNet-20 | 8.23 | 8.87 | -0.64
ResNet-32 | 7.67 | 7.63 | 0.04
ResNet-44 | 7.18 | 7.02 | 0.16
ResNet-56 | 6.80 | 6.44 | 0.36

"}, {"section_index": "7", "section_name": "5.1 IMAGENET", "section_text": "We further train and evaluate our model on ILSVRC12 (Russakovsky et al., 2015). ILSVRC12 is a 1000-category dataset with over 1.2 million images in the training set and 50 thousand images in the validation set. Images from ILSVRC12 also have various resolutions. We used a variant of the AlexNet (Krizhevsky et al., 2012) structure, removing the dropout layers and adding batch normalization (Ioffe & Szegedy, 2015), for all models in our experiments. The same variant is also used in the experiments described in the DoReFa-Net paper.

Our ternary model of AlexNet uses full-precision weights for the first convolution layer and the last fully-connected layer. The parameters of the other layers are all quantized to ternary values. We train our model on ImageNet from scratch using an Adam optimizer (Kingma & Ba, 2014). The minibatch size is set to 128. The learning rate starts at 10^-4 and is scaled by 0.2 at epochs 56 and 64. An L2-normalized weight decay of 5 × 10^-6 is used as a regularizer. Images are first resized to 256 × 256 and then randomly cropped to 224 × 224 before input. We report both the Top-1 and Top-5 error rates on the validation set.

We compare our model to a full-precision baseline, a 1-32-32 DoReFa-Net and TWN. After around 64 epochs, the validation error of our model dropped significantly compared to the other low-bit networks as well as the full-precision baseline. Finally our model reaches a Top-1 error rate of 42.5%, while DoReFa-Net gets 46.1% and TWN gets 45.5%. Furthermore, our model still outperforms the full-precision AlexNet (the batch normalization version, 44.1% according to the DoReFa-Net paper) by 1.6%, and is even better than the best reported AlexNet results (42.8%¹). The complete results are listed in Table 2.

Table 2: Top-1 and Top-5 error rate of AlexNet on ImageNet

¹ https://github.com/BVLC/caffe/wiki/Models-accuracy-on-ImageNet-2012-val
[Figure 4 (plots omitted): training and validation Top-1/Top-5 error curves of AlexNet on ImageNet for DoReFa-Net, TWN, our model and the full-precision baseline (with dropout); the full-precision reference lines are 42.8% (Top-1) and 19.8% (Top-5).]
Figure 4: Training and validation accuracy of AlexNet on ImageNet

We draw the training process in Figure 4; the baseline results of AlexNet are marked with dashed lines. Our ternary model effectively reduces the gap between training and validation performance, which appears to be quite large for DoReFa-Net and TWN. This indicates that adopting trainable W_l^p and W_l^n helps prevent models from overfitting to the training set.

We also report the results of our method on ResNet-18B in Table 3. The full-precision error rates are obtained from Facebook's implementation. Here we cite Binarized Weight Network (BWN; Rastegari et al., 2016) results with all layers quantized, and TWN fine-tuned from a full-precision network, while we train our TTQ model from scratch. Compared with BWN and TWN, our method obtains a substantial improvement.

Table 3: Top-1 and Top-5 error rate of ResNet-18 on ImageNet

Error | Full precision | 1-bit (BWN) | 2-bit (TWN) | 2-bit (Ours)
Top-1 | 30.4% | 39.2% | 34.7% | 33.4%
Top-5 | 10.8% | 17.0% | 13.8% | 12.8%

"}, {"section_index": "8", "section_name": "6 DISCUSSION", "section_text": "In this section we analyze the performance of our model with regard to weight compression and inference speed-up. These two goals are achieved through reducing the bit precision and introducing sparsity. We also visualize the convolution kernels in quantized convolution layers and find that basic patterns of edge/corner detectors are well learned from scratch even though the precision is low.

We save 16× storage for models by using ternary weights. Although switching from a binary-weight network to a ternary-weight network increases the bits per weight, it brings sparsity to the weights, which gives the potential to skip the computation on zero weights and achieve higher energy efficiency.

Figure 5 shows the relationship between sparsity and accuracy. As the sparsity of weights grows from 0 (a pure binary-weight network) to 0.5 (a ternary network with 50% zeros), both the training and validation error decrease. Increasing sparsity beyond 50% reduces the model capacity too far, increasing error. The minimum error occurs with sparsity between 30% and 50%.

We introduce only one hyper-parameter to reduce the search space. This hyper-parameter can be either the sparsity, or the threshold factor t w.r.t. the max value in Equation (9). We find that using the threshold produces better results. This is because fixing the threshold allows the sparsity of each layer to vary (Figure 2).

[Figure 5 (plot omitted): training and validation error versus the percentage of zero weights for ResNet-20, with the full-precision baseline shown for reference.]
Figure 5: Accuracy vs. sparsity on ResNet-20

SPARSITY AND EFFICIENCY OF ALEXNET

We notice that, without all quantized layers sharing the same t for Equation (9), our model achieves considerable sparsity in the convolution layers, where the majority of computation takes place. Therefore we are able to squeeze the forward time to less than 30% of that of full-precision networks.
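The storage accounting behind these numbers is easy to reproduce. The sketch below is our own illustration (the function name and the assumption of one W^p/W^n pair stored as two 32-bit scalars per layer are ours), not the paper's tooling.

```python
import numpy as np

def layer_stats(w_t, bits=2):
    """Density (as in Table 4) and compression vs. 32-bit floats for one layer."""
    density = float(np.count_nonzero(w_t)) / w_t.size
    size_bits = w_t.size * bits + 2 * 32   # ternary codes + Wp/Wn scalars
    full_bits = w_t.size * 32
    return density, full_bits / size_bits  # density, compression ratio

w_t = np.random.default_rng(0).choice([-1.0, 0.0, 1.0], size=(256, 256))
density, ratio = layer_stats(w_t)
print(f"density={density:.2f}, compression={ratio:.1f}x")   # about 16x
```

Skipping the zero entries on top of this (roughly the densities of Table 4 below) is what yields the additional speed-up quoted above.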
As for spatial compression, by substituting 32-bit weights with 2-bit ternary weights, our model is approximately 16× smaller than the original 32-bit AlexNet.

We visualize the quantized convolution kernels in Figure 6. The left matrix shows kernels from the second convolution layer (5 × 5) and the right one from the third (3 × 3). We pick the first 10 input channels and the first 10 output channels to display for each layer. Grey, black and white represent zero, negative and positive weights, respectively.

We observe similar filter patterns as in full-precision AlexNet. Edge and corner detectors of various directions can be found among the listed kernels. While these patterns are important for convolutional neural networks, the precision of each weight is not. Ternary-valued filters are capable enough of extracting key features after a full-precision first convolution layer, while saving unnecessary storage.

Table 4: AlexNet layer-wise sparsity

Layer | Full precision (Density / Width) | Pruning (NIPS'15) (Density / Width) | Ours (Density / Width)
conv1 | 100% / 32 bit | 84% / 8 bit | 100% / 32 bit
conv2 | 100% / 32 bit | 38% / 8 bit | 23% / 2 bit
conv3 | 100% / 32 bit | 35% / 8 bit | 24% / 2 bit
conv4 | 100% / 32 bit | 37% / 8 bit | 40% / 2 bit
conv5 | 100% / 32 bit | 37% / 8 bit | 43% / 2 bit
conv total | 100% / - | 37% / - | 33% / -
fc1 | 100% / 32 bit | 9% / 5 bit | 30% / 2 bit
fc2 | 100% / 32 bit | 9% / 5 bit | 36% / 2 bit
fc3 | 100% / 32 bit | 25% / 5 bit | 100% / 32 bit
fc total | 100% / - | 10% / - | 37% / -
All total | 100% / - | 11% / - | 37% / -

We further analyze the parameters of our AlexNet model. We calculate the layer-wise density (the complement of sparsity) as shown in Table 4. Although we use different W^p and W^n for each layer, ternary weights can be pre-computed when fetched from memory, so the multiplications during the convolution and inner-product process are still saved. Compared to Deep Compression, we accelerate inference using ternary values and, more importantly, we reduce the energy consumption of inference by saving memory references and multiplications, while achieving higher accuracy.

Furthermore, we find that there are a number of empty filters (all zeros) or filters with a single non-zero value in the convolution layers. More aggressive pruning can be applied to prune away these redundant kernels to further compress and speed up our model.

[Figure 6 (images omitted): quantized 5 × 5 and 3 × 3 convolution kernels.]
Figure 6: Visualization of kernels from ternary AlexNet trained on ImageNet

"}, {"section_index": "9", "section_name": "7 CONCLUSION", "section_text": "We introduce a novel neural network quantization method that compresses network weights to ternary values. We introduce two trained scaling coefficients W_l^p and W_l^n for each layer and train these coefficients using back-propagation. During training, the gradients are back-propagated both to the latent full-resolution weights and to the scaling coefficients. We use layer-wise thresholds that are proportional to the maximum absolute values to quantize the weights. When deploying the ternary network, only the ternary weights and scaling coefficients are needed, which reduces the parameter size by at least 16×. Experiments show that our model reaches or even surpasses the accuracy of full-precision models on both the CIFAR-10 and ImageNet datasets. On ImageNet we exceed the accuracy of prior ternary networks (TWN) by 3%.

"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "
Martin Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Fengfu Li and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.

Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.

Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016."}]
Hk-mgcsgx
[{"section_index": "0", "section_name": "AN INFORMATION RETRIEVAL APPROACH FOR FIND ING DEPENDENT SUBSPACES OF MULTIPLE VIEWS", "section_text": "Ziyuan Lin & Jaakko Peltonen\nDepartment of Computer Science, Aalto University, Finland, anc School of Information Sciences, University of Tampere, Finland.\nFinding relationships between multiple views of data is essential both in ex- ploratory analysis and as pre-processing for predictive tasks. A prominent ap- proach is to apply variants of Canonical Correlation Analysis (CCA), a classical method seeking correlated components between views. The basic CCA is re- stricted to maximizing a simple dependency criterion, correlation, measured di- rectly between data coordinates. We introduce a new method that finds dependent subspaces of views directly optimized for the data analysis task of neighbor re- trieval between multiple views. We optimize mappings for each view such as lin- ear transformations to maximize cross-view similarity between neighborhoods of data samples. The criterion arises directly from the well-defined retrieval task, de- tects nonlinear and local similarities, measures dependency of data relationships rather than only individual data coordinates, and is related to well understood measures of information retrieval quality. In experiments the proposed method outperforms alternatives in preserving cross-view neighborhood similarities, and yields insights into local dependencies between multiple views."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Finding dependent subspaces across views (subspaces where some property of data is statisticall. elated or similar across views) is a common data analysis need, where Canonical Correlation Ana ysis (CCA) (Hotelling, 1936) is a standard unsupervised tool. Preprocessing to find depende. subspaces is useful both for prediction and for analysis: in predictive tasks, such subspaces he. if non-dependent parts of each view may arise from noise and distortions. In some data analys. asks, finding the dependent subspaces may itself be the main goal; for example in bioinformati domains dependency seeking projections have been used to identify relationships between differer. views of cell activity (Tripathi et al., 2008; Klami et al., 2013); in signal processing a similar tas. could be identifying optimal filters for dependent signals of different nature, e.g., speech and th. corresponding tongue movements of the speakers as in Westbury (1994)..\nMethods like CCA maximize simple correlations between data point coordinate features across the. projected subspaces. However, in many data domains the coordinates may not be of main interes. but rather the data relationships that they reveal. It is then of great interest to develop dependency. seeking methods that directly focus on the data relationships. For example, consider a database o. scientists, defined in one view by their level of interest in various research topics, and in anothe. view by their level of interest in various hobbies. In a database like this, finding relationships oj. people is the common interest, e.g. to find nearest colleagues for a scientist, having the most similai. (neighboring) research interests; or to find hobby partners having the most similar (neighboring. hobby interests; the question is then, can we predict the research colleagues from hobby partners. or vice versa? 
Research topics and hobbies are very dissimilar views, and not all of their variation will be related, but we can try to find subspaces of research and hobby interests, so that research neighbors and hobby neighbors are as highly related as possible in those subspaces.

Ziyuan Lin and Jaakko Peltonen contributed equally to the paper.

In this paper we propose a method that solves this task: we present a novel method for seeking dependent subspaces across multiple views, preserving neighborhood relationships of data. Our method directly maximizes the between-view similarity of neighborhoods of data samples, a natural measure for similarity of data relationships among the views. The method detects nonlinear and local dependencies, has strong invariance properties, is related to an information retrieval task of the analyst, and performs well in experiments.

Relating data items is one of the main elementary tasks in Shneiderman's taxonomy of tasks in visual data analysis (Shneiderman, 1996). Our method is optimized for finding related (neighboring) data items, formulated as optimizing an information retrieval task. Since our method directly serves the task of relating data items (across views) in Shneiderman's taxonomy, in this sense it arguably comes closer to the needs of data analysts than maximizing some variant of coordinate correlation.

We find linear projections (linear subspaces) of views. Linear projections have the advantages of simplicity and easy interpretability with respect to the original data features. Even if the projections are linear, the dependency criterion we optimize is flexible and detects nonlinear dependencies across views.

We present our method in Section 2, properties and extensions in Section 3, related work in Section 4, experiments in Section 5, and conclusions in Section 6.

"}, {"section_index": "2", "section_name": "METHOD: DEPENDENT NEIGHBORHOODS OF VIEWS", "section_text": "Our method finds similar neighborhood relationships between views. We define the neighborhood relationships and then discuss how to measure their cross-view similarity. Instead of hard neighborhoods where two points are or are not neighbors, we use more realistic probabilistic neighborhoods.

Assume input data items x_i = (x_{i,1}, ..., x_{i,N_views}) have paired features x_{i,V} in each view V. We consider transformations of each view by a mapping f_V, which is typically a dimensionality-reducing transformation to a subspace of interest; in this paper, for simplicity and interpretability, we use linear mappings f_V(x_{i,V}) = W_V x_{i,V}, where dim_orig(V) and dim_low(V) are the numbers of dimensions of V and its subspace, respectively. The local neighborhood of a data item i in any transformation of view V can be represented by the conditional probability distribution p_{i,V} = {p_V(j | i; f_V)}, where p_V(j | i; f_V) gives the probability that data item j ≠ i is picked as a representative neighbor of i; that is, the probability that an analyst who inspected item i will next pick j for inspection. The p_V(j | i; f_V) can be defined in several ways, as a decreasing function of the distance d_V(i, j; f_V) between the features of i and j in view V. Here we define it by a simple exponential falloff with respect to the squared distance of i and j, as

$$p_V(j \mid i; f_V) = \frac{\exp\!\left(-d_V^2(i,j;f_V)/\sigma_{i,V}^2\right)}{\sum_{k \ne i} \exp\!\left(-d_V^2(i,k;f_V)/\sigma_{i,V}^2\right)} \tag{1}$$

where σ_{i,V} sets the falloff rate around i in the view. We tried two simple ways to set the σ_{i,V}: one is as a fraction of the maximal pairwise distance, so σ_{i,V} = 0.05 · max_{j,k} ‖x_{j,V} - x_{k,V}‖, or alternatively, set

$$\sigma_V = \sqrt{\frac{\dim_{\text{low}}(V)}{\dim_{\text{orig}}(V)}} \cdot \frac{1}{N_{\text{data}}} \sum_j \left\| x_{j,V} - \text{NN}_k(x_{j,V}) \right\|, \tag{2}$$

i.e., calculate the average distance between x_{j,V} and its k-th nearest neighbor NN_k(x_{j,V}), then give the average the heuristic correction factor √(dim_low(V)/dim_orig(V)), since the average distance is obtained in the original space yet σ_{i,V} is used in a subspace. We use the first simple σ_{i,V} for the artificial data experiments and the more data-driven second σ_V from (2) with k = 5 for the other experiments. Both choices give good results. Other local choices, e.g. to achieve a desired entropy, are possible; see Venna et al. (2010). With linear mappings the probabilities become

$$p_V(j \mid i; f_V) = \frac{\exp\!\left(-\|W_V(x_{i,V} - x_{j,V})\|^2/\sigma_{i,V}^2\right)}{\sum_{k \ne i} \exp\!\left(-\|W_V(x_{i,V} - x_{k,V})\|^2/\sigma_{i,V}^2\right)} \tag{3}$$

where the matrix W_V defines the subspace of interest for the view and also the distance metric within the subspace. Our method learns the mapping parameters W_V for each view.
Neighborhoods represented as probability distributions can be compared by difference measures. We discuss two measures for different purposes, and their information retrieval interpretations.

Kullback-Leibler divergence. For two distributions p = {p(j)} and q = {q(j)}, the Kullback-Leibler (KL) divergence is an information-theoretic asymmetric difference measure defined as

$$D_{\mathrm{KL}}(p, q) = \sum_j p(j) \log\frac{p(j)}{q(j)}. \tag{4}$$

The KL divergence is nonnegative and zero if and only if p = q. Traditionally it is interpreted as measuring the amount of extra coding length needed when coding examples with codes generated for distribution q when the samples actually come from distribution p. We treat views symmetrically and compute the symmetrized divergence (D_KL(p, q) + D_KL(q, p))/2.

KL divergence is related to an information retrieval criterion: D_KL(p, q) is the cost of misses in information retrieval of neighbors, when neighbors are retrieved using retrieval distribution q but they actually follow distribution p. D_KL(p, q) is also the cost of false neighbors when neighbors are retrieved using p but they actually follow q. The relationships were shown in Venna et al. (2010) and used to compare a reduced-dimensional neighborhood to an original one; we use them in a novel way to compare neighborhoods across (transformed) views of data. The symmetrized divergence is the total cost of both misses and false neighbors when neighbors following the distribution in one transformed view are retrieved from the other transformed view with its neighbor distribution.

The value of the KL divergence can depend highly on differences between individual probabilities p(j) and q(j). A single missed neighbor can yield a high divergence value: for any index j, if p(j) > δ for some δ > 0, then D_KL(p, q) → ∞ as q(j) → 0. In real-life multi-view data, differences between views may be unavoidable, so we prefer a less strict measure focusing more on the overall similarity of neighborhoods than on the severity of individual misses. We discuss such a measure below.

Angle cosine. A simple similarity measure between discrete distributions is the angle cosine between the distributions as vectors, that is, Cos(p, q) = (Σ_j p(j)q(j)) / √((Σ_j p(j)²)(Σ_j q(j)²)), which can be seen as the Pearson correlation coefficient between the elements of p and q;¹ it is thus a neighborhood correlation, a neighborhood-based analogue of the coordinate correlation cost function of CCA. The angle cosine is bounded above and below: it has highest value 1 if and only if p = q and lowest value 0 if the supports of p and q are nonoverlapping.

¹ To make the connection exact, typically correlation is computed after subtracting the mean from the coordinates; for neighbor distributions of n data items, the mean neighborhood probability is the data-independent value 1/(n - 1)², which can be subtracted from each sum term if an exact analogue to correlation is desired.
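Both measures defined so far are straightforward to compute from a pair of neighbor distributions; the sketch below is our own illustration (the small epsilon guarding the logarithm is our assumption, used only for numerical safety on empty entries).

```python
import numpy as np

def symmetrized_kl(p, q, eps=1e-12):
    """(D_KL(p, q) + D_KL(q, p)) / 2 for two neighbor distributions."""
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def neighborhood_cos(p, q):
    """Angle cosine Cos(p, q): a Pearson-style neighborhood correlation."""
    return float(np.dot(p, q) / np.sqrt(np.dot(p, p) * np.dot(q, q)))

# Toy neighbor distributions over 5 candidate neighbors of one item.
p = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
q = np.array([0.6, 0.2, 0.1, 0.05, 0.05])
print(symmetrized_kl(p, q), neighborhood_cos(p, q))
```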
Unlike Cos(p,q) Sim(p, q) favors sparse neighborhoods: it has highest value 1 if and only if p = q and p(j) = q(j) = 1 for only one element j, and lowest value O if the supports of p and q are nonoverlapping.\nThe information retrieval interpretation is: Sim(p, q) is a proportional count of true neighbors from p retrieved from q or vice versa. If p has K neighbors with near-uniform high probabilities p(j) 1/K and other neighbors have near-zero probabilities, and q has L neighbors with high probability q(j) ~ 1/L, then Sim(p,q) ~ M/KL where M is the number of neighbors for which both p and q are high (retrieved true neighbors). Thus Sim(p,q) rewards matching neighborhoods and favors sparse neighborhoods (small K and L). One advantage of this formulation is to avoid matching two neighborhoods that seem to match only because they are highly uninformative: for example if p and q are both uniform over all neighbors, they have the same probability values and would be \"similar' in a naive comparison of probability values, but both are actually simply uninformative about the choice of neighbors. Sim(p, q) would prefer a more sparse, more informative match, as desired.\nThe KL divergence is nonnegative and zero if and only if p = q. Traditionally it is interpreted to measure the amount of extra coding length needed when coding examples with codes generated for distribution q when the samples actually come from distribution p. We treat views symmetrically and compute the symmetrized divergence (DkL(p, q) + DkL(q, p))/2.\nAngle cosine. A simple similarity measure between discrete distributions is the angle cosine be-. tween the distributions as vectors, that is, Cos(p,q) = (jp(j)q(j))//(j(p(j))2)(j(q(j))2) which can be seen as the Pearson correlation coefficient between elements of p and q; it is thus a. neighborhood correlation-a neighborhood based analogue of the coordinate correlation cost func- tion of CCA.1 The angle cosine is bounded above and below: it has highest value 1 if and only if. p = q and lowest value O if supports of p and q are nonoverlapping..\nSim(p,q) =)p(j)q(j\n1 To make the connection exact, typically correlation is computed after substracting the mean from coordi-. nates; for neighbor distributions of n data items, the mean neighborhood probability is the data-independent value 1/(n -- 1)2 which can be substracted from each sum term if an exact analogue to correlation is desired."}, {"section_index": "4", "section_name": "2.2 FINAL COST AND OPTIMIZATION TECHNIQUE", "section_text": "We wish to evaluate similarity of neighborhoods between subspaces of each view, and optimize the subspaces to maximize the similarity, while favoring subspaces having sparse (informative) neigh borhoods for data items. We then evaluate similarities as Sim(pi,v, Pi,u) where pi,v = {pv(j|i; fv)} is the neighborhood distribution around data item i in the dependent subspace of view V and fy is the mapping (parameters) of the subspace, and pi,u = {pu(j|i; fu)} is the corresponding neighborhood distribution in the dependent subspace of view U having the mapping fu. As the objective function for finding dependent projections, we sum the above over each pair of views (U,V) and over the neighborhoods of each data item i, yielding\nNviews Nviews Ndata Ndata C(f1,... 
:,JNvie L L LL Pv(j[i; fv)pu(j[i; fu V=1 U=1,U/V i=1 j=1,j#i\nwhere, in the setting of linear mappings and neighborhoods with Gaussian falloffs, py is defined b (3) and is parameterized by the projection matrix Wy of the linear mapping..\nOptimization. The function C(f1, .:., fNview. ..) is a well-defined objective for dependent projections. and can be maximized with respect to the projection matrices Wy of each view. We use gradient. techniques for optimization, specifically limited memory Broyden-Fletcher-Goldfarb-Shanno (L-. BFGS). Even with L-BFGS, (6) can be hard to optimize due to several local optima. To find a. good local optimum, we optimize over L-BFGS rounds with a shrinking added penalty driving. the objective away from the worst local optima during the first rounds; we use the optimum in each round as initialization of the next. For the penalty we use KL divergence based dissimilarity between. neighborhoods, summed over neighborhoods of all data items i and view pairs (U, V), giving.\nNyiews Nviews Ndata (DkL(Pi.V,Pi.U)+DkL(Pi,U,Pi.v) CPenalty(f1, ..., fNviews 2 V=1 U=1,UV i=1\nBounding KL divergence by neighbor distribution smoothing. To bound the KL divergence, one. way is to give the neighbor distributions (1) a positive lower bound. In the spirit of the well-known Laplace smoothing in statistics, we revise the neighbor probabilities (1) as.\nPv(j|i;fv)=(exp(-d3(i,j;fv)/o,v)+e)(exp(-d?(i,k;fv)/o?v)+(Ndata-1)e)-\nwhere & > O is a small positive number. To keep notations simple, we still denote this smoothec neighbor distribution as pv(j|i; fv). To avoid over-complicated formulation and for consistency, we also use this version of neighbor probabilities in our objective function (6), even though the value o the objective is bounded by itself. We simply set & = 1e - 6 which empirically works well.\nShrinking the penalty. Even with bounded KL divergence, optimization stages need different amounts of penalty. At end of optimization, nearly no penalty is preferred, as views may not fully agree even with the best mapping. We shrink the penalty during optimization; the objective becomes\nCTotal(.f) yCPenalty\nTime complexity. We calculate the neighbor distributions for all views, and optimize the objec 4 at ( Views time, with d the maximal dimensionality among views. Acceleration techniques (Yang et al., 2013 Van Der Maaten, 2014; Vladymyrov & Carreira-Perpinan, 2014) from neighbor embedding could be adopted to reduce time complexity of a single view from O(Ndata) to O(Ndatalog Ndata) or even 1 O(Ndata). But scalability is not our first concern in this paper, so we use the naive O(Ndata) imple- mentation for calculating the neighbor distributions for each view involved.\nwhich is a function of all mapping parameters and can be optimized by L-BFGS; (7) penalizes severe misses (pairs (i, j) with nonzero neighborhood probability in one view but near-zero in another) driving the objective away from bad local optima. In practice KL divergence is too strict about misses; we use two remedies below.\nwe maximize count of retrieved true neighbors across views, and penalize by severity of misses. Invariances. For any subspace of any view, (1) and (3) depend only on pairwise distances and are thus invariant to global translation, rotation, and mirroring of data in that subspace. The cost is then invariant to differences of global translation, rotation, and mirroring between views and finds view dependencies despite such differences. 
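To make the pieces above concrete, here is a minimal NumPy sketch of the smoothed neighbor distributions (8) in the linear subspaces of (3), the objective (6), and the penalty (7), for the two-view case. It is an illustration under our own assumptions (a scalar σ per view, brute-force O(N_data^2) distances, and no gradient code; a faithful implementation would supply analytic gradients of these quantities to L-BFGS), and the function names are ours, not the paper's.

import numpy as np

def neighbor_probs(X, W, sigma, eps=1e-6):
    # Smoothed neighbor distributions (Eq. 8) in the linear subspace Y = X W^T (Eq. 3).
    # X: (n, d) view features; W: (d_low, d) projection; sigma: falloff scale.
    Y = X @ W.T
    sq = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)   # pairwise squared distances
    K = np.exp(-sq / sigma ** 2) + eps                          # Laplace-style smoothing
    np.fill_diagonal(K, 0.0)                                    # exclude j == i
    return K / K.sum(axis=1, keepdims=True)

def sim_objective(P):
    # Eq. (6): sum over ordered view pairs (V, U), V != U, of neighborhood inner products.
    return sum((Pa * Pb).sum() for a, Pa in enumerate(P)
               for b, Pb in enumerate(P) if a != b)

def kl_penalty(P):
    # Eq. (7): symmetrized KL divergence between corresponding neighborhoods.
    off = ~np.eye(P[0].shape[0], dtype=bool)   # skip the zeroed j == i entries
    total = 0.0
    for a, Pa in enumerate(P):
        for b, Pb in enumerate(P):
            if a != b:
                total += 0.5 * (Pa[off] * np.log(Pa[off] / Pb[off])).sum()
                total += 0.5 * (Pb[off] * np.log(Pb[off] / Pa[off])).sum()
    return total

# Example with two random views and 1D subspaces:
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(100, 5)), rng.normal(size=(100, 5))
W1, W2 = rng.normal(size=(1, 5)), rng.normal(size=(1, 5))
P = [neighbor_probs(X1, W1, 1.0), neighbor_probs(X2, W2, 1.0)]
value = sim_objective(P) - 0.5 * kl_penalty(P)   # Eq. (9) with gamma = 0.5 here

Sweeping the penalty weight γ down across successive L-BFGS rounds, with each round initialized at the previous optimum, reproduces the optimization schedule described above.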
Information retrieval. Our objective measures success in a neighbor retrieval task of the analyst: we maximize the count of retrieved true neighbors across views, and penalize by the severity of misses.
Invariances. For any subspace of any view, (1) and (3) depend only on pairwise distances and are thus invariant to global translation, rotation, and mirroring of data in that subspace. The cost is then invariant to differences of global translation, rotation, and mirroring between views, and finds view dependencies despite such differences. If in any subspace the data has isolated subsets (where data pairs from different subsets have zero neighbor probability), the invariance holds for local translation/rotation/mirroring of the subsets as long as they preserve the isolation.
Dependency is measured between whole subspaces. Unlike in CCA, where each canonical component of one view has a particular correlated pair in the other view, we maximize dependency with respect to the entire subspaces (transformed representations) of each view, as the neighborhoods of data depend on all coordinates within the dependent subspace. Our method thus takes into account within-view feature dependencies when measuring dependency. Moreover, the dependent subspaces do not need to be same-dimensional, and in some views we can choose not to reduce dimensionality but to learn a metric (full-rank linear transformation).
Finding dependent neighborhoods between feature-based views and views with external neighborhoods. In some domains, some data views may directly provide neighborhood relationships or similarities between data items, e.g., friendships in a social network, followerships in Twitter, or citations between scientific papers. Such relationships or similarities can be used in place of the feature-based neighborhood probabilities p_V(j|i; f_V) above. This shows an interesting similarity to a method (Peltonen, 2009) used to find similarities of one view to an external neighborhood definition; our method contains this task as a special case.
Other falloffs. The exponential falloff in (1) and (3) can be replaced with other forms like t-distributed neighborhoods (van der Maaten & Hinton, 2008). Such replacements preserve the invariances."}, {"section_index": "5", "section_name": "4 RELATED WORK", "section_text": "In general, multi-view learning (Xu et al., 2013) denotes learning models by leveraging multiple potentially dependent data views; such models could be built either for unsupervised tasks based on the features in the views, or for supervised tasks that involve additional annotations like categories of samples. In this paper we concentrate on unsupervised multi-view learning, and our prediction tasks of interest are predicting neighbors across views.
The standard Canonical Correlation Analysis (CCA) (Hotelling, 1936) iteratively finds component pairs maximizing the correlation between data points in the projected subspaces. Such correlation is a simple, restricted measure of linear and global dependency. To measure dependency in a more flexible way and to handle nonlinear local dependency, linear and nonlinear CCA variants have been proposed: Local CCA (LCCA, Wei & Xu 2012) seeks linear projections for local patches in both views that maximize correlation locally, and aligns the local projections into a global nonlinear projection; its variant Linear Local CCA (LLCCA) finds a linear approximation for the global nonlinear projection; Locality Preserving CCA (LPCCA, Sun & Chen 2007) maximizes a reweighted correlation between data coordinate differences in both views. In the experiments we compare to the well-known traditional CCA and to LPCCA as an example of recent state of the art.
As a more general framework, Canonical Divergence Analysis (Nguyen & Vreeken, 2015) minimizes a general divergence between distributions of data coordinates in the projected subspaces.
The methods mentioned above work on data coordinates in the original spaces. There are also nonlinear CCA variants (e.g., Lai & Fyfe 2000; Bach & Jordan 2003; Verbeek et al. 2003; Andrew et al. 2013; Wang et al.
2015; Hodosh et al. 2013) for detecting nonlinear dependency between multiple views. Although some of the variants above are locality-aware, they introduce locality from the original space before maximizing correlation or other similarity measures in the low-dimensional subspaces. Since locality in the original space may not reflect locality in the subspaces, due to noise or distortions, such criteria may not be suited for finding local dependencies in subspaces.
The CODE method (Globerson et al., 2007) creates an embedding of co-occurrence data of pairs of original categorical variables, mapping both variables into a shared space. Our method is not restricted to categorical inputs: its main applicability is to high-dimensional vectorial data, with several other advantages. In contrast to CODE, we find dependent subspaces (mappings) from multiple high-dimensional real-valued data views. Instead of restricting to a single mapping space, we find several mappings, one for each view, which do not need to go into the same space; our output spaces can even be different-dimensional for each view. Unlike CODE, our method is not restricted to maximizing coordinate similarity: we only need to make neighborhoods similar, which is more invariant to various transformations.
The above methods and several in Xu et al. (2013) all maximize correlation or alternative dependency measures between data coordinates across views. As we pointed out, in many domains coordinates are not of main interest, but rather the data relationships they reveal; we consider neighborhood relationships, and our method directly finds subspaces having similar neighborhoods."}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "We demonstrate our method on artificial data with multiple dependent groups between views, and on three real data sets: a variant of MNIST digits (LeCun & Cortes, 2010), video data, and stock prices. In this paper we restrict our method to find linear subspaces, important in many applications for interpretability, and compare with two prominent linear subspace methods for multiple views, CCA and LPCCA. To our knowledge, no other information-retrieval-based approach for finding linear subspaces was known at the time we did the experiments. Future work could compare methods for nonlinear mappings (Xu et al., 2013) to variants of ours for the same mapping; we do not focus on the mapping choice, and focus on showing the benefit of our neighborhood-based objective.
On the artificial data set, we measure performance by the correspondence between the found projections and the ground truth. On the real data we use mean precision-mean recall curves, a natural performance measure for information retrieval tasks and a measure of dependency, as argued in Section 2.
Experiment on artificial data sets. We generate an artificial data set with 2 views, with multiple dependent groups in each pair of corresponding dimensions, as follows. Denote the views V ∈ {1, 2}; each view is a data matrix X^(V) ∈ R^{5×100}, generated so that each pair of corresponding dimensions (rows) X_i^(1), X_i^(2) contains multiple dependent groups of values. We then randomly permute the entries of X^(1) and X^(2) in the same way within each dimension pair, but differently for each i, to remove cross-dimension correlation, and assemble the resulting X_i into X^(V).
We pursue 2 transformations mapping from the 5D original space to a 1D latent space for each of the two views. The ground truth projections for both views will then be w^(i) = (δ_ij)_{j=1}^{5} ∈ R^5. Results are in Fig. 1: compared with CCA, our method successfully finds one of the ground truth transformations (here the 5th one), despite mirroring and scale, recovering the between-view dependency.
We measure performance by the correspondence between the found projections and the ground truth transformation: let W_1, W_2 ∈ R^5 be the projections found by a method, and define
Corr(W_1, W_2) = max_i (|w^(i)T W_1| / ||W_1||_2 + |w^(i)T W_2| / ||W_2||_2) / 2    (10)
as the correspondence score. A high score means a good match between the found projections and the ground truth. We repeat the experiment, calculating the correspondence on 20 artificial data sets generated in the same way. The table in Figure 1 summarizes the statistics. Our method outperforms CCA and LPCCA (with k = 5), finding the dependency on all 20 data sets.
[Figure 1 panels: "Dim 5 in view 1 vs. dim 5 in view 2"; "Transformed coordinates found by the proposed method"; "Transformed coordinates found by CCA". Accompanying table of the correspondence measure, mean / std: Our method 1.00 / 0.00; CCA 0.51 / 0.043; LPCCA 0.51 / 0.043.]
Figure 1: Result for artificial data with dependent groups. Figures (left to right): one of the ground truths; the 1D subspace pair recovered by our method; the 1D subspace pair recovered by CCA. Our method recovers the dependency between views in the 5th dimension despite mirroring and scale differences; CCA fails to do so. Table: means and standard deviations of the correspondence measure (10); our method outperforms CCA and LPCCA, recovering the dependency in all artificial data sets.
MNIST handwritten digit database (MNIST). MNIST contains 70000 gray-scale hand-written digit images of size 28 × 28. We create a training set and a testing set with 2000 images each. In the training set, we randomly choose 200 images from each digit to balance the distribution, while the testing set is another 2000 random images without balancing. We apply a nonlinear dimensionality reduction algorithm on both the left half and the right half of the images to create the two views, to simulate a scenario where the views have nonlinear dependency; we use Neighbor Retrieval Visualizer (NeRV, Venna et al. 2010) embeddings to 5 dimensions with λ = 0.1 and λ = 0.9 respectively. The experiment is repeated 17 times, covering 68000 images.
Image patches from video (Video). We take image patches from a subset of frames in a video (db.tt/rcAS5tII). Starting from frame 50, we take 5 consecutive frames as a short clip at every 50 frames until frame 5200, then create two views from image patches in two fixed rectangles in those frames, rect1 = [800, 900] × [250, 350] and rect2 = [1820, 1920] × [800, 900]. We measure 5-fold cross-validation performance after randomly permuting the clips.
Stock prices (Stock), from the Kaggle "Winton stock market challenge" (goo.gl/eqdhKK). It contains prices of a stock at different times. We split each time series in the given training set into two halves, and let view 1 be the amplitudes from the Fourier transform of the first half, and view 2 be the phases from the Fourier transform of the second half.
For each data set we seek a pair of transformations onto 2D subspaces for the views. We measure performance by mean precision-mean recall curves of neighbor retrieval between 1) the two subspaces from the transformations, and 2) one of the original views and the subspace from the transformation for the other view.
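The two quantitative measures used in this section are simple to state in code. Below is a small NumPy sketch of the correspondence score of Eq. (10) (for the artificial data, where the ground-truth axes are standard basis vectors) and of mean precision and mean recall for cross-view neighbor retrieval; the brute-force distance computation and the helper names are our own choices, made for clarity rather than scale.

import numpy as np

def correspondence(W1, W2):
    # Eq. (10): W1, W2 are the found 5D -> 1D projection vectors. Since the
    # ground-truth axes w^(i) are the standard basis, w^(i)^T W is simply W[i].
    s = (np.abs(W1) / np.linalg.norm(W1) + np.abs(W2) / np.linalg.norm(W2)) / 2.0
    return s.max()

def knn(Y, k):
    # Indices of the k nearest neighbors of every row of Y (brute force).
    d = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def mean_precision_recall(Y_truth, Y_other, k, r):
    # Retrieve r neighbors in one (transformed) view; the k nearest neighbors
    # in the other act as ground truth, as in the curves of Fig. 2.
    truth, got = knn(Y_truth, k), knn(Y_other, r)
    hits = np.array([len(set(t) & set(g)) for t, g in zip(truth, got)], float)
    return (hits / r).mean(), (hits / k).mean()   # (mean precision, mean recall)

Sweeping r from 1 to 10, as in the experiments below, traces one precision-recall curve per method.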
The better the performance, the more to the top (better mean precision) and to the right (better mean recall) the curve lies in the figure. We set the number of neighbors in the ground truth to 5 for MNIST and Stock and to 4 for Video, and let the number of retrieved neighbors vary from 1 to 10, as we focus on the performance of the matching for the nearest neighbors. We compare the performance with CCA and LPCCA. Figure 2 (columns 1-3) shows the results.
We now show that our method can find dependent subspaces for multiple (more than two) views. In this experiment we use the Cell-Cycle data with five views. The views are from different measurements of cell cycle-regulated gene expression for the same set of 5670 genes (Spellman et al., 1998). We preprocess the data as in Tripathi et al. (2008), with an extra feature normalization step. We seek five two-dimensional subspaces from the five views, comparing to a PCA baseline with 2 components. We again use mean precision-mean recall curves as the performance measure, additionally averaging the curves across the 10 view pairs or view-transformed coordinate pairs, besides averaging over the five repetitions in 5-fold cross-validation. Figure 2 (column 4) shows we outperform the baseline.
[Figure 2 plot panels, two rows by four columns (MNIST, Video, Stock, Cell-Cycle): mean precision (vertical axis) versus mean recall (horizontal axis) curves for the proposed method, CCA, LPCCA, and PCA.]
Figure 2: Mean precision-mean recall curves for the different real data sets on the test sets. Top row: view 1 as the ground truth; bottom row: the subspace from view 1 as the ground truth. The curves from our method are to the top and/or right of the curves from the other methods in most parts of all sub-figures, meaning our method achieves better precision and recall on average. Columns 1-3: our method outperforms CCA and LPCCA; column 4: our method outperforms PCA.
In a further experiment on Lissajous curves, we seek a one-dimensional subspace from X^(1) and a two-dimensional subspace from X^(2); the aim is to find the nonlinear dependency between the one-dimensional timestamps and a two-dimensional representation for the three trajectories summarizing the two-dimensional movements of the three points along Lissajous curves. Figure 3 shows the Lissajous curves, the found subspaces, and the optimized projection pair. Our method successfully finds the informative feature in X^(1), and keeps the transformed coordinates of X^(2) smooth, with roughly the same number of "quick turns" as in the original Lissajous curves. The magnitudes in the optimized projections also suggest they capture the correct dependency.
[Figure 3 panels: the Lissajous curves, the learned subspaces ("Embedding 1", "Embedding 2"), and the projection weight vectors P(1) and P(2).]
Figure 3: Lissajous curves (a) and the found subspaces from our method. (b)-(d) show that we find the correct dependency. (b): the perfect linear correlation shows the time dimension was found. (c): the number of "quick turns" (14 in total) in the smooth curves roughly matches that in the original curves. (d): projection weights, where darker color means smaller magnitude; the high magnitude of P(1)'s first entry and the complementary pattern in P(2) suggest we capture the dependency correctly.
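For readers who wish to reproduce the flavor of this experiment, the following illustrates how the two Lissajous views can be constructed. The number of samples, the curve frequencies, and the use of noise dimensions alongside the timestamps are our own assumptions; the paper does not specify these values.

import numpy as np

# Illustrative Lissajous views: X1 holds one-dimensional timestamps (plus, here,
# irrelevant noise dimensions among which the method must find the informative
# one), and X2 stacks the 2D positions of three points moving along Lissajous
# curves x(t) = sin(a t + delta), y(t) = sin(b t).
n = 1000
t = np.linspace(0.0, 2.0 * np.pi, n)
rng = np.random.default_rng(0)
X1 = np.column_stack([t, rng.normal(size=(n, 4))])        # view 1: time + noise
curves = [np.column_stack([np.sin(a * t + d), np.sin(b * t)])
          for (a, b, d) in [(3, 2, 0.5), (5, 4, 1.0), (7, 6, 1.5)]]
X2 = np.hstack(curves)                                     # view 2: 6D positions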
We presented a novel method for seeking dependent subspaces across multiple views, preserving neighborhood relationships of data. It has strong invariance properties, detects nonlinear dependencies, is related to an information retrieval task of the analyst, and performs well in experiments.
H. Hotelling. Relations between two sets of variates. Biometrika, 28(3-4):321-377, 1936.
A. Klami, S. Virtanen, and S. Kaski. Bayesian canonical correlation analysis. J. Mach. Learn. Res., 14:965-1003, 2013.
Y. LeCun and C. Cortes. MNIST handwritten digit database, 2010.
H. Nguyen and J. Vreeken. Canonical divergence analysis. CoRR, abs/1510.08370, 2015.
B. Shneiderman. The eyes have it: A task by data type taxonomy for information visualizations. In Proc. IEEE Symposium on Visual Languages, pp. 336-343. IEEE Computer Society Press, 1996.
P. T. Spellman, G. Sherlock, M. Q. Zhang, V. R. Iyer, K. Anders, M. B. Eisen, P. O. Brown, D. Botstein, and B. Futcher. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Molecular Biology of the Cell, 9(12):3273-3297, 1998.
T. Sun and S. Chen. Locality preserving CCA with applications to data visualization and pose estimation. Image Vision Comput., 25(5):531-543, 2007.
A. Tripathi, A. Klami, and S. Kaski. Simple integrative preprocessing preserves what is shared in data sources. BMC Bioinformatics, 9:111, 2008.
L. van der Maaten. Accelerating t-SNE using tree-based algorithms. J. Mach. Learn. Res., 15(1):3221-3245, January 2014.
L. van der Maaten and G. Hinton. Visualizing data using t-SNE. J. Mach. Learn. Res., 9:2579-2605, 2008.
J. Venna, J. Peltonen, K. Nybo, H. Aidos, and S. Kaski. Information retrieval perspective to nonlinear dimensionality reduction for data visualization. J. Mach. Learn. Res., 11:451-490, 2010.
J. J. Verbeek, S. T. Roweis, and N. A. Vlassis. Non-linear CCA and PCA by alignment of local models. In NIPS, pp. 297-304. MIT Press, 2003.
M. Vladymyrov and M. A. Carreira-Perpinan. Linear-time training of nonlinear low-dimensional embeddings. In AISTATS, volume 33, 2014.
W. Wang, R. Arora, K. Livescu, and J. Bilmes. On deep multi-view representation learning. In ICML, 2015."}]
Sk2Im59ex
[{"section_index": "0", "section_name": "UNSUPERVISED CROSS-DOMAIN IMAGE GENERATION", "section_text": "Yaniv Taigman, Adam Polyak & Lior Wolf\nyaniv, adampolyak, wolf}@fb.com\nWe study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given representation function f, which accepts inputs in either domains, would remain unchanged. Other than f, the training data is unsupervised and consist of a set of samples from each domain, without any mapping between them. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f preserving component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Humans excel in tasks that require making analogies between distinct domains, transferring ele ments from one domain to another, and using these capabilities in order to blend concepts that. originated from multiple source domains. Our experience tells us that these remarkable capabilities are developed with very little, if any, supervision that is given in the form of explicit analogies..\nRecent achievements replicate some of these capabilities to some degree: Generative Adversarial. Networks (GANs) are able to convincingly generate novel samples that match that of a given training. set; style transfer methods are able to alter the visual style of images; domain adaptation methods. are able to generalize learned functions to new domains even without labeled samples in the targel domain and transfer learning is now commonly used to import existing knowledge and to make. learning much more efficient.\nThese capabilities, however, do not address the general analogy synthesis problem that we tackle in this work. Namely, given separated but otherwise unlabeled samples from domains S and T and a perceptual function f, learn a mapping G : S -> T such that f(x) ~ f(G(x)).\nIn order to solve this problem, we make use of deep neural networks of a specific structure in which the function G is a composition of the input function f and a learned function g. A compound loss that integrates multiple terms is used. One term is a Generative Adversarial Network (GAN) term that encourages the creation of samples G(x) that are indistinguishable from the training samples of the target domain, regardless of x E S or x E T. The second loss term enforces that for every x in the source domain training set, f(x) - f(G(x))|| is small. The third loss term is a regularizer that encourages G to be the identity mapping for all x E T.\nThe type of problems we focus on in our experiments are visual, although our methods are not limited to visual or even to perceptual tasks. Typically, f would be a neural network representation that is taken as the activations of a network that was trained, e.g., by using the cross entropy loss, in order to classify or to capture identity\nAs a main application challenge, we tackle the problem of emoji generation for a given facial image. Despite a growing interest in emoji and the hurdle of creating such personal emoji manually, no. 
system has been proposed, to our knowledge, that can solve this problem. Our method is able to"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "produce face emoji that are visually appealing and capture much more of the facial characteristics than the emoji created by well-trained human annotators who use the conventional tools.\nWhile originally proposed for generating random samples, GANs can be used as a general tool. to measure equivalence between distributions. Specifically, the optimization of D corresponds tc taking the most discriminative D achievable, which in turn implies that the indistinguishability is. true for every D. Formally, Ganin et al.(2016) linked the GAN loss to the H-divergence between. two distributions ofBen-david et al.(2006).\nThe generative architecture that we employ is based on the successful architecture ofRadford et al.. (2015). There has recently been a growing concern about the uneven distribution of the samples generated by G - that they tend to cluster around a set of modes in the target domain (Salimans. et al.[[2016). In general, we do not observe such an effect in our results, due to the requirement to generate samples that satisfy specific f-constancy criteria.\nA few contributions (\"Conditional GANs\") have employed GANs in order to generate samples fron a specific class (Mirza & Osindero]2014), or even based on a textual description (Reed et al.]2016) When performing such conditioning, one can distinguish between samples that were correctly gen erated but fail to match the conditional constraint and samples that were not correctly generatec This is modeled as a ternary discriminative function D (Reed et al.2016 Brock et al.]2016).\nAnother class of very promising generative techniques that has recently gained traction is neural style transfer. In these methods, new images are synthesized by minimizing the content loss with respect to one input sample and the style loss with respect to one or more input samples. The content loss is typically the encoding of the image by a network training for an image categorization task. similar to our work. The style loss compares the statistics of the activations in various layers of the neural network. We do not employ style losses in our method. While initially style transfer was obtained by a slow optimization process (Gatys et al.[2016), recently, the emphasis was put on feed-forward methods (Ulyanov et al.]2016] Johnson et al.|2016).\nThere are many links between style transfer and our work: both are unsupervised and generate a. sample under f constancy given an input sample. However, our work is much more general in its. scope and does not rely on a predefined family of perceptual losses. Our method can be used in order. to perform style transfer, but not the other way around. Another key difference is that the current style transfer methods are aimed at replicating the style of one or several images, while our work considers a distribution in the target space. In many applications, there is an abundance of unlabeled. data in the target domain T, which can be modeled accurately in an unsupervised manner..\nGiven the impressive results of recent style transfer work, in particular for face images, one might get the false impression that emoji are just a different style of drawing faces. By way of analogy this claim is similar to stating that a Siamese cat is a Labrador in a different style. Emoji differ from facial photographs in both content and style. 
As far as we know, the domain transfer problem we formulate is novel despite being ecological (i.e., appearing naturally in the real world), widely applicable, and related to cognitive reasoning (Fauconnier & Turner, 2003). In the discussion below, we survey recent GAN work, compare our work to the recent image synthesis work, and make links to unsupervised domain adaptation.
GAN (Goodfellow et al., 2014) methods train a generator network G that synthesizes samples from a target distribution given noise vectors. G is trained jointly with a discriminator network D, which distinguishes between samples generated by G and a training set from the target distribution. The goal of G is to create samples that are classified by D as real samples.
The recent work by Dosovitskiy & Brox (2016) has shown promising results for learning to map embeddings to their pre-images, given input-target pairs. Like us, they employ a GAN as well as additional losses in the feature- and the pixel-space. Their method is able to invert the mid-level activations of AlexNet and reconstruct the input image. In contrast, we solve the problem of unsupervised domain transfer and apply the loss terms in different domains: a pixel loss in the target domain, and a feature loss in the source domain.
In the other direction, given the function f, one can invert f in the domain T by generating training samples (f(x), x) for x ∈ T and learn from them a function h from f(T) = {f(x) | x ∈ T} to T. Domain adaptation can then be used in order to map f(S) = {f(x) | x ∈ S} to T, thus achieving domain transfer. Based on the work by Zhmoginov & Sandler (2016), we expect that h, even in the target domain of emoji, will be hard to learn, making this solution hypothetical at this point.
Given a set s of unlabeled samples in a source domain S sampled i.i.d. according to some distribution D_S, a set of samples in the target domain t ⊂ T sampled i.i.d. from distribution D_T, a function f from the domain S ∪ T, some metric d, and a weight α, we wish to learn a function G : S → T that minimizes the combined risk R = R_GAN + α R_CONST, which is comprised of
R_GAN = max_D E_{x∼D_S} log[1 - D(G(x))] + E_{x∼D_T} log[D(x)]    (1)
where D is a binary classification function from T, D(x) is the probability of the class 1 it assigns to a sample x ∈ T, and
R_CONST = E_{x∼D_S} d(f(x), f(G(x)))    (2)
The first term is the adversarial risk, which requires that for every discriminative function D, the samples from the target domain would be indistinguishable from the samples generated by G for samples in the source domain. An adversarial risk is not the only option: an alternative term that does not employ GANs would directly compare the distribution D_T to the distribution of G(x), where x ∼ D_S, e.g., by using the KL-divergence.
The second term is the f-constancy term, which requires that f is invariant under G. In practice, we have experimented with multiple forms of d, including Mean Squared Error (MSE) and cosine distance, as well as other variants including metric learning losses (hinge) and triplet losses. The performance is mostly unchanged, and we report results using the simplest MSE solution.
Similarly to other GAN formulations, one can minimize the loss associated with the risk R over G, while maximizing it over D, where G and D are deep neural networks and the expectations in R are replaced by summations over the corresponding training sets. However, this baseline solution, as we will show experimentally, does not produce desirable results.
1 The function trained this way would be more accurate on S than on T. This asymmetry is shared with all experiments done in this work.
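In executable form, the baseline risk can be sketched as follows, with the expectations replaced by batch means as described above. This is an illustrative PyTorch sketch, not the authors' code; G, D and f are assumed to be given modules, D returning a single logit, and training would alternate maximization over D with minimization over G.

import torch.nn.functional as F

def baseline_risk(G, D, f, x_s, x_t, alpha):
    # Sketch of R = R_GAN + alpha * R_CONST (Eqs. 1-2). D(x) is sigmoid(logit),
    # so log(1 - D(x)) is computed stably as logsigmoid(-logit).
    gx = G(x_s)
    r_gan = F.logsigmoid(-D(gx)).mean() + F.logsigmoid(D(x_t)).mean()   # Eq. 1, for a fixed D
    r_const = F.mse_loss(f(gx), f(x_s))                                 # Eq. 2 with d = MSE
    return r_gan + alpha * r_const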
In the computer vision literature, work has been done to automatically generate sketches from images; see Kyprianidis et al. (2013) for a survey. These systems are able to emphasize image edges and facial features in a convincing way. However, unlike our method, they require matching pairs of samples, and were not shown to work across two distant domains as in our method. Due to the lack of supervised training data, we did not try to apply such methods to our problems. However, one can assume that if such methods were appropriate for emoji synthesis, automatic face emoji services would be available.
Unsupervised domain adaptation addresses the following problem: given a labeled training set in S × Y, for some target space Y, and an unlabeled set of samples from domain T, learn a function h : T → Y (Chen et al., 2012; Ganin et al., 2016). One can solve the sample transfer problem (our problem) using domain adaptation and vice versa. In both cases, the solution is indirect. In order to solve domain adaptation using domain transfer, one would learn a function from S to Y and use it as the input method of the domain transfer algorithm in order to obtain a map from S to T (see footnote 1). The training samples could then be transferred to T and used to learn a classifier there.
We suggest employing a more elaborate architecture that contains two high-level modifications. First, we employ f(x) as the baseline representation for the function G. Second, we consider, during training, the generated samples G(x) for x ∈ t.
The first change is stated as G = g ∘ f, for some learned function g. By applying this, we focus the learning effort of G on the aspects that are most relevant to R_CONST. In addition, in most applications f is not as accurate on T as it is on S. The composed function, which is trained on samples from both S and T, adds layers on top of f, which adapt it.
The second change alters the form of L_GAN, making it multiclass instead of binary. It also introduces a new term L_TID that requires G to be the identity mapping on samples from T. Taken together and written in terms of training losses, we now have two losses, L_D and L_G = L_GANG + α L_CONST + β L_TID + γ L_TV, for some weights α, β, γ, where
L_D = - Σ_{x∈s} log D_1(g(f(x))) - Σ_{x∈t} log D_2(g(f(x))) - Σ_{x∈t} log D_3(x)    (3)
L_GANG = - Σ_{x∈s} log D_3(g(f(x))) - Σ_{x∈t} log D_3(g(f(x)))    (4)
L_CONST = Σ_{x∈s} d(f(x), f(g(f(x))))    (5)
L_TID = Σ_{x∈t} d_2(x, G(x))    (6)
and where D is a ternary classification function from the domain T to {1, 2, 3}, D_i(x) is the probability it assigns to class i = 1, 2, 3 for an input sample x, and d_2 is a distance function in T. During optimization, L_G is minimized over g and L_D is minimized over D. See Fig. 1 for an illustration of our method.
[Figure 1 diagram: the modules f and g and the discriminator D, with the losses L_CONST, L_TID, L_GAND, and L_GANG.]
Figure 1: The Domain Transfer Network. Losses are drawn with dashed lines, input/output with solid lines. After training, the forward model G is used for the sample transfer.
Eqs. 3 and 4 make sure that the generated analogy, i.e., the output of G, is in the target space T. Since D is ternary and can therefore confuse classes in more than one way, this role, which is captured by Eq. 1 in the baseline formulation, is split in two. However, the two equations do not enforce any similarity between the source sample x and the generated G(x). This is done by Eqs. 5 and 6. Eq. 5 enforces f-constancy for x ∈ S, while Eq. 6 enforces that for samples x ∈ T, which are already in the target space, G is the identity mapping. The latter is a desirable behavior: e.g., for the cartooning task, given an input emoji, one would like it to remain constant under the mapping of G. It can also be seen as an autoencoder type of loss, applied only to samples from T. The experiments reported in Sec. 5 evaluate the contributions of L_CONST and L_TID and reveal that at least one of these is required, and that when employing only one loss, L_CONST leads to a better performance than L_TID.
The last loss, L_TV, is an anisotropic total variation loss (Rudin et al., 1992; Mahendran & Vedaldi, 2015), which is added in order to slightly smooth the resulting image. The loss is defined on the generated image z = [z_{ij}] = G(x) as
L_TV(z) = Σ_{i,j} ((z_{i,j+1} - z_{i,j})^2 + (z_{i+1,j} - z_{i,j})^2)^{B/2}    (7)
In our work, MSE is used for both d and d_2. We also experimented with replacing d_2, which, in visual domains, compares images, with a second GAN. No noticeable improvement was observed. Throughout the experiments, the adaptive learning rate method Adam by Kingma & Ba (2016) is used as the optimization algorithm.
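A minimal PyTorch-style sketch of Eqs. 3-7 is given below, assuming d = d_2 = MSE as in the text. The module interfaces, the class-to-index convention, and the helper name are our own assumptions, and the total variation term is written in its squared-difference (B = 2) form; batching, the detaching needed for the alternating updates, and the Adam loop are omitted.

import torch
import torch.nn.functional as F

def dtn_losses(f, g, D, x_s, x_t, alpha, beta, gamma):
    # One evaluation of L_D (Eq. 3) and L_G = L_GANG + a*L_CONST + b*L_TID + g*L_TV.
    fs, ft = f(x_s), f(x_t)
    Gs, Gt = g(fs), g(ft)                          # G = g o f on both domains
    # D outputs 3 logits; index 0 <-> class 1 (G of source), index 1 <-> class 2
    # (G of target), index 2 <-> class 3 (real target sample).
    p_s = F.log_softmax(D(Gs), dim=1)
    p_t = F.log_softmax(D(Gt), dim=1)
    p_x = F.log_softmax(D(x_t), dim=1)
    L_D = -(p_s[:, 0].sum() + p_t[:, 1].sum() + p_x[:, 2].sum())       # Eq. 3
    L_GANG = -(p_s[:, 2].sum() + p_t[:, 2].sum())                      # Eq. 4
    L_CONST = F.mse_loss(f(Gs), fs, reduction='sum')                   # Eq. 5
    L_TID = F.mse_loss(Gt, x_t, reduction='sum')                       # Eq. 6
    # Eq. 7 (squared-difference variant) on the generated source images.
    L_TV = (((Gs[..., :, 1:] - Gs[..., :, :-1]) ** 2).sum()
            + ((Gs[..., 1:, :] - Gs[..., :-1, :]) ** 2).sum())
    L_G = L_GANG + alpha * L_CONST + beta * L_TID + gamma * L_TV
    return L_D, L_G

In the alternating updates, L_D would be minimized over D with the generator outputs detached, and L_G minimized over g with D fixed; f is pre-trained and kept fixed throughout.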
The Domain Transfer Network (DTN) is evaluated in two application domains: digits and face images. In the first domain, we transfer images from the Street View House Numbers (SVHN) dataset of Netzer et al. (2011) to the domain of the MNIST dataset by LeCun & Cortes (2010). In the face domain, we transfer a set of random and unlabeled face images to a space of emoji images. In both cases, the source and target domains differ considerably.
[Figure 2 image grids (a) and (b).]
Figure 2: Domain transfer in two visual domains. Input in odd columns; output in even columns. (a) Transfer from SVHN to MNIST. (b) Transfer from face photos (Facescrub dataset) to emoji.
Table 1: Accuracy of the MNIST classifier on the samples transferred by our DTN method from SVHN to MNIST.
Baseline method (Sec. 3): 13.71%
DTN: 90.66%
DTN w/o L_TID: 88.40%
DTN w/o L_CONST: 74.55%
DTN, G does not contain f: 36.90%
DTN w/o L_D and L_GANG: 34.70%
DTN w/o L_CONST & L_TID: 5.28%
Original SVHN image: 40.06%"}, {"section_index": "3", "section_name": "5.1 DIGITS: FROM SVHN TO MNIST", "section_text": "For working with digits, we employ the extra training split of SVHN, which contains 531,131 images, for two purposes: learning the function f, and as the unsupervised training set s for the domain transfer method. The evaluation is done on the test split of SVHN, comprised of 26,032 images. The architecture of f consists of four convolutional layers with 64, 128, 256, 128 filters respectively, each followed by max pooling and a ReLU non-linearity. The error on the test split is 4.95%. Even though this accuracy is far from the best reported results, it seems to be sufficient for the purpose of domain transfer. Within the DTN, f maps a 32 × 32 RGB image to the activations of the last convolutional layer, of size 128 × 1 × 1 (post a 4 × 4 max pooling and before the ReLU). In order to apply f on MNIST images, we replicate the grayscale image three times.
Network g, inspired by Radford et al. (2015), maps SVHN-trained f's 128D representations to 32 × 32 grayscale images. g employs four blocks of deconvolution, batch normalization, and ReLU, with a hyperbolic tangent terminal. The architecture of D consists of four batch-normalized convolutional layers and employs ReLU. See Radford et al. (2015) for more details on the networks' architecture. In the digit experiments, the results were obtained with the tradeoff hyperparameters α = β = 15. We did not observe a need to add a smoothness term, and the weight of L_TV was set to γ = 0.
The set t contains the test set of the MNIST dataset. For supporting quantitative evaluation, we have trained a classifier on the train set of the MNIST dataset, consisting of the same architecture as f. The accuracy of this classifier on the test set approaches perfect performance, at 99.4% accuracy, and it is, therefore, trustworthy as an evaluation metric. In comparison, the network f achieves 76.08% accuracy on t.
Despite not being very accurate on both domains (and also considerably worse than the SVHN state of the art), we were able to achieve visually appealing domain transfer, as shown in Fig. 2(a). In order to evaluate the contribution of each of the method's components, we have employed the MNIST classifier on the set of samples G(s_TEST) = {G(x) | x ∈ s_TEST}, using the true SVHN labels of the test set.
We first compare to the baseline method of Sec. 3, where the generative function, which works directly with samples in S, is composed of a few additional layers at the bottom of G. The results, shown in Tab. 1, demonstrate that DTN has a clear advantage over the baseline method. In addition, the contribution of each one of the terms in the loss function is shown in the table. The regularization term L_TID seems less crucial than the constancy term; however, at least one of them is required in order to obtain good performance. The GAN constraints are also important. Finally, the inclusion of f within the generator function G has a dramatic influence on the results.
As explained in Sec. 2, domain transfer can be used in order to perform unsupervised domain adaptation. For this purpose, we transformed the set s to the MNIST domain (as above), and, using the true labels of s, employed a simple nearest neighbor classifier there. The choice of classifier was to emphasize the simplicity of the approach; however, the constraints of the unsupervised domain transfer problem would be respected for any classifier trained on G(s). The results of this experiment are reported in Tab. 2, which shows a clear advantage over the state-of-the-art method of Ganin et al. (2016). This is true both when transferring the samples of the set s and when transferring the test set of SVHN, which is much smaller and was not seen during the training of the DTN.
Table 2: Domain adaptation from SVHN to MNIST.
Table 3: Comparison of recognition accuracy of the digit 3 as generated in MNIST.
Method: Accuracy of '3'
DTN: 94.67%
'3' was not shown in s: 93.33%
'3' was not shown in t: 40.13%
'3' was not shown in both s and t: 60.02%
'3' was not shown in s, t, and during the training of f: 4.52%"}, {"section_index": "4", "section_name": "5.1.1 UNSEEN DIGITS", "section_text": "Another set of experiments was performed in order to study the ability of the domain transfer network to overcome the omission of a class of samples. This type of ablation can occur in the source or the target domain, or during the training of f, and can help us understand the importance of each of these inputs. The results are shown visually in Fig. 3 and qualitatively in Tab. 3, based on the accuracy of the MNIST classifier only on the transferred samples from the test set of SVHN that belong to class '3'.
It is evident that not including the class in the source domain is much less detrimental than eliminating it from the target domain. This is the desirable behavior: never seeing any '3'-like shapes in t, the generator should not generate such samples. Results are better when not observing '3' in both s and t than when not seeing it only in t, since in the latter case G learns to map source samples of '3' to target images of other classes.
[Figure 3 image grids (a)-(f): subsets of the digit '3'.]
Figure 3: A random subset of the digit '3' from SVHN, transferred to MNIST. (a) The input images. (b) Results of our DTN. In all plots, the cases keep their respective locations, and are sorted by the probability of '3' as inferred by the MNIST classifier on the results of our DTN. (c) The obtained results, in which the digit 3 was not shown as part of the set s of unlabeled samples from SVHN. (d) The obtained results, in which the digit 3 was not shown as part of the set t of unlabeled samples in MNIST. (e) The digit 3 was not shown in both s and t. (f) The digit 3 was not shown in s, t, and during the training of f.
Table 4: Comparison of retrieval accuracy out of a set of 100,001 face images, for either the manually created emoji or the ones created by the DTN method.
Manual | Emoji by DTN
Median rank: 16311 | 16
Mean rank: 27992.34 | 535.47
Rank-1 accuracy: 0% | 22.88%
Rank-5 accuracy: 0% | 34.75%"}, {"section_index": "5", "section_name": "5.2 FACES: FROM PHOTOS TO EMOJI", "section_text": "For face images, we use a set s of one million random images without identity information. The set t consists of assorted facial avatars (emoji) created by an online service (bitmoji.com). The emoji images were processed by a fully automatic process that localizes, based on a set of heuristics, the center of the irides and the tip of the nose. Based on these coordinates, the emoji were centered and scaled into 152 × 152 RGB images.
As the function f, we employ the representation layer of the DeepFace network (Taigman et al., 2014). This representation is 256-dimensional and was trained on a labeled set of four million images that does not intersect the set s. Network D takes 152 × 152 RGB images (either natural or scaled-up emoji) and consists of 6 blocks, each containing a convolution with stride 2, batch normalization, and a leaky ReLU with a parameter of 0.2. Network g maps f's 256D representations to 64 × 64 RGB images through a network with 5 blocks, each consisting of an upscaling convolution, batch normalization and ReLU. Adding 1 × 1 convolutions to each block resulted in lower L_CONST training errors, and made g 9 layers deep. We set α = 100, β = 1, γ = 0.05 as the tradeoff hyperparameters within L_G via validation. As expected, higher values of α resulted in better f-constancy; however, they introduced artifacts such as general noise or distortions. The network was trained until no further reduction of the validation error was observed on L_CONST.
In order to upscale the 64 × 64 output to print quality, we used the method of Dong et al. (2015), which was shown to work well on art. We did not retrain this network for our application, and apply the published one to the final output of our method after its training was finished. Results without this upscale are shown, for comparison, in Appendix C.
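The unsupervised domain adaptation protocol behind Tab. 2 above is short enough to sketch. The following assumes G returns arrays and compares flattened pixels with a brute-force 1-nearest-neighbor rule; both are our own simplifications of the "simple nearest neighbor classifier" described in the text, written for clarity rather than scale.

import numpy as np

def nn_domain_adaptation(G, x_s, y_s, x_test, y_test):
    # Map the labeled SVHN set s into the MNIST domain with the trained G,
    # then label each MNIST test image by its nearest transferred neighbor.
    gallery = G(x_s).reshape(len(x_s), -1)
    probes = x_test.reshape(len(x_test), -1)
    d = ((probes[:, None, :] - gallery[None, :, :]) ** 2).sum(axis=-1)
    return (y_s[d.argmin(axis=1)] == y_test).mean()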
Comparison With Human Annotators. For evaluation purposes only, a team of professional annotators manually created an emoji, using a web service, for 118 random images from the CelebA dataset (Yang et al., 2015). Fig. 4 shows, side by side, samples of the original image, the human-generated emoji, and the emoji generated by the learned generator function G. As can be seen, the automatically generated emoji tend to be more informative, albeit less restrictive, than the ones created manually.
[Figure 4 image grid.]
Figure 4: Shown, side by side, are sample images from the CelebA dataset, the emoji images created manually using a web interface (for validation only), and the result of the unsupervised DTN. See Tab. 4 for the retrieval performance.
In order to evaluate the identifiability of the resulting emoji, we have collected a second example for each identity in the set of 118 CelebA images and a set s' of 100,000 random face images, which were not included in s. We then employed the VGG face CNN descriptor of Parkhi et al. (2015) in order to perform retrieval as follows. For each image x in our manually annotated set, we create a gallery s' ∪ x', where x' is the other image of the person in x. We then perform retrieval using the VGG face descriptor, using either the manually created emoji or G(x) as the probe.
The VGG network is used in order to avoid a bias that might be caused by using f both for training the DTN and for evaluation. The results are reported in Tab. 4. As can be seen, the emoji generated by G are much more discriminative than the emoji created manually, and obtain a median rank of 16 in cross-domain identification out of 10^5 distractors.
Multiple Images Per Person. We evaluate the visual quality that is obtained per person, and not just per image, by testing DTN on the Facescrub dataset (Ng & Winkler, 2014). For each person p, we considered the set of their images X_p, and selected the emoji that was most similar to its source image under f, i.e., the emoji G(x) for
x = argmin_{x∈X_p} ||f(x) - f(G(x))||
This simple heuristic seems to work well in practice; the general problem of mapping a set X ⊂ S to a single output in T is left for future work. Fig. 2(b) contains several examples from the Facescrub dataset. For the complete set of identities, see Appendix A.
Transferring both identity and expression. We also experimented with multiple expressions. As it turns out, the face identification network f encodes enough expression information to support a successful transfer of both identity as well as expression; see Appendix B.
Network Visualization. The obtained mapping g can serve as a visualization tool for studying the properties of the face representation. This is studied in Appendix D by computing the emoji generated for the standard basis of R^256. The resulting images present a large amount of variability, indicating that g does not present a significant mode effect.
Fig. 5(a-c) demonstrates that neural style transfer (Gatys et al., 2016) cannot solve the photo-to-emoji transfer task in a convincing way. The output image is perhaps visually appealing; however, it does not belong to the space t of emoji. Our results are given in Fig. 5(d) for comparison. Note that DTN is able to fix the missing hair in the image.
Domain transfer is more general than style transfer in the sense that we can perform style transfer using a DTN. In order to show this, we have transformed, using the method of Johnson et al. (2016), the training images of CelebA based on the style of a single image (shown in Fig. 5(e)). The original photos were used as the set s, and the transformed images were used as t. Applying a DTN using the face representation f, we obtained styled face images such as the one shown in Fig. 5(f).
[Figure 5 image panels (a)-(f).]
Figure 5: Style transfer as a specific case of Domain Transfer. (a) The input content photo. (b) An emoji taken as the input style image. (c) The result of applying the style transfer method of Gatys et al. (2016). (d) The result of the emoji DTN. (e) Source image for style transfer. (f) The result, on the same input image, of a DTN trained to perform style transfer."}, {"section_index": "6", "section_name": "6 DISCUSSION AND LIMITATIONS", "section_text": "Asymmetry is central to our work. Not only does our solution handle the two domains S and T differently, the function f is unlikely to be equally effective in both domains, since in most practical cases f would be trained on samples from one domain. While an explicit domain adaptation step can be added in order to make f more effective on the second domain, we found it to be unnecessary. Adaptation of f occurs implicitly, due to the application of D downstream.
Using the same function f, we can replace the roles of the two domains, S and T. For example, we can synthesize an SVHN image that resembles a given MNIST image, or synthesize a face that matches an emoji. As expected, this yields less appealing results due to the asymmetric nature of f and the lower information content in these new source domains; see Appendix E.
Domain transfer, as an unsupervised method, could prove useful across a wide variety of computational tasks. Here, we demonstrate the ability to use domain transfer in order to perform unsupervised domain adaptation. While this is currently only shown in a single experiment, the simplicity of performing domain adaptation and the fact that state-of-the-art results were obtained effortlessly with a simple nearest neighbor classifier suggest it to be a promising direction for future research."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In NIPS, pp. 137-144, 2006.
Andrew Brock, Theodore Lim, J. M. Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093, 2016.
Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. arXiv preprint arXiv:1501.00092, 2015.
Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. CoRR, abs/1602.02644, 2016.
Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, pp. 2960-2967, 2013.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Francois Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. JMLR, 17(1):2096-2030, January 2016.
JMLR, 17(1):2096-2030, January 2016.\n(a) (b) (c) (d) (e) (f)\n(a) (b) (c) (d) (e) (f)\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPs, pp. 2672-2680 2014.\nJustin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer anc super-resolution. In ECCV, 2016.\nD.P. Kingma and J. Ba. Adam: A method for stochastic optimization. In The International Confer ence on Learning Representations (ICLR). 2016.\nAlec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.\nScott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee Generative adversarial text to image synthesis. In ICML, 2016.\nTim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016\nYaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap tc human-level performance in face verification. In CVPR, 2014..\nAndrey Zhmoginov and Mark Sandler. Inverting face embeddings with convolutional neural net works. arXiv preprint arXiv:1606.04189, 2016.\nShuo Yang, Ping Luo, Chen Change Loy, and Xiaoou Tang. From facial parts responses to face detection: A deep learning approach. In ICCV. pp. 3676-3684. 2015"}, {"section_index": "8", "section_name": "FACESCRUB DATASET GENERATIONS", "section_text": "D\nFigure 6: All 80 identities of the Facescrub dataset. The even columns show the results obtained for the images in the odd column to the left. Best viewed in color and zoom\nFigure 7: Maintaining expression in the domain transfer. In order to support a smiling expression. random smiling emoji were added to set of unlabeled samples from the domain T and the DTN was re-trained. Each quadruplet include two pairs of {face, emoji} of the same identity in the two modes respectively: not-smiling and smiling. Odd columns are input; Subsequent even columns are output..\nf may encode, in addition to identity, other data that is desirable to transfer. In the example o faces, this information might include expression, facial hair, glasses, pose, etc. In order to transfer such information, it is important that the set of samples in the target domain t present variability along the desirable dimensions. Otherwise, the GAN applied in the target domain (Eq. 4) woulc maintain these dimensions fixed. The set t employed throughout our experiments in Sec.5.2|was constructed by sampling emoji of neutral expression. To support a smiling expression for example we simply added to set t random smiling emoji and re-trained the DTN. The results, presented ir Fig.7] demonstrate that f contains expression information in addition to identity information, anc that this information is enough in order to transfer smiling photos to smiling emoji.\nFor completion, we present, in Fig.10|results obtained by performing domain transfer using DTNs in the reverse direction of the one reported in Sec.5.\nFigure 8: The images in Fig.4|above with (right version) and without (left version) applying super resolution. 
Best viewed on screen.
Figure 9: The emoji visualization of the standard basis vectors in the space of the face representation, i.e., g(e_1), ..., g(e_256), where e_i is the i-th standard basis vector in R^256.
Figure 10: Domain transfer in the other direction (see limitations in Sec. 6). Input (output) in odd (even) columns. (a) Transfer from MNIST to SVHN. (b) Transfer from emoji to face photos."}]
SJiFvr9el
[{"section_index": "0", "section_name": "LINEAR TIME COMPLEXITY DEEP FOURIER SCATTERING NETWORK AND EXTENSION TO NONLINEAR INVARIANTS", "section_text": "Randall Balestriero\nDepartment of Electrical and Computer Engineering Rice University"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "In this paper we propose a scalable version of a state-of-the-art deterministic time invariant feature extraction approach based on consecutive changes of basis anc nonlinearities, namely, the scattering network. The first focus of the paper is tc extend the scattering network to allow the use of higher order nonlinearities as well as extracting nonlinear and Fourier based statistics leading to the required in variants of any inherently structured input. In order to reach fast convolutions anc to leverage the intrinsic structure of wavelets, we derive our complete model in th Fourier domain. In addition of providing fast computations, we are now able tc exploit sparse matrices due to extremely high sparsity well localized in the Fourie domain. As a result, we are able to reach a true linear time complexity with in puts in the Fourier domain allowing fast and energy efficient solutions to machine learning tasks. Validation of the features and computational results will be pre sented through the use of these invariant coefficients to perform classification or audio recordings of bird songs captured in multiple different soundscapes. In the end, the applicability of the presented solutions to deep artificial neural networks is discussed."}, {"section_index": "2", "section_name": "1.1 BACKGROUND", "section_text": "Invariants are the gems of machine learning enabling key latent space representations of given in. puts. Following this analogy, precious invariants shine out by being discriminative enough to detect. changes in the underlying data distribution yet with bounded variations to ensure stable and robust representations. The motivation to find informative invariants as latent representations is becom-. ing the principal focus from deep learning to signal processing communities aiming to tackle most machine learning tasks. Undoubtedly, given infinite datasets and computational power, learning in. variants will lead to the fittest descriptors. However, nowadays problems do not fit this situation. forcing the use of nested architectures and supercomputers to reach, in the limit, these utopian de-. scriptors. As an alternative, the scattering network (Mallat]2012) Bruna & Mallat]2013) Anden & Mallat|2014) provides a deterministic transformation of a given input signal x through a cascade of. linear and nonlinear operators which do not commute. The linear operator is able via a dimensional. increase to linearize the representation in the sense that x can be expressed as a linear combination. of basic yet fundamental structures. This linear transformation is in practice a wavelet transform. but can be generalized to any complete or over-complete change of basis. Originally. these wavelet\ntransforms were used with an over-complete basis derived from Morlet and Gabor wavelets (Mal- lat]1989). Recently, a discrete wavelet transform scheme (Mallat] [1999) and specifically a Haar transform (Chen et al.]2014) has been used instead to reduce the computational overload of the scattering transform. This framework, however, is not suited for general tasks due to poor frequency resolution of one wavelet per octave and the not continuously differentiable Haar wavelet (Graps. 
1995), making it unsuitable for biological and natural waveform detection. Following the change of basis, a k-Lipschitz nonlinear operator is applied, which must also be contractive to enforce space contraction and thus bound the output variations (Mallat 2016). The nonlinearity used in the scattering network is the complex modulus, which is piecewise linear. This surjection aims to map the transformation into a subspace of smaller radius {|x|, x ∈ Ω} ⊂ Ω, where Ω is the space of studied signals. As a result, one can see these successions of transforms as a suite of expansions and contractions of a deeper and deeper signal representation, in the hope to decode, detect and separate all the underlying energy sources. These layered representations, however, still contain the time dimension and are thus overall extremely sparse and not translation invariant. This time sensitivity motivates the aggregation of the time dimension. This led to the scattering coefficients per se, which are computed through the application of the operator S applied on each previously computed representation and each frequency band. It is defined as an order one statistic, the arithmetic mean, over the time dimension, leading to a time-invariant central tendency description of the layered representation of x. The resulting scattering coefficients, when used as features in machine learning tasks, led to state-of-the-art results in music genre classification (Chen & Ramadge 2013), texture classification (Sifre & Mallat 2013; Bruna & Mallat 2013) and object classification (Oyallon & Mallat 2015). In this paper, we present a modification of the standard scattering network, replacing the complex modulus nonlinearity with a quadratic nonlinearity in order to increase the SNR while allowing the computation of the complete scattering network without leaving the Fourier domain, which was previously necessary after each level of decomposition."}, {"section_index": "3", "section_name": "1.2 SCATTERING NETWORK", "section_text": "We now present formally all the steps involved in the scattering network, in order to clarify notation and concepts while making it as explicit as possible to easily develop our extensions. For readers more familiar with neural networks, it is first important to note that this framework can be seen as a restricted case of a Convolutional Neural Network (LeCun & Bengio 1995), where the filters are fixed wavelets and the nonlinearity is the complex modulus, with some topological differences, as depicted in Fig. 1. The scattering coefficients are then computed through time averaging of each representation."}, {"section_index": "4", "section_name": "1.2.1 HIERARCHICAL REPRESENTATION", "section_text": "By definition a scattering network can have any fixed number of layers L defined a priori. These layers are ordered in a hierarchical fashion so that the output of layer l is the input of layer l + 1. In the following, many presented properties and definitions hold for all l ∈ {1, ..., L}. Each layer is equipped with its own wavelet filter-bank. The finite set of continuous scale factors needed to derive the filter-bank is given as a geometric progression governed by two hyper-parameters, the number of wavelets per octave Q and the number
of octaves to decompose, J. The collection of scaling factors for layer l is denoted by Λ^(l). The Q parameter, also called the quality criterion, defines the frequency resolution: the greater it is, the finer the resolution, but the more redundant the representation. The J parameter defines the number of octaves to decompose. Since these parameters can be layer specific, we now denote them as Q^(l) and J^(l). We thus have

Λ^(l) = { 2^{j/Q^(l)}, j = 0, ..., J^(l) Q^(l) − 1 }.

The only admissibility condition that each filter must satisfy is to have zero mean:

∫ ψ_λ(t) dt = 0 ⟺ ψ̂_λ(0) = 0, ∀λ ∈ Λ^(l).

Figure 1: Architecture difference between the CNN (A) and the scattering network (B), without depiction of the computation of the scattering coefficients, which are obtained after averaging over each obtained representation.

When the L filter-banks are generated, it is possible to compute the L representations by iteratively applying the filter-banks and the nonlinearity. As a result, the l-th representation, indexed by the time dimension t with the l first scales as hyper-parameters, is given by

X^(0)[x](t) := x(t),
X^(1)_{λ1}[x](t) := |(X^(0)[x] ∗ ψ^(1)_{λ1})(t)|, ∀λ1 ∈ Λ^(1),
X^(l)_{λ1,...,λl}[x](t) := |(X^(l−1)_{λ1,...,λ_{l−1}}[x] ∗ ψ^(l)_{λl})(t)|, ∀λ1 ∈ Λ^(1), ..., λl ∈ Λ^(l).

One can notice that the X^(1)_{λ1}[x] coefficients form the well known wavelet transform, or scalogram. We now denote by X^(l)[x] the complete time-frequency representation for layer l when the scales are not necessary in the context.
Thinking of deeper layers as representations of more abstract concepts is inappropriate, and thus not analogous to deep neural network representations, simply because deeper layer filters are not linear combinations of the first layer filters. Since the used filters are renormalized to satisfy the Littlewood-Paley condition, the energy contained in each layer decays exponentially (Waldspurger 2016). As a result, deeper layers will contain less and less energy, until all events have been captured and all the next layers are zeros. With this renormalization, inverting one change of basis is instantaneous: simply add up all the coefficients obtained from the filter applications,

x(t) = Σ_{λ ∈ Λ^(1)} (x ∗ ψ_λ)(t).

This last property highlights one motivation of the dimensional increase: event or structure separation. It is now possible to rewrite the treated signal as a combination of basic yet fundamental structures, namely the responses of the signal to the filters, linearizing the events w.r.t. the new λ dimension.
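As an illustration of the cascade just described, here is a minimal NumPy sketch under the notation above (not a released implementation; the filter-bank arrays are placeholders for the Morlet wavelets of Section 3.2):

```python
import numpy as np

def scales(J, Q):
    # Geometric progression of the scale factors in Lambda^(l).
    return 2.0 ** (np.arange(J * Q) / Q)

def layer(prev, filter_bank):
    # One level of the cascade: filter the previous representation with every
    # wavelet of the bank, then apply the complex modulus.
    return [np.abs(np.convolve(prev, psi, mode="same")) for psi in filter_bank]

# x: input signal; bank1, bank2: placeholder lists of time-domain wavelets.
# X1 = layer(x, bank1)                 # scalogram, first representation
# X2 = [layer(b, bank2) for b in X1]   # second-layer representations
```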
"}, {"section_index": "5", "section_name": "1.2.2 SCATTERING COEFFICIENTS", "section_text": "For each of the L representations, one can extract the scattering coefficients by applying a scaling function φ^(l) on X^(l)[x], leading to the time invariant representation S^(l)[x]. The scaling function acts as a smoothing function on a given time support and satisfies

∫ φ^(l)(t) dt = 1 ⟺ φ̂^(l)(0) = 1, ∀l.

In practice, the scaling function is a Gaussian, parameterized in the Fourier domain by its bandwidth σ^(l):

φ̂^(l)(ω) = e^{−ω² / (2 σ^(l)²)}.

The greater the standard deviation in the physical domain, the more time invariant the scattering coefficients are. Ultimately, we reach global time-invariance, and the scattering operator S[x] reduces to an arithmetic mean over the input support. Since only the standard deviation of the scaling function is layer dependent, we denote by φ^(l) the l-th scaling function generated using σ^(l). We can thus define the scattering operators as

S^(0)[x](t) := (X^(0)[x] ∗ φ^(0))(t),
S^(1)_{λ1}[x](t) := (X^(1)_{λ1}[x] ∗ φ^(1))(t), ∀λ1 ∈ Λ^(1),
S^(l)_{λ1,...,λl}[x](t) := (X^(l)_{λ1,...,λl}[x] ∗ φ^(l))(t), ∀λ1 ∈ Λ^(1), ..., λl ∈ Λ^(l).

For machine learning tasks, using global time-invariance yields robust yet biased time invariant descriptors, due to too many possible many-to-one mappings. As a result, a local or windowed scattering transform has been used (Bruna & Mallat 2013), leading to only local time-invariance through a smaller time support for the scaling function. This tweak is possible in computer vision tasks, where each input is of the same size and the role of φ is to bring local diffeomorphism invariance, which has been shown to smooth the underlying manifold (Zoran & Weiss 2011; 2012). However, for audio tasks and more general problems, this constant input size is rare, forcing the use of φ for completely aggregating the time dimension."}, {"section_index": "6", "section_name": "2.1 HIGHER ORDER NONLINEARITY", "section_text": "The usual nonlinearity applied in a scattering network is the complex modulus. This nonlinearity is not everywhere differentiable, but is contractive, leading to an exponential decay of the energy distribution over the layers. However, as pointed out in (Waldspurger 2016), a higher order nonlinearity might be beneficial sparsity-wise and increase the SNR. As a result, we chose to use a continuously differentiable second order nonlinearity, which will have the beneficial property of adapting its contraction for irrelevant inputs while maintaining bounded variations. This nonlinearity is defined as

P[c] = |c|², ∀c ∈ ℂ.

Proof. We now prove the adaptive k-Lipschitz property of our nonlinearity:

||P[a] − P[b]|| = || |a|² − |b|² || = (|a| + |b|) · | |a| − |b| | ≤ (|a| + |b|) · ||a − b|| = K(a, b) ||a − b||.

Since the input signal is renormalized so that ||x||₁ = 1, we have that |a| + |b| ∈ [0, 1[. As a result, given the input constraints, P is a contractive operator with bounded variations. Yet, the degree of contraction varies with the input amplitudes, leading to a better SNR. This means that high amplitudes, resulting from a close match between the filter and the signal, will be efficiently represented, whereas small amplitude coefficients, resulting from noise filtering and mismatches between the filter and the signal, will be highly contracted. Since in practice wavelet filters catch the relevant events, this property allows high quality representations. This change will not only increase the relative sparsity of the representations, but also allow us to perform some major computational tricks, as described in the following section. As a result, we define the new representations as

X^(0)[x](t) := x(t),
X^(1)_{λ1}[x](t) := P[(X^(0)[x] ∗ ψ^(1)_{λ1})(t)], ∀λ1 ∈ Λ^(1),
X^(l)_{λ1,...,λl}[x](t) := P[(X^(l−1)_{λ1,...,λ_{l−1}}[x] ∗ ψ^(l)_{λl})(t)], ∀λ1 ∈ Λ^(1), ..., λl ∈ Λ^(l),

as well as the new scattering coefficients as

S^(0)[x](t) := (X^(0)[x] ∗ φ^(0))(t),
S^(1)_{λ1}[x](t) := (X^(1)_{λ1}[x] ∗ φ^(1))(t), ∀λ1 ∈ Λ^(1),
S^(l)_{λ1,...,λl}[x](t) := (X^(l)_{λ1,...,λl}[x] ∗ φ^(l))(t), ∀λ1 ∈ Λ^(1), ..., λl ∈ Λ^(l)."}, {"section_index": "7", "section_name": "2.2 INVARIANT DISPERSION COEFFICIENTS", "section_text": "The scattering coefficients used to characterize the signal of interest are known to be efficient for stationary inputs, but not descriptive enough otherwise. We thus propose to generate complementary invariant coefficients based on a dispersion measure, the variance. As a result, these complementary coefficients, derived from the second order moment, will help to characterize the input, leading to more discriminative features while maintaining global time invariance.
We now define these invariant dispersion coefficients as

V^(0)[x] := ||X^(0)[x] − S^(0)[x]||₂²,
V^(1)_{λ1}[x] := ||X^(1)_{λ1}[x] − S^(1)_{λ1}[x]||₂², ∀λ1 ∈ Λ^(1),
V^(l)_{λ1,...,λl}[x] := ||X^(l)_{λ1,...,λl}[x] − S^(l)_{λ1,...,λl}[x]||₂², ∀λ1 ∈ Λ^(1), ..., λl ∈ Λ^(l).

The resulting V^(l)[x] coefficients are thus globally time invariant, whatever scaling function was used to compute S^(l)[x]. In fact, these invariant dispersion coefficients represent the variance between the X^(l)[x] and S^(l)[x] representations, whether S^(l)[x] was globally time invariant or a smoothed version of X^(l)[x]. In order for V^(l)[x] to be invariant to random permutations as well, S^(l)[x] should be globally translation invariant, and thus also globally invariant to random permutations. In addition, regarding the discriminative ability gained through the use of this second order statistic, we have that

card({s ∈ L²(ℂ) | S = (k1, ..., kn)}) ≥ card({s ∈ L²(ℂ) | S = (k1, ..., kn) and V = (p1, ..., pn)}),

where S represents a realization of the scattering coefficients for all layers and all frequency bands, and V a realization of the dispersion coefficients, again for all layers and all frequency bands. From this, it follows that the set of invariant coefficients (S[x], V[x]) is more discriminative, leading to a more precise data description than when using (S[x]) only. The development of these invariant dispersion coefficients opens the door to the derivation of uncountably many new invariant coefficients. We now present the elaboration of the scheme in the Fourier domain and the computational tricks involved in order to reach linear complexity.
One of the great benefits of the wavelet transform is the induced sparsity of the representation for certain classes of signals (Elad & Aharon 2006; Starck et al. 2010), which is seen as a quality criterion of the representation (Coifman et al. 1992). In addition to providing sparse representations, wavelets are localized filters in the time and frequency domains. However, the idea of exploiting this known sparsity in order to reduce the computational cost of performing a transformation has not been leveraged yet. When dealing with the standard time domain, one cannot know a priori where the filter will match the signal or not, and thus where the nonzero coefficients are, leading to no way to obtain computational gains. On the other hand, applying the filter in the Fourier domain reduces to a Hadamard product, and thus the resulting support is deduced from the filter support, which is known to be localized. As a result, it is now possible to know a priori most of the zero coefficient positions, since in the Fourier domain the filter is well localized. Also, in the Fourier domain, the wavelet support is convex and compact, but most importantly it can be computed a priori given the scale parameter and the mother wavelet. This motivates our choice to perform our framework, including the wavelet transform, the nonlinearity P and the invariant feature extraction, in the Fourier domain, leading to linear complexity. Furthermore, using the Fourier domain allows us to efficiently leverage sparse matrices, leading to efficient storage and memory management on energy efficient platforms such as presented in (Esser et al. 2015).
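To summarize Sections 2.1-2.2 operationally, a minimal NumPy sketch for the global time-invariance case (illustrative only, not the paper's implementation):

```python
import numpy as np

def P(c):
    # Second order nonlinearity of Section 2.1: squared complex modulus.
    return np.abs(c) ** 2

def S_and_V(band):
    # band: one frequency band of a layer representation X^(l)[x], shape (N,).
    S = band.mean()                      # scattering coefficient (order 1)
    V = np.sum(np.abs(band - S) ** 2)    # dispersion coefficient (order 2)
    return S, V
```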
We will first present the computation of the filters in the Fourier domain, as well as the derivation of their convex compact support. From that, we present the sparse application of the filters and how to compute the nonlinearity in Fourier. Finally, we will see that extracting the invariant features can be done efficiently, leading to our main result, which is a linear complexity overall framework. Concerning the Fourier transform, the Danielson-Lanczos lemma (Flannery et al. 1992) will be used in order to provide a true O(N log(N)) complexity for an input of size N which is a power of 2. As we will see, this requirement will always be fulfilled without any additional cost."}, {"section_index": "8", "section_name": "3.2 SPARSE FOURIER FILTERS", "section_text": "One particularity of continuous wavelets, such as DoG or Morlet wavelets, resides in their localized compact support in the Fourier domain. In our description the used wavelet will be a Morlet wavelet, but this analysis can be extended to any continuous wavelet with an analytical form. We define the support of a filter given the threshold ε as

supp_ε[ψ̂_λ] := {ω | ψ̂_λ(ω) > ε, ω ∈ [0, 2π]}.

As presented in (Balestriero et al. 2015), the scales entirely define the support of each wavelet. In order to develop synergistic computational tricks, we derive our framework in the Fourier domain. Let us define the Morlet wavelet as

ψ̂_{ω0,σ0}(ω) = H(ω) e^{−(ω−ω0)² / (2σ0²)},

where the parameters ω0 and σ0 represent respectively the center frequency and the bandwidth of the mother wavelet, and H is the step-wise (Heaviside) function. The ratio between these two quantities remains the same among all the filters; in fact, wavelets have a constant ratio of bandwidth to center frequency. These two mother hyper-parameters are taken as

ω0 = π (2^{−1/Q} + 1) / 2,   σ0 = 3 (1 − 2^{−1/Q}).

Yet, instead of using the definition of scaling as defined in Section 1.2.1, we use these two parameters as follows:

ψ̂_{ω0,σ0}(λω) = ψ̂_{ω0/λ, σ0/λ}(ω) := ψ̂_λ(ω).

In fact, we have the following relation between the scale and the mother hyper-parameters:

ψ̂_{ω0,σ0}(λω) = H(λω) e^{−(λω−ω0)²/(2σ0²)} = H(ω) e^{−λ²(ω−ω0/λ)²/(2σ0²)} = H(ω) e^{−(ω−ω0/λ)²/(2(σ0/λ)²)}.

Given this, we can compute explicitly the support of every filter ψ̂_λ; as one can notice, these filters have a convex compact support around their center frequencies:

supp_ε[ψ̂_λ] = [ ω0/λ − (σ0/λ)√(−2 log(ε)),  ω0/λ + (σ0/λ)√(−2 log(ε)) ].

Figure 2: Filter-bank in Fourier with 4 wavelets per octave on 4 octaves (J = 4, Q = 4). The sparsity is of 0.94% with ε = 0.01.

In Fig. 2 one can see a filter-bank example where all the filters are presented in one plot, demonstrating the important sparsity inherent to wavelets. Varying the ε parameter directly affects the number of nonzero coefficients; we thus also present in Table 1 the exact sparsity for different input sizes and ε parameters. Since the support is known a priori, it is straightforward to optimize the computation and allocation of the filters through sparse vectors, leading to fast filter-bank generation, as presented in Table 2 for large input sizes. In fact, the input length defines the size of the generated filters, since we now perform the convolution in the Fourier domain.

Table 1: Sparsity in percentage of the Morlet filter-bank in Fourier for different signal sizes and realistic parameters (J = 9, Q = 16).

                       ε = 0.0001    ε = 0.0000001
N = 524288  (2^19)     98.39883%     97.89803%
N = 1048576 (2^20)     98.39893%     97.89812%
N = 2097152 (2^21)     98.39898%     97.89816%

Table 2: Time (in sec.) needed to compute the filter-bank given the signal size, with standard parameters J = 5, Q = 16, growing linearly with the input size on 1 CPU @1.4GHz.
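A minimal NumPy sketch of this construction follows (an illustration, not the paper's released code; note that the closed forms used for ω0 and σ0 are our reconstruction of a garbled formula, so treat them as assumptions):

```python
import numpy as np

def morlet_bank(N, J, Q, eps=1e-2):
    # Frequency grid over [0, 2*pi) for a length-N signal.
    w = 2 * np.pi * np.arange(N) / N
    w0 = np.pi * (2 ** (-1 / Q) + 1) / 2   # mother center frequency (assumed)
    s0 = 3 * (1 - 2 ** (-1 / Q))           # mother bandwidth (assumed)
    filters = []
    for lam in 2.0 ** (np.arange(J * Q) / Q):   # geometric scale progression
        psi = np.exp(-((w - w0 / lam) ** 2) / (2 * (s0 / lam) ** 2))
        psi[w > np.pi] = 0.0   # H(w): keep only [0, pi], analytic filters
        psi[psi <= eps] = 0.0  # epsilon support -> naturally sparse vector
        filters.append(psi)
    return np.array(filters)

bank = morlet_bank(N=2 ** 14, J=4, Q=4)
print("sparsity:", 1 - np.count_nonzero(bank) / bank.size)
```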
Concerning the φ^(l) filter, given its bandwidth σ^(l), its support is given by

supp_ε[φ̂^(l)] = [0, σ^(l) √(−2 log(ε))].

Some examples are shown in Fig. 8, where different σ^(l) are selected, representing different time supports. For each of the filters and each of the layers, the application is done through an element-wise multiplication in the Fourier domain, as explained below, where the nonlinearity is defined.
The nonlinearity used in this framework, defined in Section 2.1, is efficiently computed in the Fourier domain through the following property:

F[|x|²] = F[x] ⋆ F[x],

i.e. the autocorrelation of the spectrum (up to the DFT normalization). If done directly, this operation would be slower in the Fourier domain, since we jump from a linear to a quadratic complexity. However, one should notice from Section 3.2 that we are dealing with F[x], which is extremely sparse, but most importantly has a convex compact support of size M << N. Exploiting this sparsity could lead to a faster convolution, which would still be of quadratic complexity w.r.t. the support size. However, using the convolution theorem, it is possible to perform this convolution in M log(M) complexity by applying again a Fourier transform, now only on the convex compact support of F[x]. In order to have proper boundary conditions, and not the periodic assumption of the Fourier transform, we use a zero-padded version of size 2M instead of M, leading to an exact computation of the convolution; the support size of 2M is the minimum required size. For a fast Fourier transform algorithm, this has to be a power of 2. As a result, in practice, the zero padding is done to reach the size which is the smallest power of 2 greater than 2M, defined as

2^{⌈log₂(2M)⌉},

where ⌈log₂(2M)⌉ denotes the smallest of the greater integers (the ceiling). As a result, in the Fourier domain we apply another Fourier transform in order to compute this autocorrelation, which corresponds to the desired nonlinearity in the time domain:

F[|x|²] = F^{−1}[ F[F[x]] ⊙ conj(F[F[x]]) ] (up to the DFT normalization),

where ⊙ is the Hadamard product, F is the Discrete Fourier Transform and F^{−1} its inverse operator. In addition to the second Fourier transform being applied on a really small support, it is also important to note that after application of the nonlinearity the output is conjugate symmetric in the Fourier domain; but since the filter-banks are always applied on [0, π], we can store only this part for further computations and re-generate the conjugate symmetric part when applying φ^(l). We present this operation in the Fourier domain in Fig. 7.
In order to highlight the high sparsity encountered in the Fourier domain with this filter application, we present in Fig. 3 an example where the nonzero elements are shown. This corresponds to the first representation, namely X^(1)[x]. For the second representation, the input support is not over the whole [0, 2π] domain but around 0, which implies increased sparsity, as demonstrated in Fig. 9. Given these two descriptions, one is able to compute X^(l)[x] for any l. We thus now present how to compute the scattering and dispersion coefficients given these representations in the Fourier domain.
Figure 3: Nonzero elements (shown in black) after application of the filters and of the nonlinearity operator P, for J = 5, Q = 8 (sparsity 0.95%). The x-axis corresponds to the frequency ω and the y-axis to the filter index; the supports follow the geometric progression of the wavelet scales, each row corresponding to one ψ̂^(1)_λ.
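As a sanity check of this identity, here is a self-contained sketch (not the paper's code; for clarity it applies the second FFT to the full spectrum rather than only to the zero-padded compact support):

```python
import numpy as np

N = 1024
rng = np.random.default_rng(0)
x = rng.normal(size=N) + 1j * rng.normal(size=N)   # stand-in for one band
X = np.fft.fft(x)

# Direct route: back to time, square the modulus, transform again.
direct = np.fft.fft(np.abs(x) ** 2)

# Fourier-only route: autocorrelation of the spectrum via a second FFT
# (convolution theorem); the 1/N accounts for the DFT normalization.
A = np.fft.fft(X)
fourier_only = np.fft.ifft(A * np.conj(A)) / N

assert np.allclose(direct, fourier_only)
```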
"}, {"section_index": "9", "section_name": "3.4 SCATTERING COEFFICIENTS EXTRACTION", "section_text": "The scattering coefficients S^(l)[x] result from the application of a Gaussian filter φ^(l) parameterized by its standard deviation. In the general case where global time invariance is required, this standard deviation is taken to be infinite in the time domain, resulting in

S^(l)[x] = (1/N) Σ_{t=1}^{N} X^(l)[x](t).

In this case, the corresponding result in Fourier is given by

S^(l)[x] = F[X^(l)[x]](0) / N,

i.e. the DC coefficient of the spectrum divided by the input size. The invariant dispersion coefficients are extracted from the Fourier transform in a straightforward manner, as shown in the Appendix, which results in

||X^(l)[x] − S^(l)[x]||₂² = (1/N) ||F[X^(l)[x]] ⊙ (1 − F[φ^(l)])||₂².

Thus (1 − F[φ^(l)]) acts as a mask that reduces the norm computation by the amount of energy captured through the scaling function application. For the case where we have global time invariance, i.e. an infinite standard deviation, this mask reduces to

φ̂(ω) = δ(ω),

where δ denotes the Dirac function. As a result, the dispersion coefficients can be calculated as

V^(l)[x] = (2/N) Σ_{ω=1}^{N/2} |F[X^(l)[x]](ω)|²,

which is the L2 norm computed without taking into account the coefficient at ω = 0, exploiting the conjugate symmetry structure of the real input signal x. Conceptually, the V coefficients capture the remaining energy, and thus ensure that for any depth of the scattering network, all the energy of the input is contained inside the computed invariants. In fact, one can see that V^(l)[x] = ||X^(l)[x]||₂² − N (S^(l)[x])², so that the scattering and dispersion coefficients together account for the total energy of each band.
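A minimal sketch of this Fourier-domain extraction for one band (illustrative; the normalizations follow the DFT conventions used above):

```python
import numpy as np

def invariants_fourier(X_hat):
    # X_hat: DFT of one band of a real-valued layer representation, length N.
    N = len(X_hat)
    # Globally time-invariant scattering coefficient: the DC bin over N.
    S = X_hat[0].real / N
    # Dispersion coefficient: energy of all non-DC bins (Plancherel), using
    # conjugate symmetry to sum only half of the spectrum.
    V = 2 * np.sum(np.abs(X_hat[1:N // 2]) ** 2) / N
    V += np.abs(X_hat[N // 2]) ** 2 / N if N % 2 == 0 else 0.0  # Nyquist bin
    return S, V
```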
"}, {"section_index": "10", "section_name": "3.5 SCALABILITY", "section_text": "We now present some results and figures in order to highlight the high scalability of our framework with respect to the input size. Note that the current implementation is done in Python; implementing this toolbox in C is future work which will lead to even better results than the ones displayed below, which are nevertheless already impressive. First of all, one can see in Fig. 4 that the number of nonzero coefficients increases linearly with the input size. This result is important in today's paradigm, where technologies allow extreme sampling frequencies and thus input signals of high dimension, yet we aim to save as much memory and storage as possible. If we put these nonzero coefficients in perspective with the total possible number of coefficients, we obtain our sparsity coefficient, which grows logarithmically with the input size, as shown in Fig. 4. This result shows that the advantage of using sparse matrices increases as the input size increases. The sparsity is thus, in our case, a justification to exploit the Fourier domain.
Figure 4: Left: the increase in the number of nonzero coefficients is linear with respect to the input size. Right: the increase in sparsity of our representations for the two layers is logarithmic with respect to the input size.
Finally, in Fig. 5 are presented some computation times for different input signals. We can see in this figure the high efficiency of our approach, put in perspective with an existing C implementation of the scattering network. In fact, in the latter, one had to perform multiple inverse Fourier transforms in order to apply the nonlinearity and, in order to compute the second layer for example, apply again a Fourier transform, and this for all the frequency bands. As a result, the previously fastest known algorithm was of asymptotic complexity O(N log(N)) even with a Fourier input. In addition, it did not leverage the sparsity of the representation, leading to poor memory management and storage. Note however that, for all the existing implementations, the complexity is linear with respect to the J and Q parameters. Finally, with our framework, one can directly store the sparse matrices of the representations, leading to space savings on the hard drive in addition to the actual Random Access Memory (RAM) savings during the computation.
Figure 5: Computation time (in sec., on 1 CPU @0.8GHz, J = 5, Q = 16) needed to perform the transform for the first X^(1)[x] and second X^(2)[x] layers. The needed computation time is more than 16 times smaller than that of the known Scatnet toolbox implemented in C (github.com/Randa11Balestriero/CIGAL_GUI), of O(N log(N)) time complexity, even when dealing with Fourier domain inputs. This shows the advantage of our approach, which here is implemented in Python only.
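As a small illustration of the storage argument, a sketch using SciPy's sparse containers (the shapes are arbitrary, not taken from the paper):

```python
import numpy as np
from scipy import sparse

dense = np.zeros((64, 2 ** 20))            # one layer representation in Fourier
dense[:, :500] = np.random.rand(64, 500)   # compact supports near omega = 0
compact = sparse.csr_matrix(dense)         # keep only the nonzero coefficients
sparse_bytes = compact.data.nbytes + compact.indices.nbytes + compact.indptr.nbytes
print(dense.nbytes / 1e6, "MB dense vs", sparse_bytes / 1e6, "MB sparse")
```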
"}, {"section_index": "11", "section_name": "4.1 DATASET PRESENTATION", "section_text": "We now validate the use of the scattering coefficients as features for signal characterization through a supervised audio recording classification problem. The bird song classification challenge is made of 50 classes and corresponds to a small version of the BirdCLEF Challenge (Joly et al. 2015). The recordings come from the Xeno-Canto database and mainly focus on species from South America. For our task, the dataset used for training is composed of 924 files sampled at 44100 Hz. The validation set used to present our classification accuracy contains about 400 files. Computing the S[x] and V[x] features on the training and validation sets takes between 2 to 3 hours, depending on the set of parameters used, with a 2-layer architecture on 1 CPU. The files add up to a disk usage of 4.2 Go; the computed set of features, however, represents 450 Mo. As a result, we are able to encode and extract important characteristics of the signals while reducing the amount of redundant information. We present in Fig. 10, 11, 12 examples of the dataset with the waveform as well as the representation X^(1)[x]. The aim is first to demonstrate the sparsity, or high SNR, in the physical domain brought by using a second order nonlinearity. In addition, one can see the different frequency modulated responses that could characterize a bird song. Overall, there are some fundamental difficulties in this task. The first challenge is to deal with translation invariance: the bird songs can be captured anywhere inside each file, and the files themselves are of many different durations, from seconds to minutes. The second difficulty resides in characterizing well enough the time-frequency patterns of each species without being altered by the ambient noise or the possible presence of other species, including human voices. Finally, difficulties also arise from the machine learning point of view, with a large class imbalance in the dataset.
Figure 6: Classification MAP for a two-layer architecture with different second-layer parameters, (J^(2) = 12, Q^(2) = 16) and (J^(2) = 9, Q^(2) = 8), with (J^(1) = 5, Q^(1) = 16), over 10 runs (min and max shown). Also presented are the results when using only the scattering coefficients S[x], only the dispersion coefficients V[x], and the concatenation of both; the best MAP is 52%.
We now present the classification results obtained via our developed framework. First of all, no additional features have been engineered, and thus only the features developed in the paper are used. For the classification part, we decided to use a fast algorithm in accordance with the whole scheme developed earlier, and thus used random forests (Liaw & Wiener 2002). In short, random forests use bagging of decision trees (Breiman 1996) and are thus able to aggregate multiple weak classifiers to end up with efficient class boundaries. One of their drawbacks resides in the fact that they can only create decision rules on each feature dimension, without combining them as a logistic regression could, for example. In addition, we used a weighted loss function in order to deal with the imbalanced dataset (Van Hulse et al. 2007); a minimal sketch of this setup is given below. Finally, no additional pre-processing/denoising has been used, and no feature extraction/selection technique has been piped in. Yet, with this basic approach, we were able to reach an accuracy of 47.6% and a Mean Average Precision (MAP) of 52.4%. The state-of-the-art technique for this problem reached a MAP of 53% (Cha 2016). We present in Fig. 6 some accuracy results where two sets of parameters have been used for the second layer of the scattering network. In addition, we show the classification results when using each feature set independently, and the combination of the two, in order to highlight their complementarity. Given the deterministic transformation used and the lack of cross-validation and fine tuning, we think of these results as promising overall, while being state-of-the-art when considering solutions where no learning was involved outside of the classifier. For example, one extension on the classifier could be to use boosting algorithms (Schapire et al. 1998) or neural networks. Concerning the representation, performing cross-validation on the parameters could lead to great improvements; finally, a third scattering layer could also be considered.
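The minimal sketch mentioned above (assuming scikit-learn; the feature matrices are random placeholders standing in for the precomputed (S[x], V[x]) invariants, not the paper's code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder feature matrices: one row of concatenated (S[x], V[x]) invariants
# per recording, with integer species labels (50 imbalanced classes).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(924, 256)), rng.integers(0, 50, 924)
X_val, y_val = rng.normal(size=(400, 256)), rng.integers(0, 50, 400)

clf = RandomForestClassifier(
    n_estimators=500,
    class_weight="balanced",  # weighted loss against the class imbalance
    n_jobs=-1,
)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```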
Concerning the represen tation, performing cross-validation on the parameters could lead to great improvements as finally a third scattering layered could also be considered."}, {"section_index": "12", "section_name": "5 CONCLUSION", "section_text": "We presented an extension of the scattering network in order to provide more discriminative time in. variant features which are complementary to each others. The derivation of a second order invariant. operator as well as the use of a second order nonlinearity in the layered representation computation led to efficient characterization of audio signals opening the door of many more possible time in. variant features derivation. The whole framework has been derived in the Fourier domain in order tc. reach linear complexity in the input size as it is now possible to compute all the layer representations. and feature without leaving the Fourier domain. Sparse storage is also a milestone of our algorithm. leading to not only efficient computation but smart memory management allowing this framework. to be applied online on energy efficient laptops and chips. In addition, only simple arithmetic op. erations are used and parallel implementation can be done easily as well as GPU portage. This. framework can be applied without any assumption on the input signal and thus aims to be as general. as possible as a unsupervised invariant feature extraction. Finally, we hope to bring the considera. tion of sparse filters and Fourier based computation for deep convolutional networks. In fact, as the datasets get larger and larger, the complexity of the networks increase and convolutions might no1. be efficiently computed in the physical domain anymore. Since the convergence of the filter ensure. their sparsity and smoothness, this consideration might help to bring deep learning to the family ol. scalable algorithms with the development of Fourier networks as a whole.."}, {"section_index": "13", "section_name": "ACKNOWLEDGEMENT", "section_text": "We thank Institut Universitaire de France for the Glotin's Chair in 'Scene Analysis'. We thank SABIOD.ORG Mission Interdisciplinaire of the CNRS and Alexis Joly for co-organisation of the LifeClef Bird Challenge. We also want to thank Mr. Romain Cosentino for his reviewing work and his help in bringing back in the physical domain some of our original sentences."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Randall Balestriero et al. Scattering decomposition for massive signal classification: from theory to fast algorithm and implementation with validation on international bioacoustic benchmark. In 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pp. 753-761. IEEE, 2015.\nMichael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image processing, 15(12):3736-3745, 2006.\nAndy Liaw and Matthew Wiener. Classification and regression by randomforest. R news, 2(3): 18-22, 2002.\nStephane Mallat. A wavelet tour of signal processing. Academic press, 1999\nStephane Mallat. Understanding deep convolutional networks. Phil. Trans. R. Soc. A, 374(2065) 20150203, 2016.\nStephane G Mallat. A theory for multiresolution signal decomposition: the wavelet representation IEEE transactions on pattern analvsis and machine intelligence. 11(7):674-693. 1989\nRobert E Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. 
Annals of Statistics, pp. 1651-1686, 1998.
Jean-Luc Starck, Fionn Murtagh, and Jalal M Fadili. Sparse image and signal processing: wavelets, curvelets, morphological diversity. Cambridge University Press, 2010.
Alexis Joly, Herve Goeau, Herve Glotin, Concetto Spampinato, Pierre Bonnet, Willem-Pier Vellinga, Robert Planque, Andreas Rauber, Simone Palazzo, Bob Fisher, et al. Lifeclef 2015: multimedia life species identification challenges. In International Conference of the Cross-Language Evaluation Forum for European Languages, pp. 462-483. Springer, 2015.
Stephane Mallat. Group invariant scattering. Communications in Pure and Applied Mathematics, vol. 65, no. 10, pp. 1331-1398, 2012.
Daniel Zoran and Yair Weiss. Natural images, gaussian mixtures and dead leaves. In Advances in Neural Information Processing Systems, pp. 1736-1744, 2012.
Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In 2011 International Conference on Computer Vision, pp. 479-486. IEEE, 2011."}, {"section_index": "15", "section_name": "3 ADDITIONAL MATERIAL AND BIRD SONG REPRESENTATIONS", "section_text": "Using these three examples, we also present in Fig. 13 the resulting features computed on the first two layers of the scattering network, in order to highlight the possibly linear hyperplanes separating these 3 species in this new feature space.
The Fourier-domain expression of the dispersion coefficients used in Section 3.4 follows from this derivation:

||X^(l)[x] − S^(l)[x]||₂²
= ∫ (X^(l)[x](t) − S^(l)[x](t)) (X^(l)[x](t) − S^(l)[x](t))* dt
= ∫ g(t) g(t)* dt,   with g(t) = X^(l)[x](t) − S^(l)[x](t),
= ∫ F[g](ω) F[g]*(ω) dω   (Plancherel theorem)
= ||F[g]||₂²
= ||F[X^(l)[x]] − F[S^(l)[x]]||₂²   (linear operator).

Figure 8: Some possible φ filters in the Fourier domain, corresponding to Gaussian filtering with a bandwidth in the physical domain inversely proportional to the σ in the Fourier domain (here σ = 0.01, 0.1, 0.3), without renormalization.
Figure 7: Demonstration of the P computation in the Fourier domain. Top: input after application of a specific filter. Middle: extracted window with the nonzero elements and optimal padding, greater than the minimum size, up to the next power of 2. Bottom: result of the convolution done through another Fourier transform and the convolution theorem; the kept coefficients are from 0 to M, since
they are followed by zeros and by the complex conjugate of these coefficients, leading to optimal results.
Figure 9: Nonzero elements present after application of the filters and of the nonlinearity operator P on this representation (J = 5, Q = 8, sparsity 0.99%), for a sparse input.
Figure 10: Example 1: transform X^(1)[x] (file XC325365, European Goldfinch, Carduelis carduelis; J = 5, Q = 16). In this case, clear frequency modulations appear for only one source, with high SNR. The noise is contracted to 0 through the nonlinearity.
Figure 11: Example 2: transform X^(1)[x] (file XC325389, Crested Lark, Galerida cristata; J = 5, Q = 16). In this case, the source presents multiple kinds of chirps and frequency modulated patterns. Some harmonics are detected, yet it is clear that aggregating the time dimension with this representation only would mix the different patterns, leading to poor signal characterization, hence the need for X^(2)[x].
Figure 12: Example 3: transform X^(1)[x] (file XC325371, Brown-headed Cowbird, Molothrus ater artemisiae; J = 5, Q = 16). In this case only transients are present and almost no frequency modulation appears in the features. This kind of signal will be well captured with one layer only.
Figure 13: We present here the features extracted from the 3 examples presented in Fig. 10, 11, 12. The left part contains the scattering coefficients encoding the arithmetic mean, whereas the right part concerns the dispersion coefficients. In the top part the features are extracted from the first layer, and in the bottom part from the second layer. It is clear that, for these signals, the features of the first layer are enough to discriminate them. Notice that, through global time invariance, one ends up with feature vectors of exactly the same dimension for each signal, and that they would remain the same if the input signal were translated."}]
r1GKzP5xx
[{"section_index": "0", "section_name": "RECURRENT NORMALIZATION PROPAGATION", "section_text": "Cesar Laurent. Nicolas Ballas & Pascal Vincent\nMontreal Institute for Learning Algorithms (MILA). Departement d'Informatique et de Recherche Operationnelle Universite de Montreal. Montreal. Ouebec. Canada.\nfirstname.lastname}@umontreal.ca\nWe propose an LSTM parametrization that preserves the means and variances of the hidden states and memory cells across time. While having training benefits similar to Recurrent Batch Normalization and Layer Normalization, it does not need to estimate statistics at each time step, therefore, requiring fewer computa tions overall. We also investigate the parametrization impact on the gradient flows and present a way of initializing the weights accordingly.\nWe evaluate our proposal on language modelling and image generative modelling tasks. We empirically show that it performs similarly or better than other recurrent normalization approaches, while being faster to execute."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recurrent neural network have shown remarkably good performances for sequential modelling task including machine translation (Bahdanau et al., 2015), visual captioning (Xu et al., 2015; Yao et al. 2015) or question answering (Hermann et al., 2015). However, such models remain notoriously hard to train with gradient backpropagation. As the number of time steps in the input sequence increases, the contractive or expanding effects associated with the state-to-state transformation a each time step can shrink or grow exponentially, leading respectively to vanishing or exploding gradients (Hochreiter, 1991; Bengio et al., 1994; Pascanu et al., 2012). In particular, with gradi ent vanishing, states at a given time are not influenced by changes happening much earlier in the sequence, preventing the model from learning long-term dependencies.\nWhile the long-term dependencies problem is unsolvable in absolute (Hochreiter, 1991; Bengio. et al., 1994), different RNN parameterizations, such as LSTM or GRU (Hochreiter & Schmidhuber.. 1997; Cho et al., 2014) can help mitigate it. Furthermore, the LSTM parametrization has been. recently extended to include layer-wise normalization (Cooijmans et al., 2016; Ba et al., 2016),. building upon Batch Normalization (BN) (Ioffe & Szegedy, 2015). By normalizing the hidden state. distributions to a fix scale and shift through the different time steps, normalized LSTMs have been. shown to ease training. resulting in a parametrization that converges faster than a standard LSTM\nHowever, normalized LSTM introduces extra-computations as it involves standardizing the hidder states, enforcing their means and variances at each time step. By contrast, we propose an LSTM. reparametrization that allows by construction to cheaply preserve the normalization of the hidden states through time. Our approach can be seen as the recurrent counterpart to the recent normal-. ization propagation applied in feed-forward network (Arpit et al., 2016). It results in faster training. convergence similar to Layer Normalization (LN) and Recurrent Batch Normalization while requir-. ing fewer operations per time step and generalizing naturally to variable length sequences..\nIn addition, we investigate the impact of our parametrization, and more generally of normalizec LSTM, on the vanishing and exploding gradient problems. We observe that layer-wise normalizatior. 
provides a direct way to orient the LSTM behaviour toward either gradient explosion or vanishing, and therefore biases the LSTM either towards reliably storing bits of information throughout time or towards being more sensitive to new input changes.
*Associate Fellow, Canadian Institute For Advanced Research (CIFAR)"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We empirically validate our proposal on character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993) and on image generative modelling, applying our normalisation to the DRAW architecture (Gregor et al., 2015).
The paper is structured as follows: section 2 provides a brief overview of the Batch-Normalized LSTM, in section 3 we derive our Normalized LSTM, section 4 investigates the impact of such normalization on the gradient flow, section 5 presents some experimental results, and we conclude in section 5."}, {"section_index": "3", "section_name": "2.1 BN-LSTM", "section_text": "Batch-Normalized Long Short-Term Memory (BN-LSTM) (Cooijmans et al., 2016) is a reparametrization of LSTM that takes advantage of Batch Normalization (BN) to address the Covariate Shift (Shimodaira, 2000) occurring between time steps. Changes in the LSTM output at one time-step are likely to cause correlated changes in the summed inputs of the sequence's next time-steps. This Temporal Covariate Shift can slow down the training process, as the parameters of the model must not only be updated to minimize the cost of the task at hand, but must also adapt to the changing distribution of the inputs. In other words, the later time steps in an LSTM need to account for the shifting distribution of the previous hidden states.
BN-LSTM proposes to reduce this temporal covariate shift by fixing the mean and the variance at each time step, relying on the BN transform

BN(x; γ, β) = β + γ ⊙ (x − Ê[x]) / √(V̂ar[x] + ε),

where Ê[x], V̂ar[x] are the activation mean and variance estimated from the mini-batch samples. Given an input sequence X = (x1, x2, ..., xT), the BN-LSTM defines a sequence of hidden states ht and memory cell states ct according to

(ĩt, f̃t, õt, g̃t) = BN(Wx xt; γx, βx) + BN(Wh ht−1; γh, βh) + b
ct = σ(ĩt) ⊙ tanh(g̃t) + σ(f̃t) ⊙ ct−1
ht = σ(õt) ⊙ tanh(BN(ct; γc, βc)),

where Wh ∈ R^{dh×4dh}, Wx ∈ R^{dx×4dh}, b ∈ R^{4dh}, and the initial states h0 ∈ R^{dh}, c0 ∈ R^{dh} are model parameters. σ is the logistic sigmoid function, and ⊙ denotes the Hadamard product.
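For concreteness, a minimal NumPy sketch of the BN transform above (illustrative only; per-feature mini-batch statistics, with ε for numerical stability):

```python
import numpy as np

def bn(x, gamma, beta, eps=1e-5):
    # x: pre-activations for one time step, shape (batch, features).
    mean = x.mean(axis=0)   # E[x] over the mini-batch
    var = x.var(axis=0)     # Var[x] over the mini-batch
    return beta + gamma * (x - mean) / np.sqrt(var + eps)
```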
Ba et al. (2016) later extended this parametrization by estimating the normalizing statistics (E[x], Var[x]) using the different feature channels rather than the mini-batch samples, in order to naturally generalize to variable length sequences."}, {"section_index": "4", "section_name": "2.2 NORMALIZATION PROPAGATION", "section_text": "While increasing the training convergence speed relative to a standard LSTM (Cooijmans et al., 2016), BN-LSTM needs to perform more computations per sample, as it requires computing the BN transform 3 times at each time step.
On the other hand, Normalization Propagation (Norm Prop) (Arpit et al., 2016) aims at preserving the normalization of the input throughout the network. Unlike BN, the normalization doesn't rely on the statistics of the mini-batch. Instead, it is the structure of the network itself that maintains the normalization. We therefore propose an LSTM reparametrization that preserves the normalization through the different time steps in order to avoid those extra computations."}, {"section_index": "5", "section_name": "3 NORMALIZED LSTM", "section_text": "While the Norm Prop properties are appealing for recurrent models, its application to LSTM is not straightforward due to the memory cell structure. In this section we show how to derive an LSTM reparametrization that preserves the normalization of the state h through time.
Following (Arpit et al., 2016; Salimans & Kingma, 2016), we will attempt to ensure, through an analytical reparametrization, that several intermediate quantities in the computation remain approximately standardized. We first compensate for the distribution changes induced by the weight matrices in the gates and cell candidate gt computations:

(ĩt, f̃t, õt, g̃t) = γx (Wx / ||Wx,i||₂) xt + γh (Wh / ||Wh,i||₂) ht−1 + b,    (5)

where ||W·,i||₂ is the vector of L2 norms of each line of the matrix, and γx and γh are trainable rescaling factors that restore the representation power lost in the rescaling of the weight matrices. To preserve the constant error carousel mechanism of the LSTM, we use the usual cell update

ct = σ(it) ⊙ tanh(gt) + σ(ft) ⊙ ct−1.    (6)

Assuming the different terms of equation 6 to be independent¹ and using the variance product rule for independent variables,

Var[X ⊙ Y] = Var[X] Var[Y] + Var[X] E[Y]² + E[X]² Var[Y],    (7)

the variance of the cell state converges to

Var[ct] = Var[tanh(gt)] (Var[σ(it)] + E[σ(it)]²) / (1 − Var[σ(ft)] − E[σ(ft)]²).    (8)

We can analytically or numerically compute the mean and variance of each of those elements, assuming that both the input xt and the hidden state ht−1 are independently drawn from N(0, 1):

E[it] = E[σ(γx zx + γh zh)],   Var[it] = Var[σ(γx zx + γh zh)],
E[gt] = E[tanh(γx zx + γh zh)],   Var[gt] = Var[tanh(γx zx + γh zh)],    (10)-(12)

where zx, zh ∼ N(0, 1). The statistics of the gates ot and ft can be computed in a similar way. We can then compute the value to which Var[ct] converges. Using this variance estimate, we compensate ct in order to compute the next hidden state ht:

ht = σ(ot) ⊙ tanh(γc ct / √Var[ct]).    (13)

Since we assumed that Var[ht−1] ≈ 1, to ensure it we need to correct for the variance induced by the product of the tanh with the output gate. Using again the variance product rule (equation 7), we obtain

Var[ht] = Var[tanh(γc ct / √Var[ct])] (Var[σ(ot)] + E[σ(ot)]²).    (14)

¹This assumption is strong, but we don't have any easy way to model the covariance between those terms without estimating it from the data.

Using equations 5, 6 and 13, we propose the following reparametrization of the LSTM, simply called the Normalized LSTM:

(it, ft, ot, gt) = γx (Wx / ||Wx,i||₂) xt + γh (Wh / ||Wh,i||₂) ht−1 + b    (15)
ct = σ(it) ⊙ tanh(gt) + σ(ft) ⊙ ct−1    (16)
ht = (1/√Var[ht]) σ(ot) ⊙ tanh(γc ct / √Var[ct])    (17)

where Var[ct] and Var[ht] are computed using equations 8 and 14, respectively. These two variances are estimated at the initialization of the network (eq. 10 to eq. 12) and are then kept fixed during the training, as in Norm Prop. γx, γh and γc are parameters learned via gradient descent.
Note that the reparametrization of equation 15 is identical to Weight Normalization (Weight Norm) (Salimans & Kingma, 2016). The main difference comes from equation 17, where we compensate for the variance of ct, the tanh and σ(ot), which ensures a normalized propagation. Overall, this reparametrization is equivalent in spirit to the BN-LSTM, but it benefits from the same advantages that Norm Prop has over BN: there is no dependence on the mini-batch size, and the computation is the same for training and inference. Also, the rescaling of the matrices Wx and Wh can be done before the recurrence, leading to a computation time closer to that of a vanilla LSTM.
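A minimal NumPy sketch of one Normalized LSTM step following equations (15)-(17) (illustrative only; Var[c] and Var[h] are assumed precomputed at initialization, as described above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def norm_lstm_step(x, h, c, Wx, Wh, b, gx, gh, gc, var_c, var_h):
    # Wx: (4*dh, dx), Wh: (4*dh, dh); rescale each line to unit L2 norm (eq. 15).
    # In practice this rescaling is done once, before the recurrence.
    Wx_n = gx * Wx / np.linalg.norm(Wx, axis=1, keepdims=True)
    Wh_n = gh * Wh / np.linalg.norm(Wh, axis=1, keepdims=True)
    i, f, o, g = np.split(Wx_n @ x + Wh_n @ h + b, 4)
    c_new = sigmoid(i) * np.tanh(g) + sigmoid(f) * c           # eq. (16)
    h_new = sigmoid(o) * np.tanh(gc * c_new / np.sqrt(var_c))  # eq. (17)
    return h_new / np.sqrt(var_h), c_new
```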
"}, {"section_index": "6", "section_name": "3.3 WEIGHTS INITIALIZATION", "section_text": "With such a reparametrization of the weight matrices, one can think that the scale of the initialization of the weights doesn't matter in the learning process anymore. It is actually true for the forward and backward computations of the layer:

y_i = γ (αW_i / ||αW_i||₂) x = γ (W_i / ||W_i||₂) x    (18)
∂y_i/∂x = γ (αW_i / ||αW_i||₂) = γ (W_i / ||W_i||₂)    (19)

for any rescaling factor α, and since the variance of both the forward and backward passes is fixed, using an initialization scheme such as Glorot (Glorot & Bengio, 2010) doesn't make sense with Norm Prop. However, the update of the parameters is affected by their scale:

∂y_i/∂(αW_ij) = (1/α) (γ x_j / ||W_i||₂ − y_i W_ij / ||W_i||₂²) = (1/α) ∂y_i/∂W_ij.    (20)

The scale of the parameters thus affects the learning rate of the layer: the bigger the weights, the smaller the update. This induces a regularization effect in Norm Prop that is also present in BN (Ioffe & Szegedy, 2015). However, this could possibly be an issue for such a parametrization: different initializations lead to different learning rates, and this is true even with adaptive step rules such as Adam (Kingma & Ba, 2014). Moreover, the parameters that are not normalized (such as γ and b) aren't affected by this effect, and so they are not regularized. This is the reason why forcing the weight matrices to have a unit L2 norm on the lines, as proposed in Arpit et al. (2016), helps the training procedure.
To still benefit from the reduction of the learning rate, which is known to ease the optimization (Vogl et al., 1988), we propose to simply force the unit L2 norm of the lines of the matrices and to combine it with a global learning rate decay schedule."}, {"section_index": "7", "section_name": "THE EXPLODING AND VANISHING GRADIENTS PROBLEM", "section_text": "In this section we study the gradient flow in the Normalized LSTM. Since this reparametrization is similar to the BN-LSTM, the analysis we do here can be transposed to the BN-LSTM case.
Given an input sequence X = (x1, x2, ..., xT), we consider a recurrent network, parametrized by θ, that defines a sequence of hidden states ht = f_θ(ht−1, xt), and a cost function L which evaluates the model performance on a given task. Such a network is usually trained using backpropagation through time, where the backpropagation is applied on the time-unrolled model. The chain rule can be applied in order to compute the derivative of the loss with respect to the parameters θ:

∂L/∂θ = Σ_{1≤t≤T} ∂Lt/∂θ = Σ_{1≤t≤T} Σ_{1≤k≤t} (∂Lt/∂ht) (∂ht/∂hk) (∂hk/∂θ).    (22)

To study the gradient propagation of the Normalized LSTM, we first need to derive it. Using equations 15-17, and writing γ̃c = γc/√Var[ct], we can express the gradient of ht with respect to ht−1 as

∂ht/∂ht−1 = (1/√Var[ht]) [ (∂σ(ot)/∂ht−1) ⊙ tanh(γ̃c ct) + σ(ot) ⊙ (1 − tanh²(γ̃c ct)) γ̃c ( (∂σ(it)/∂ht−1) ⊙ tanh(gt) + σ(it) ⊙ (∂tanh(gt)/∂ht−1) + (∂σ(ft)/∂ht−1) ⊙ ct−1 ) ].    (23)

As we can see in equation 23, with the normalization, the gradient depends not only on the derivatives of the cell candidate, the gates and the output tanh, but also on the variances of ht and ct.
If we assume that ht−1 and xt are independent, we can compute the variance of ct. Neglecting the weight matrices and the effect of the gates, we can write from equations 8 and 14

Var[ct] ≈ Var[gt] = Var[tanh(z)],   z ∼ N(0, γx² + γh²)    (24)
Var[ht] ≈ Var[tanh(z)],   z ∼ N(0, γc²(γx² + γh²)).    (25)

In both cases, the variance depends explicitly on the values of the different γ: the bigger the γ, the higher the variance. Neglecting again the weight matrices, we can now write the derivatives of the cell candidate gt and of the gates it, ot and ft with respect to ht−1:

∂tanh(gt)/∂ht−1 = (1 − tanh²(γx xt + γh ht−1)) γh    (26)
∂σ(it)/∂ht−1 = σ(γx xt + γh ht−1)(1 − σ(γx xt + γh ht−1)) γh.    (27)

The gradients of ot and ft can be computed similarly. The effect of the γ here is double: they appear both in the activation functions, where they control the saturation regime, and γh also appears as a multiplicative term in the gradient. They should therefore be small enough to prevent the activations from saturating too much, but at the same time γh can't be too small, because this can also make the gradients vanish. Putting it all together, and denoting by σ'(·) and tanh'(·) the derivatives with respect to the pre-activations, we have

∂ht/∂ht−1 = (γh/√Var[ht]) [ σ'(ot) tanh(γ̃c ct) + σ(ot) (1 − tanh²(γ̃c ct)) γ̃c ( σ'(it) tanh(gt) + σ(it) tanh'(gt) + σ'(ft) ct−1 ) ].    (28)

In these equations we can see that the different γ directly scale the gradient, and they also control the saturation of the activation functions. A bad initialization of the γ could thus lead to saturation or explosion regimes. Figure 1 shows the norm of the gradient with respect to γx and γh in a simulated LSTM. As we can see, one important parameter is the ratio between γh and γx: they control most of the propagation of the gradients. If γx > γh, the network will focus more on the input, and the gradients will tend to vanish more. On the other hand, if γh > γx, the network will tend to have less vanishing gradients, but will focus less on its inputs.
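A sketch of the kind of simulation behind Figure 1 (our illustrative reconstruction, not the authors' script): draw standardized inputs and states, evaluate the single-step derivative of equation (28) for scalar units, and average its magnitude.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def grad_norm(gx, gh, gc, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(4, n))        # one standardized input per gate
    h = rng.normal(size=(4, n))        # one standardized state per gate
    c_prev = rng.normal(size=n)
    zi, zf, zo, zg = gx * x + gh * h   # pre-activations of i, f, o, g
    i, o, g = sigmoid(zi), sigmoid(zo), np.tanh(zg)
    c = i * g + sigmoid(zf) * c_prev
    gct = gc / np.sqrt(c.var())        # compensated gain on c_t
    tc = np.tanh(gct * c)
    inner = dsigmoid(zi) * g + i * (1 - g ** 2) + dsigmoid(zf) * c_prev
    dh = gh * (dsigmoid(zo) * tc + o * (1 - tc ** 2) * gct * inner)
    return np.abs(dh).mean()           # gradient magnitude, up to 1/sqrt(Var[h])

print(grad_norm(0.5, 1.5, 1.0), grad_norm(1.5, 0.5, 1.0))
```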
"}, {"section_index": "7", "section_name": "THE EXPLODING AND VANISHING GRADIENTS PROBLEM", "section_text": "Given an input sequence X = (x_1, x_2, ..., x_T), we consider a recurrent network, parametrized by θ, that defines a sequence of hidden states h_t = f_θ(h_{t-1}, x_t), and a cost function L which evaluates the model performance on a given task. Such a network is usually trained using backpropagation through time, where the backpropagation is applied on the time-unrolled model. The chain rule can be applied in order to compute the derivative of the loss with respect to the parameters θ:

∂L/∂θ = Σ_{1≤t≤T} ∂L_t/∂θ = Σ_{1≤t≤T} Σ_{1≤k≤t} (∂L_t/∂h_t) (∂h_t/∂h_k) (∂h_k/∂θ)

In this section we study the gradient flow in the Normalized LSTM. Since this reparametrization is similar to the BN-LSTM, the analysis we do here can be transposed to the BN-LSTM case.

To study the gradient propagation of the Normalized LSTM, we first need to derive it. Writing c̃_t = γ_c c_t / √Var[c_t] and using equations 15-17, we can write the gradient of h_t with respect to h_{t-1}:

∂h_t/∂h_{t-1} = (1/√Var[h_t]) [σ'(o_t) (∂o_t/∂h_{t-1}) ⊙ tanh(c̃_t) + σ(o_t) ⊙ (1 - tanh²(c̃_t)) (γ_c/√Var[c_t]) (∂c_t/∂h_{t-1})]    (23)

with ∂c_t/∂h_{t-1} = σ'(i_t) tanh(g_t) (∂i_t/∂h_{t-1}) + σ(i_t) (1 - tanh²(g_t)) (∂g_t/∂h_{t-1}) + σ'(f_t) c_{t-1} (∂f_t/∂h_{t-1}).

As we can see in equation 23, with the normalization the gradient depends not only on the derivatives of the cell candidate, the gates and the output tanh, but also on the variances of h_t and c_t.

If we assume that h_{t-1} and x_t are independent, we can compute the variance of c_t. Neglecting the weight matrices and the effect of the gates, we can write from equations 8 and 14:

Var[c_t] ≈ Var[g_t] = Var[tanh(z)],  z ~ N(0, γ_x² + γ_h²)    (24)
Var[h_t] ≈ Var[tanh(z)],  z ~ N(0, γ_c²(γ_x² + γ_h²))    (25)

In both cases, the variance depends explicitly on the value of the different γ: the bigger the γ, the higher the variance. Neglecting again the weight matrices, we can now write the gradients of the cell candidate g_t and the gates i_t, o_t and f_t with respect to h_{t-1}:

∂tanh(g_t)/∂h_{t-1} = (1 - tanh²(γ_x x_t + γ_h h_{t-1})) γ_h    (26)
∂σ(i_t)/∂h_{t-1} = σ(γ_x x_t + γ_h h_{t-1}) (1 - σ(γ_x x_t + γ_h h_{t-1})) γ_h    (27)

The gradients of o_t and f_t can be computed similarly. The effect of the γ here is double: they appear both in the activation function, where they control the saturation regime, and γ_h also appears as a multiplicative term in the gradient. They should therefore be small enough to prevent the activations from saturating too much, but at the same time γ_h can't be too small, because it can also make the gradients vanish. Putting it all together, we have:

∂h_t/∂h_{t-1} ≈ (γ_h/√Var[h_t]) [σ'(o_t) tanh(c̃_t) + σ(o_t) (1 - tanh²(c̃_t)) (γ_c/√Var[c_t]) (σ'(i_t) tanh(g_t) + σ(i_t)(1 - tanh²(g_t)) + σ'(f_t) c_{t-1})]    (28)

In these equations we can see that the different γ directly scale the gradient, and they also control the saturation of the activation functions. Bad initialization of γ could thus lead to saturation or explosion regimes. Figure 1 shows the norm of the gradient with respect to γ_x and γ_h in a simulated LSTM. As we can see, one important parameter is the ratio between γ_h and γ_x: they control most of the propagation of the gradients. If γ_x > γ_h, the network will focus more on the input and so the gradients will tend to vanish more. On the other hand, if γ_h > γ_x, the network will tend to have less vanishing gradients, but will focus less on its inputs.

Figure 1: Norm of the gradients for one time step in an LSTM with respect to γ_x and γ_h (simulation). Left: γ_c = 0.1. Right: γ_c = 1.0."}, {"section_index": "8", "section_name": "5 EXPERIMENTS", "section_text": "The first task we explore is character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993). The goal is to predict the next character of the sequence given the previous ones. We use the same splits as Mikolov et al. (2012) and the same training procedure as Cooijmans et al. (2016), i.e. we train on sequences of length 100, with a random starting point. The model is a 1000-unit LSTM followed by a Softmax classifier. We use orthogonal initialization for the weight matrices. Because Norm Prop requires normalized inputs, we multiply the one-hot input vectors with an untrained but fixed orthogonal matrix. This trick does not only help the optimization of Norm Prop, but also all other variants.
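A minimal sketch of the fixed orthogonal input projection just described; the vocabulary size, the QR construction of the orthogonal matrix and the optional variance rescaling are assumptions for illustration.

```python
import numpy as np

def fixed_orthogonal_embedding(vocab_size, seed=0):
    """Untrained, fixed orthogonal matrix used to project one-hot inputs
    so the recurrent layer sees (approximately) normalized inputs."""
    rng = np.random.RandomState(seed)
    Q, _ = np.linalg.qr(rng.randn(vocab_size, vocab_size))
    return Q  # columns are orthonormal

vocab_size = 50  # assumed size of the PTB character vocabulary
proj = fixed_orthogonal_embedding(vocab_size)
one_hot = np.eye(vocab_size)[3]   # example: character id 3
x_t = proj @ one_hot              # a unit-norm input vector for the LSTM
# Optionally scale by sqrt(vocab_size) so each component has roughly unit variance.
```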
To compare the convergence properties of Norm Prop against LN and BN, we first ran experiments using Adam (Kingma & Ba, 2014) with learning rate 2e-3, exponential decay of 1e-3 and gradient clipping at 1.0. As explained in section 3.3, we rescale the matrices such that they have a unit norm on the lines. For Norm Prop, we use γ_x = γ_h = 2 and γ_c = 1; for LN all the γ = 1.0 and for BN all the γ = 0.1. The results are presented in Table 1 and in Figure 2.

Table 1: Perplexity (bits-per-character) on sequences of length 100 from the Penn Treebank validation set, and training time (seconds) per epoch.

Model         Validation   Time
Baseline      1.455        386
Weight Norm   1.438        402
Batch Norm    1.433        545
Layer Norm    1.439        530
Norm Prop     1.422        413

Figure 2: Perplexity (bits-per-character) on sequences of length 100 from the Penn Treebank corpus. The dashed lines are the training curves, and the solid ones are the validation curves.

As we can see in Figure 2 and in Table 1, Norm Prop compares really well against the other reparametrizations. Norm Prop is also roughly 30% computationally faster than BN and LN. LN shows better optimization performance, but also overfits more. We also see that both optimization and generalization are better than the ones from Weight Norm, which shows the importance of compensating for the variance of c_t and h_t.

To show the potential of Norm Prop against other state-of-the-art systems, we followed Ha et al. (2016) and apply dropout on both the input and output layers (p = 0.1) and recurrent dropout inside the LSTM (p = 0.1). We also used the Batch Data Normalization scheme presented by Arpit et al. (2016), so we standardize each input example using the mini-batch statistics and use population statistics at inference time. Finally, we also reduce the learning rate decay to 1e-4, to compensate for the fact that a network with dropout needs more time to train. The results are presented in Table 2.

Moreover, although Norm Prop doesn't combine well with dropout in feed-forward networks (Arpit et al., 2016), it works well with recurrent dropout, as we can see in Table 2. We believe this is because recurrent dropout affects the output distribution less than dropout in feed-forward networks, since we copy the variable from the previous time step instead of setting it to 0. With such regularization, Norm Prop compares well with other state-of-the-art approaches."}, {"section_index": "9", "section_name": "Model", "section_text": "Table 2: Perplexity (bits-per-character) of the full Penn Treebank test sequence."}, {"section_index": "10", "section_name": "5.2 DRAW", "section_text": "The second task we explore is a generative modelling task on binarized MNIST (Larochelle & Murray, 2011) using the Deep Recurrent Attentive Writer (DRAW) (Gregor et al., 2015) architecture. DRAW is a variational auto-encoder, where both encoder and decoder are LSTMs, and has two attention mechanisms to select where to read and where to write.

We use Jorg Bornschein's implementation³, with the same hyper-parameters as Gregor et al. (2015), i.e. the read and write sizes are 2x2 and 5x5 respectively, the number of glimpses is 64, the LSTMs have 256 units and the dimension of z is 100. We use Adam with a learning rate of 1e-2, exponential decay of 1e-3 and a mini-batch size of 128. We use orthogonal initialization and force the norm of the lines of the matrices to be 1. For Norm Prop, we use γ_x = γ_h = γ_c = 0.5. The test variational bound for the first 100 epochs is presented in Figure 3.

³ https://github.com/jbornschein/draw

As we can see in Figure 3, both Weight Norm and Norm Prop outperform the baseline network by a significant margin. Also, as expected, Norm Prop performs better than Weight Norm, showing once again the importance of the compensation of the variance of c_t and h_t. Table 3 shows the test variational bound after 200 epochs of training. Norm Prop also compares favorably against LN.
Figure 3: Test negative log-likelihood on binarized MNIST.

Table 3: Test variational log likelihood (nats) after 200 epochs of training.

Model                           DRAW
Baseline (ours)                 84.30
Layer Norm (Ba et al., 2016)    82.09
Weight Norm (ours)              81.98
Norm Prop (ours)                81.17"}, {"section_index": "11", "section_name": "6 CONCLUSION", "section_text": "Based on the BN-LSTM, we have shown how to build a Normalized LSTM that is able to preserve the variance of its output at each time step, by compensating for the variance of the cell and the hidden state. Such an LSTM can be seen as the Norm Prop version of the BN-LSTM, and thus benefits from the same advantages that Norm Prop has over BN, while being much faster to compute. We also propose a scheme to initialize the weight matrices that takes the reparametrization into account. Moreover, we have derived the gradients of this LSTM and pointed out the importance of the initialization of the rescaling parameters. We have validated the performance of the Normalized LSTM on two different tasks, showing performance similar to BN-LSTM and LN-LSTM while being significantly faster in computation time. Also, unlike the feed-forward case, this architecture works well with recurrent dropout, leading to close to state-of-the-art performance on the character-level language modelling task.

Future work includes trying this architecture on more challenging tasks and also studying the impact of not keeping the variance estimates of the cell and the hidden states fixed during the learning process."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "Part of this work was funded by Samsung. We used Theano (Theano Development Team, 2016), Blocks and Fuel (van Merrienboer et al., 2015) for our experiments. We also want to thank Caglar Gulcehre and Tim Cooijmans for the talks and Jorg Bornschein for his DRAW implementation."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "D. Arpit, Y. Zhou, B. U. Kota, and V. Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. arXiv preprint, 2016.
Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 1994.
K. Cho, B. Van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint, 2014.
T. Cooijmans, N. Ballas, C. Laurent, and A. Courville. Recurrent batch normalization. arXiv preprint, 2016.
X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249-256, 2010.
K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint, 2015.
D. Ha, A. Dai, and Q. V. Le. Hypernetworks. arXiv preprint, 2016.
S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, 1991.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint, 2014.
H. Larochelle and I. Murray. The neural autoregressive distribution estimator. AISTATS, 2011.
B. Van Merrienboer, D. Bahdanau, V. Dumoulin, D. Serdyuk, D. Warde-Farley, J. Chorowski, and Y. Bengio. Blocks and Fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015.
T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon. Accelerating the convergence of the back-propagation method. Biological Cybernetics, 59(4):257-263, 1988.
K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint, 2015.
L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In ICCV, 2015."}]
rJXTf9Bxg
[{"section_index": "0", "section_name": "CONDITIONAL IMAGE SYNTHESIS WITH AUXILIARY CLASSIFIER GANS", "section_text": "Augustus Odena*, Christopher Olah & Jonathon Shlens\naugustusodena, colah, shlens}@google.com\nSynthesizing high resolution photorealistic images has been a long-standing chal-. lenge in machine learning. In this paper we introduce new methods for the im. proved training of generative adversarial networks (GANs) for image synthesis.. We construct a variant of GANs employing label conditioning that results in. 128 128 resolution image samples exhibiting global coherence.We expand. on previous work for image quality assessment to provide two new analyses for. assessing the discriminability and diversity of samples from class-conditional im age synthesis models. These analyses demonstrate that high resolution samples. provide class information not present in low resolution samples. Across 1o00. ImageNet classes, 128 128 samples are more than twice as discriminable as ar-. tificially resized 32 32 samples. In addition, 84.7% of the classes have samples. exhibiting diversity comparable to real ImageNet data.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Characterizing the structure of natural images has been a rich research endeavor. Natural images obey intrinsic invariances and exhibit multi-scale statistical structures that have historically beer difficult to quantify (Simoncelli & Olshausen2001). Recent advances in machine learning of- fer an opportunity to substantially improve the quality of image models. Improved image models advance the state-of-the-art in image denoising (Balle et al.]2015), compression (Toderici et al. 2016), in-painting (van den Oord et al.]2016a), and super-resolution (Ledig et al.]2016). Bet- ter models of natural images also improve performance in semi-supervised learning tasks (Kingma et al.[2014] Springenberg2015 Odena2016Salimans et al.[ 2016) and reinforcement learning problems (Blundell et al. 2016).\nOne method for understanding natural image statistics is to build a system that synthesizes image de novo. There are several promising approaches for building image synthesis models. Variationa autoencoders (VAEs) maximize a variational lower bound on the log-likelihood of the training data (Kingma & Welling]2013] Rezende et al.[2014). VAEs are straightforward to train but introduc potentially restrictive assumptions about the approximate posterior distribution (but see Rezende & Mohamed (2015); Kingma et al.(2016)). Autoregressive models dispense with latent variables anc directly model the conditional distribution over pixels (van den Oord et al.||2016a b). These model produce convincing samples but are costly to sample from and do not provide a latent representatior Invertible density estimators transform latent variables directly using a series of parameterized func tions constrained to be invertible (Dinh et al.||2016). This technique allows for exact log-likelihooc computation and exact inference, but the invertibility constraint is restrictive.\nGenerative adversarial networks (GANs) offer a distinct and promising approach that focuses on game-theoretic formulation for training an image synthesis model (Goodfellow et al.2014). Recer work has shown that GANs can produce convincing image samples on datasets with low variabilit and low resolution (Denton et al. 2015] Radford et al.[2015). 
However, GANs struggle to generate globally coherent, high resolution samples - particularly from datasets with high variability. Moreover, a theoretical understanding of GANs is an on-going research topic (Uehara et al., 2016; Mohamed & Lakshminarayanan, 2016).

* Work completed as a participant in the 2016-2017 Google Brain Residency program."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: 128 x 128 resolution samples from 5 classes (monarch butterfly, goldfinch, daisy, redshank, grey whale) taken from an AC-GAN trained on the ImageNet dataset. Note that the classes shown have been selected to highlight the success of the model and are not representative. Samples from all ImageNet classes are in the Appendix.

In this work we demonstrate that adding more structure to the GAN latent space along with a specialized cost function results in higher quality samples. We exhibit 128 x 128 pixel samples from all classes of the ImageNet dataset (Russakovsky et al., 2015) with increased global coherence (Figure 1). Importantly, we demonstrate quantitatively that our high resolution samples are not just naive resizings of low resolution samples. In particular, downsampling our 128 x 128 samples to 32 x 32 leads to a 50% decrease in visual discriminability. We also introduce a new metric for assessing the variability across image samples and employ this metric to demonstrate that our synthesized images exhibit diversity comparable to training data for a large fraction (84.7%) of ImageNet classes.

A generative adversarial network (GAN) consists of two neural networks trained in opposition to one another. The generator G takes as input a random noise vector z and outputs an image X_fake = G(z). The discriminator D receives as input either a training image or a synthesized image from the generator and outputs a probability distribution P(S | X) = D(X) over possible image sources. The discriminator is trained to maximize the log-likelihood it assigns to the correct source:

L = E[log P(S = real | X_real)] + E[log P(S = fake | X_fake)]

The generator is trained to minimize that same quantity.

The basic GAN framework can be augmented using side information. One strategy is to supply both the generator and discriminator with class labels in order to produce class conditional samples (Mirza & Osindero, 2014). Class conditional synthesis can significantly improve the quality of generated samples (van den Oord et al., 2016b). Richer side information such as image captions and bounding box localizations may improve sample quality further (Reed et al., 2016a;b).

Instead of feeding side information to the discriminator, one can task the discriminator with reconstructing side information. This is done by modifying the discriminator to contain an auxiliary decoder network¹ that outputs the class label for the training data (Odena, 2016; Salimans et al., 2016) or a subset of the latent variables from which the samples are generated (Chen et al., 2016). Forcing a model to perform additional tasks is known to improve performance on the original task (e.g. Sutskever et al. (2014); Szegedy et al. (2014); Ramsundar et al. (2016)). In addition, an auxiliary decoder could leverage pre-trained discriminators (e.g. image classifiers) for further improving the synthesized images (Nguyen et al., 2016). Motivated by these considerations, we introduce a model that combines both strategies for leveraging side information. That is, the model proposed below is class conditional, but with an auxiliary decoder that is tasked with reconstructing class labels.

¹ Alternatively, one can force the discriminator to work with the joint distribution (X, z) and train a separate inference network that computes q(z|X) (Dumoulin et al., 2016; Donahue et al., 2016).
Figure 2: A comparison of several GAN architectures with the proposed AC-GAN architecture: the Conditional GAN (Mirza & Osindero, 2014), the Semi-Supervised GAN (Odena, 2016; Salimans et al., 2016), InfoGAN (Chen et al., 2016), and the AC-GAN (present work)."}, {"section_index": "3", "section_name": "3 AC-GANs", "section_text": "We propose a variant of the GAN architecture which we call an auxiliary classifier GAN (or AC-GAN - see Figure 2). In the AC-GAN, every generated sample has a corresponding class label, c ~ p_c, in addition to the noise z. G uses both to generate images X_fake = G(c, z). The discriminator gives both a probability distribution over sources and a probability distribution over the class labels, P(S | X), P(C | X) = D(X). The objective function has two parts: the log-likelihood of the correct source, L_S, and the log-likelihood of the correct class, L_C:

L_S = E[log P(S = real | X_real)] + E[log P(S = fake | X_fake)]
L_C = E[log P(C = c | X_real)] + E[log P(C = c | X_fake)]

D is trained to maximize L_S + L_C while G is trained to maximize L_C - L_S. AC-GANs learn a representation for z that is independent of class label (e.g. Kingma et al. (2014)).
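As a sketch of this two-part objective, the following shows how L_S and L_C could be computed for a mini-batch of discriminator outputs. The source encoding, helper names and shapes are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def mean_log_prob(probs, labels):
    """Mean log-probability that `probs` (rows sum to 1) assigns to the labels."""
    return np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-8))

REAL, FAKE = 0, 1  # assumed encoding of the source variable S

def ac_gan_losses(src_real, src_fake, cls_real, cls_fake,
                  real_labels, fake_labels):
    """src_*: P(S | X) over {real, fake}; cls_*: P(C | X) over classes.
    Returns (L_S, L_C) as defined above."""
    # L_S: log-likelihood of the correct source.
    l_s = (mean_log_prob(src_real, np.full(len(src_real), REAL)) +
           mean_log_prob(src_fake, np.full(len(src_fake), FAKE)))
    # L_C: log-likelihood of the correct class on both real and fake images.
    l_c = (mean_log_prob(cls_real, real_labels) +
           mean_log_prob(cls_fake, fake_labels))
    return l_s, l_c

# D takes a gradient step to increase L_S + L_C; G to increase L_C - L_S.
```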
Early experiments demonstrated that increasing the number of classes trained on while holding the model fixed decreased the quality of the model outputs (Appendix B). The structure of the AC-GAN model permits separating large datasets into subsets by class and training a generator and discriminator for each subset. We exploit this property in our experiments to train across the entire ImageNet data set."}, {"section_index": "4", "section_name": "4 RESULTS", "section_text": "We train several AC-GAN models on the ImageNet data set (Russakovsky et al., 2015). Broadly speaking, the architecture of the generator G is a series of 'deconvolution' layers that transform the noise z and class c into an image (Odena et al., 2016). We train two variants of the model architecture for generating images at 128 x 128 and 64 x 64 spatial resolutions. The discriminator D is a deep convolutional neural network with a Leaky ReLU nonlinearity (Maas et al., 2013). See Appendix A for more details. As mentioned earlier, we find that reducing the variability introduced by all 1000 classes of ImageNet significantly improves the quality of training. We train 100 AC-GAN models - each on images from just 10 classes - for 50000 mini-batches of size 100.

Evaluating the quality of image synthesis models is challenging due to the variety of probabilistic criteria (Theis et al., 2015) and the lack of a perceptually meaningful image similarity metric. Nonetheless, in subsequent sections we attempt to measure the quality of the AC-GAN by building several ad-hoc measures for image sample discriminability and diversity. Our hope is that this work might provide quantitative measures that may be used to aid training and subsequent development of image synthesis models.

[Figure 3 top panel accuracies - Real: 0%, 7%, 62%, 94%, 94%; Fake: 0%, 0%, 42%, 76%, 76% at resolutions 16 x 16, 32 x 32, 64 x 64, 128 x 128, 256 x 256.]

Figure 3: Generating high resolution images improves discriminability. Top: Training data and synthesized images from the zebra class resized to a lower spatial resolution (indicated above) and subsequently artificially resized to the original resolution. Inception accuracy is shown below the corresponding images. Bottom Left: Summary of accuracies across varying spatial resolutions for training data and image samples from 64 x 64 and 128 x 128 models. Error bar measures standard deviation across 10 subsets of images. Dashed lines highlight the accuracy at the output spatial resolution of the model. The training data (clipped) achieves accuracies of 24%, 54%, 81% and 81% at resolutions of 32, 64, 128, and 256 respectively. Bottom Right: Comparison of accuracy scores at 128 x 128 and 32 x 32 spatial resolutions (x and y axis, respectively). Each point represents an ImageNet class. 84.4% of the classes are below the line of equality. The green dot corresponds to the zebra class."}, {"section_index": "5", "section_name": "4.1 GENERATING HIGH RESOLUTION IMAGES IMPROVES DISCRIMINABILITY", "section_text": "Building a class-conditional image synthesis model necessitates measuring the extent to which synthesized images appear to belong to the intended class. In particular, we would like to know that a high resolution sample is not just a naive resizing of a low resolution sample. Consider a simple experiment: pretend there exists a model that synthesizes 32 x 32 images. One can trivially increase the resolution of synthesized images by performing bilinear interpolation. This would yield higher resolution images, but these images would just be blurry versions of the low resolution images that are not discriminable. Hence, the goal of an image synthesis model is not simply to produce high resolution images, but to produce high resolution images that are more discriminable than low resolution images.

To measure discriminability, we feed synthesized images to a pre-trained Inception network (Szegedy et al., 2015) and report the fraction of the samples for which the Inception network assigned the correct label.² We calculate this accuracy measure on a series of real and synthesized images which have had their spatial resolution artificially decreased by bilinear interpolation (Figure 3, top panels). Note that as the spatial resolution is decreased, the accuracy decreases - indicating that the resulting images contain less class information (Figure 3, scores below top panels).

² One could also use the Inception score (Salimans et al., 2016), but our method has several advantages: accuracy figures are easier to interpret than exponentiated KL-divergences; accuracy may be assessed for individual classes; accuracy measures whether a class-conditional model generated samples from the intended class. To compute the Inception accuracy, we modified a version of Inception-v3 supplied in https://github.com/openai/improved-gan/.
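The resolution-sweep protocol just described can be sketched as follows; the `classify` stub stands in for the modified Inception-v3 network, and the helper names and output resolution are assumptions.

```python
from PIL import Image

def downsample_then_upsample(img, low_res, out_res=299):
    """Artificially reduce spatial resolution with bilinear interpolation,
    then resize back up, mimicking the Figure 3 protocol."""
    small = img.resize((low_res, low_res), Image.BILINEAR)
    return small.resize((out_res, out_res), Image.BILINEAR)

def inception_accuracy(images, labels, classify, low_res):
    """Fraction of samples the classifier assigns to the intended class.
    `classify` is assumed to return a predicted class id per image."""
    correct = 0
    for img, label in zip(images, labels):
        resized = downsample_then_upsample(img, low_res)
        correct += int(classify(resized) == label)
    return correct / len(images)
```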
We summarize this finding across all 1000 ImageNet classes for the ImageNet training data (black), a 128 x 128 resolution AC-GAN (red) and a 64 x 64 resolution AC-GAN (blue) in Figure 3 (bottom, left). The black curve (clipped) provides an upper bound on the discriminability of real images.

The goal of this analysis is to show that synthesizing higher resolution images leads to increased discriminability. The 128 x 128 model achieves an accuracy of 10.1% ± 2.0% versus 7.0% ± 2.0% with samples resized to 64 x 64 and 5.0% ± 2.0% with samples resized to 32 x 32. In other words, downsizing the outputs of the AC-GAN to 32 x 32 and 64 x 64 decreases visual discriminability by 50% and 38% respectively. Furthermore, 84.4% of the ImageNet classes have higher accuracy at 128 x 128 than at 32 x 32 (Figure 3, bottom left).

We performed the same analysis on an AC-GAN trained to 64 x 64 spatial resolution. This model achieved less discriminability than a 128 x 128 AC-GAN model. Accuracies from the 64 x 64 model plateau at a 64 x 64 spatial resolution, consistent with previous results. Finally, the 64 x 64 resolution model achieves less discriminability at 64 x 64 spatial resolution than the 128 x 128 model."}, {"section_index": "6", "section_name": "4.2 MEASURING THE DIVERSITY OF GENERATED IMAGES", "section_text": "An image synthesis model is not very interesting if it only outputs one image. Indeed, a well-known failure mode of GANs is that the generator will collapse and output a single prototype that maximally fools the discriminator (Goodfellow et al., 2014; Salimans et al., 2016). A class-conditional model of images is not very interesting if it only outputs one image per class. The Inception accuracy can not measure whether a model has collapsed. A model that simply memorized one example from each ImageNet class would do very well by this metric. Thus, we seek a complementary metric to explicitly evaluate the intra-class diversity of samples generated by the AC-GAN.

Several methods exist for quantitatively evaluating image similarity by attempting to predict human perceptual similarity judgements. The most successful of these is multi-scale structural similarity (MS-SSIM) (Wang et al., 2004b; Ma et al., 2016). MS-SSIM is a multi-scale variant of a well-characterized perceptual similarity metric that attempts to discount aspects of an image that are not important for human perception (Wang et al., 2004a). MS-SSIM values range between 0.0 and 1.0; higher MS-SSIM values correspond to perceptually more similar images. As a proxy for image diversity, we measure the MS-SSIM scores between randomly chosen pairs of images within a given class. Samples from classes that have higher diversity result in lower mean MS-SSIM scores (Figure 4, left columns); samples from classes with lower diversity have higher mean MS-SSIM scores (Figure 4, right columns). Training images from the ImageNet training data contain a variety of mean MS-SSIM scores across the classes indicating the variability of image diversity in ImageNet classes (Figure 5, left panel, x-axis). Note that the highest mean MS-SSIM score (indicating the least variability) is 0.25 for the training data.

We calculate the mean MS-SSIM score for all 1000 ImageNet classes generated by the AC-GAN model. We track this value during training to identify whether the generator has collapsed (Figure 5, right panel, red curve). We also employ this metric to compare the diversity of the training images to the samples from the GAN model after training has completed. Figure 5 (left) plots the mean MS-SSIM values for image samples and training data broken up by class. The blue line is the line of equality. Out of the 1000 classes, we find that 847 have mean sample MS-SSIM scores below that of the maximum MS-SSIM for the training data. In other words, 84.7% of classes have sample variability that exceeds that of the least variable class from the ImageNet training data.

Figure 4: Examples of different MS-SSIM scores for the classes hot dog, promontory, green apple and artichoke. The top and bottom rows contain AC-GAN samples (MS-SSIM = 0.11, 0.29, 0.41, 0.90) and training data (MS-SSIM = 0.05, 0.15, 0.08, 0.04), respectively.

Figure 5: (Left) Comparison of the mean MS-SSIM scores between pairs of images within a given class for ImageNet training data and samples from the GAN (blue line is equality). The horizontal red line marks the maximum MS-SSIM value across all ImageNet classes. Each point is an individual class. The mean standard deviation of scores across the training data and the samples was 0.06 and 0.08 respectively. Scores below the red line (84.7% of classes) arise from classes where GAN training largely succeeded. (Right) Intra-class MS-SSIM for selected ImageNet classes throughout a training run. Classes that successfully train tend to have decreasing mean MS-SSIM scores, to a point.
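A sketch of this diversity proxy; `ms_ssim` is a stand-in for a multi-scale structural similarity implementation (e.g. following Wang et al., 2004b), and the number of sampled pairs is an assumption.

```python
import random

def mean_pairwise_ms_ssim(images, ms_ssim, num_pairs=100, seed=0):
    """Mean MS-SSIM over randomly chosen pairs of images within one class.
    Lower values indicate higher intra-class diversity."""
    rng = random.Random(seed)
    pairs = [rng.sample(range(len(images)), 2) for _ in range(num_pairs)]
    scores = [ms_ssim(images[i], images[j]) for i, j in pairs]
    return sum(scores) / len(scores)

# A class is flagged as mode-dropped if its samples are less diverse than the
# least diverse ImageNet training class (mean MS-SSIM above 0.25).
def has_collapsed(images, ms_ssim, threshold=0.25):
    return mean_pairwise_ms_ssim(images, ms_ssim) > threshold
```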
"}, {"section_index": "7", "section_name": "4.3 GENERATED IMAGES ARE BOTH DIVERSE AND DISCRIMINABLE", "section_text": "We have presented quantitative metrics demonstrating that AC-GAN samples may be diverse and discriminable, but we have yet to examine how these metrics interact. Figure 6 shows the joint distribution of Inception accuracies and MS-SSIM scores across all classes. Inception accuracy and MS-SSIM are anti-correlated (r² = -0.16). In fact, 74% of the classes with low diversity (MS-SSIM > 0.25) contain Inception accuracies ≤ 1%. These results suggest that GANs that drop modes are most likely to produce low quality images. Conversely, 78% of classes with high diversity (MS-SSIM < 0.25) have Inception accuracies that exceed 1%. In comparison, the Inception-v3 model achieves 78.8% accuracy on average across all 1000 classes (Szegedy et al., 2015). A fraction of the classes of AC-GAN samples reach this level of accuracy. This indicates opportunity for future image synthesis models.

Figure 6: Inception accuracy vs MS-SSIM for all 1000 ImageNet classes (r² = -0.16). Samples from AC-GAN models do not achieve variability at the expense of discriminability."}, {"section_index": "8", "section_name": "4.4 COMPARISON TO PREVIOUS RESULTS", "section_text": "Previous quantitative results for image synthesis models trained on ImageNet are reported in terms of log-likelihood (van den Oord et al., 2016a;b). Log-likelihood is a coarse and potentially inaccurate measure of sample quality (Theis et al., 2015). Additionally, log-likelihood is intractable to compute for GANs. Instead we compare with previous state-of-the-art results on CIFAR-10 using a lower spatial resolution (32 x 32). Following the procedure in Salimans et al. (2016), we compute the Inception score³ for 50000 samples from an AC-GAN with resolution (32 x 32), split into 10 groups at random. We also compute the Inception score for 25000 extra samples, split into 5 groups at random. We select the best model based on the first score and report the second score. Performing a grid search across 27 hyperparameter configurations, we are able to achieve a score of 8.25 ± 0.07 compared to the state of the art of 8.09 ± 0.07 (Salimans et al., 2016). Moreover, we accomplish this without employing any of the new techniques introduced in that work (i.e. virtual batch normalization, minibatch discrimination, and label smoothing). This provides additional evidence that AC-GANs are effective even without the benefit of class splitting (Appendix B).

³ The Inception score is given by exp(E_x[D_KL(p(y|x) || p(y))]) where x is a particular image, p(y|x) is the conditional output distribution over the classes in a pre-trained Inception network (Szegedy et al., 2014) given x, and p(y) is the marginal distribution over the classes.
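Following footnote 3, a sketch of the Inception score computation; `p_y_given_x` is assumed to be the (num_samples, num_classes) matrix of class posteriors produced by a pre-trained Inception network.

```python
import numpy as np

def inception_score(p_y_given_x, eps=1e-12):
    """exp(E_x[KL(p(y|x) || p(y))]) computed from class posteriors."""
    p_y = p_y_given_x.mean(axis=0, keepdims=True)  # marginal p(y)
    kl = p_y_given_x * (np.log(p_y_given_x + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))

def score_in_groups(p_y_given_x, num_groups=10, seed=0):
    """Report mean and std across random groups, as in Salimans et al. (2016)."""
    rng = np.random.RandomState(seed)
    groups = np.array_split(rng.permutation(len(p_y_given_x)), num_groups)
    scores = [inception_score(p_y_given_x[g]) for g in groups]
    return np.mean(scores), np.std(scores)
```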
"}, {"section_index": "9", "section_name": "4.5 SEARCHING FOR SIGNATURES OF OVERFITTING", "section_text": "One possibility that must be investigated is that the AC-GAN has overfit on the training data. As a first check that the network does not memorize the training data, we identify the nearest neighbors of image samples in the training data measured by L1 distance in pixel space (Figure 7). The nearest neighbors from the training data do not resemble the corresponding samples. This provides evidence that the AC-GAN is not merely memorizing the training data.

Figure 7: Nearest neighbor analysis. (Left) Samples from a single ImageNet class. (Right) Corresponding nearest neighbor (L1 distance) in training data for each sample.

A more sophisticated method for understanding the degree of overfitting in a model is to explore that model's latent space by interpolation. In an overfit model one might observe discrete transitions in the interpolated images and regions in latent space that do not correspond to meaningful images (Bengio et al., 2012; Radford et al., 2015; Dinh et al., 2016). Figure 8 (left) highlights interpolation in the latent space between several image samples. Notably, the generator learned that certain combinations of dimensions correspond to semantically meaningful features (e.g. size of the arch, length of a bird's beak) and there are no discrete transitions or 'holes' in the latent space. A second method for exploring the latent space of the AC-GAN is to exploit the structure of the model. The AC-GAN factorizes its representation into class information and a class-independent latent representation z. Sampling the AC-GAN with z fixed but altering the class label corresponds to generating samples with the same 'style' across multiple classes (Kingma et al., 2014). Figure 8 (right) shows samples from 8 bird classes. Elements of the same row have the same z. Although the class changes for each column, elements of the global structure (e.g. position, layout, background) are preserved, indicating that AC-GAN can represent certain types of 'compositionality'.

Figure 8: (Left) Latent space interpolations for selected ImageNet classes.
Left-most and right-most columns show three pairs of image samples - each pair from a distinct class. Intermediate columns highlight linear interpolations in the latent space between these three pairs of images. (Right) Class-independent information contains global structure about the synthesized image. Each column is a distinct bird class while each row corresponds to a fixed latent code z."}, {"section_index": "10", "section_name": "5 DISCUSSION", "section_text": "This work introduced the AC-GAN architecture and demonstrated that AC-GANs can generate globally coherent ImageNet samples. We provided a new quantitative metric for image discriminability as a function of spatial resolution. Using this metric we demonstrated that our samples are more discriminable than those from a model that generates lower resolution images and performs a naive resize operation. We also analyzed the diversity of our samples with respect to the training data and provided some evidence that the image samples from the majority of classes are comparable in diversity to ImageNet training data. We hope that these metrics might provide quantitative measures of sample quality for evaluating and improving future image synthesis models.

Several directions exist for building upon this work. Much work needs to be done to improve the visual discriminability of the 128 x 128 resolution model. Although some synthesized image classes exhibit high Inception accuracies, the average Inception accuracy of the model (10.1% ± 2.0%) is still far below real training data at 81%. One immediate opportunity for addressing this is to augment the discriminator with a pre-trained model to perform additional supervised tasks (e.g. image segmentation, Ronneberger et al. (2015)). Such techniques might allow for the synthesis of even higher resolution images with global coherence and meaningful visual content.

Improving the robustness and reliability of training a GAN is an ongoing research topic. Only 84.7% of the ImageNet classes avoided mode dropping and exhibited a diversity comparable to real training data. Training stability was vastly aided by dividing up 1000 ImageNet classes across 100 AC-GAN models. Building a single unified model that could generate diverse samples from all 1000 classes would be an important step forward.

Image synthesis models provide a unique opportunity for performing semi-supervised learning. Namely, these models build a rich prior over natural image statistics that can be leveraged by classifiers to improve predictions on datasets for which few labels exist. The AC-GAN model can perform semi-supervised learning by simply ignoring the component of the loss arising from class labels when a label is unavailable for a given training image. Interestingly, prior work suggests that achieving good sample quality might be independent of success in semi-supervised learning (Salimans et al., 2016)."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank the developers of TensorFlow (Abadi et al., 2016). We thank Luke Metz and Vincent Dumoulin for extensive and helpful comments on drafts. We also thank Ben Poole, Sam Schoenholz, Barret Zoph, Martin Abadi, Manjunath Kudlur and Jascha Sohl-Dickstein for helpful discussions."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Gregoire Mesnil, Yann Dauphin, and Salah Rifai. Better mixing via deep representations. CoRR, abs/1207.4404, 2012. URL http://arxiv.org/abs/1207.4404.
C. Blundell, B. Uria, A. Pritzel, Y. Li, A. Ruderman, J. Z. Leibo, J. Rae, D. Wierstra, and D. Hassabis. Model-Free Episodic Control. ArXiv e-prints, June 2016.
X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. ArXiv e-prints, June 2016.
Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. CoRR, abs/1605.08803, 2016. URL http://arxiv.org/abs/1605.08803.
J. Donahue, P. Krahenbuhl, and T. Darrell. Adversarial Feature Learning. ArXiv e-prints, May 2016.
D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. ArXiv e-prints, December 2013.
Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
Anh Mai Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. CoRR, abs/1605.09304, 2016. URL http://arxiv.org/abs/1605.09304.
Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay Pande. Massively multitask networks for drug discovery. In Proceedings of The 33rd International Conference on Machine Learning, 2016.
Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. arXiv preprint arXiv:1610.02454, 2016a.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text-to-image synthesis. In Proceedings of The 33rd International Conference on Machine Learning, 2016b.
D. Rezende and S. Mohamed. Variational Inference with Normalizing Flows. ArXiv e-prints, May 2015.
D. Rezende, S. Mohamed, and D. Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. ArXiv e-prints, January 2014.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. CoRR, abs/1505.04597, 2015. URL http://arxiv.org/abs/1505.04597.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved Techniques for Training GANs. ArXiv e-prints, June 2016.
Eero Simoncelli and Bruno Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193-1216, 2001.
M. Uehara, I. Sato, M. Suzuki, K. Nakayama, and Y. Matsuo. Generative Adversarial Nets from a Density Ratio Estimation Perspective. ArXiv e-prints, October 2016.
Operation               Kernel   Strides   Feature maps   BN?   Dropout   Nonlinearity
G(z) - 110 x 1 x 1 input
Linear                  N/A      N/A       768            ✗     0.0       ReLU
Transposed Convolution  5 x 5    2 x 2     384            ✓     0.0       ReLU
Transposed Convolution  5 x 5    2 x 2     256            ✓     0.0       ReLU
Transposed Convolution  5 x 5    2 x 2     192            ✓     0.0       ReLU
Transposed Convolution  5 x 5    2 x 2     3              ✗     0.0       Tanh
D(x) - 128 x 128 x 3 input
Convolution             3 x 3    2 x 2     16             ✗     0.5       Leaky ReLU
Convolution             3 x 3    1 x 1     32             ✓     0.5       Leaky ReLU
Convolution             3 x 3    2 x 2     64             ✓     0.5       Leaky ReLU
Convolution             3 x 3    1 x 1     128            ✓     0.5       Leaky ReLU
Convolution             3 x 3    2 x 2     256            ✓     0.5       Leaky ReLU
Convolution             3 x 3    1 x 1     512            ✓     0.5       Leaky ReLU
Linear                  N/A      N/A       11             ✗     0.0       Soft-Sigmoid

Optimizer: Adam (α = 0.0002, β1 = 0.5, β2 = 10^-3)
Batch size: 100
Iterations: 50000
Leaky ReLU slope: 0.2
Weight, bias initialization: Isotropic gaussian (μ = 0, σ = 0.02), Constant(0)

Table 1: Model hyperparameters. A Soft-Sigmoid refers to an operation over K + 1 output units where we apply a Softmax activation to K of the units and a Sigmoid activation to the remaining unit. We also use activation noise in the discriminator as suggested in Salimans et al. (2016).

Class conditional image synthesis affords the opportunity to divide up a dataset based on image label. In our final model we divide 1000 ImageNet classes across 100 AC-GAN models. In this section we describe early experiments that highlight the benefit of cutting down the diversity of classes for training an AC-GAN. We employed an ordering of the labels and divided it into contiguous groups of 10. This ordering can be seen in the following section, where we display samples from all 1000 classes. Two aspects of the split merit discussion: the number of classes per split and the intra-split diversity.

We find that training a fixed model on more classes harms the model's ability to produce compelling samples (Figure 9). Performance on larger splits can be improved by giving the model more parameters. However, using a small split is not sufficient to achieve good performance. We were unable to train a GAN (Goodfellow et al., 2014) to converge reliably even for a split size of 1.

Figure 9: Mean pairwise MS-SSIM values for 10 ImageNet classes plotted against the number of ImageNet classes used during training. We fix everything except the number of classes trained on, using values from 10 to 100. We only report the MS-SSIM values for the first 10 classes to keep the scores comparable. MS-SSIM quickly goes above 0.25 (the red line) as the class count increases. These scores were computed using 9 random restarts per class count, using the same number of training steps for each model.

This raises the question of whether it is easier to train a model on a diverse set of classes than on a similar set of classes. We were unable to find conclusive evidence that the selection of classes in a split significantly affects sample quality."}]
BJO-BuT1g
[{"section_index": "0", "section_name": "A LEARNED REPRESENTATION FOR ARTISTIC STYLE", "section_text": "Vincent Dumoulin & Jonathon Shlens & Maniunath Kudlur\nGoogle Brain, Mountain View, CA\nvi.dumoulin@gmail.com, shlens@google.com, keveman@google.com\nThe diversity of painting styles represents a rich visual vocabulary for the con. struction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level fea tures of paintings, if not images in general. In this work we investigate the con struction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network gen eralizes across a diversity of artistic styles by reducing a painting to a point in an. embedding space. Importantly, this model permits a user to explore new paint ing styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of. artistic style."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A pastiche is an artistic work that imitates the style of another one. Computer vision and more recently machine learning have a history of trying to automate pastiche, that is, render an image in the style of another one. This task is called style transfer, and is closely related to the texture synthesis task. While the latter tries to capture the statistical relationship between the pixels of a source image which is assumed to have a stationary distribution at some scale, the former does sc while also attempting to preserve some notion of content.\nOn the computer vision side,Efros & Leung(1999) and Wei & Levoy(2000) attempt to \"grow\" textures one pixel at a time using non-parametric sampling of pixels in an examplar image.Efros & Freeman(2001) and Liang et al.(2001) extend this idea to \"growing\" textures one patch at a time, andEfros & Freeman(2001) uses the approach to implement \"texture transfer\", i.e. transfering. the texture of an object onto another one.Kwatra et al.[(2005) approaches the texture synthesis problem from an energy minimization perspective, progressively refining the texture using an EM-. like algorithm.Hertzmann et al.(2001) introduces the concept of \"image analogies\": given a pair. of \"unfiltered\"' and \"filtered\"' versions of an examplar image, a target image is processed to create an analogous \"filtered\"' result. More recently, Frigo et al.(2016) treats style transfer as a local. texture transfer (using an adaptive patch partition) followed by a global color transfer, and Elad & Milanfar(2016) extends Kwatra's energy-based method into a style transfer algorithm by taking. content similarity into account..\nOn the machine learning side, it has been shown that a trained classifier can be used as a feature extractor to drive texture synthesis and style transfer.Gatys et al.(2015a) uses the VGG-19 network. (Simonyan & Zisserman] 2014) to extract features from a texture image and a synthesized texture. The two sets of features are compared and the synthesized texture is modified by gradient descent. 
Gatys et al. (2015b) extends this idea to style transfer by adding the constraint that the synthesized image also be close to a content image with respect to another set of features extracted by the trained VGG-19 classifier.

While very flexible, this algorithm is expensive to run due to the optimization loop being carried out. Ulyanov et al. (2016a), Li & Wand (2016) and Johnson et al. (2016) tackle this problem by introducing a feedforward style transfer network, which is trained to go from content to pastiche image in one pass. However, in doing so some of the flexibility of the original algorithm is lost: the style transfer network is tied to a single style, which means that separate networks have to be trained for every style being modeled."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "(a) With conditional instance normalization, a single style transfer network can capture 32 styles at the same time, five of which are shown here. All 32 styles in this single model are in the Appendix. Golden Gate Bridge photograph by Rich Niewiroski Jr.

(b) The style representation learned via conditional instance normalization permits the arbitrary combination of artistic styles. Each pastiche in the sequence corresponds to a different step in interpolating between the γ and β values associated with two styles the model was trained on.

Figure 1: Pastiches produced by a style transfer network trained on 32 styles chosen for their variety.

Subsequent work has brought some performance improvements to style transfer networks, e.g. with respect to color preservation (Gatys et al., 2016a) or style transfer quality (Ulyanov et al., 2016b), but to our knowledge the problem of the single-purpose nature of style transfer networks remains untackled.

We think this is an important problem that, if solved, would have both scientific and practical importance. First, style transfer has already found use in mobile applications, for which on-device processing is contingent upon the models having a reasonable memory footprint. More broadly, building a separate network for each style ignores the fact that individual paintings share many common visual elements, and a true model that captures artistic style would be able to exploit and learn from such regularities. Furthermore, the degree to which an artistic styling model might generalize across painting styles would directly measure our ability to build systems that parsimoniously capture the higher level features and statistics of photographs and images (Simoncelli & Olshausen, 2001).

In this work, we show that a simple modification of the style transfer network, namely the introduction of conditional instance normalization, allows it to learn multiple styles (Figure 1a). We demonstrate that this approach is flexible yet comparable to single-purpose style transfer networks, both qualitatively and in terms of convergence properties. This model reduces each style image into a point in an embedding space.
Furthermore, this model provides a generic representation for artistic styles that seems flexible enough to capture new artistic styles much faster than a single-purpose network. Finally, we show that the embedding space representation permits one to arbitrarily combine artistic styles in novel ways not previously observed (Figure 1b).

Style transfer can be defined as finding a pastiche image p whose content is similar to that of a content image c but whose style is similar to that of a style image s. This objective is by nature vaguely defined, because similarity in content and style are themselves vaguely defined.

The neural algorithm of artistic style uses the following definitions:

Two images are similar in content if their high-level features as extracted by a trained classifier are close in Euclidian distance. Two images are similar in style if their low-level features as extracted by a trained classifier share the same statistics or, more concretely, if the difference between the features' Gram matrices has a small Frobenius norm.

The first point is motivated by the empirical observation that high-level features in classifiers tend to correspond to higher levels of abstractions (see Zeiler & Fergus (2014) for visualizations; see Johnson et al. (2016) for style transfer features). The second point is motivated by the observation that the artistic style of a painting may be interpreted as a visual texture (Gatys et al., 2015a). A visual texture is conjectured to be spatially homogenous and consist of repeated structural motifs whose minimal sufficient statistics are captured by lower order statistical measurements (Julesz, 1962; Portilla & Simoncelli, 1999).

In its original formulation, the neural algorithm of artistic style proceeds as follows: starting from some initialization of p (e.g. c, or some random initialization), the algorithm adapts p to minimize the loss function

L(s, c, p) = λ_s L_s(p) + λ_c L_c(p)

where the style and content losses are

L_s(p) = Σ_{i∈S} (1/U_i) ||G(φ_i(p)) - G(φ_i(s))||²_F
L_c(p) = Σ_{j∈C} (1/U_j) ||φ_j(p) - φ_j(c)||²_2

where φ_l(x) are the classifier activations at layer l, U_l is the total number of units at layer l and G(φ_l(x)) is the Gram matrix associated with the layer l activations. In practice, we set λ_c = 1.0 and leave λ_s as a free hyper-parameter.

In order to speed up the procedure outlined above, a feed-forward convolutional network, termed a style transfer network T, is introduced to learn the transformation (Johnson et al., 2016; Li & Wand, 2016; Ulyanov et al., 2016a). It takes as input a content image c and outputs the pastiche image p directly (Figure 2). The network is trained on many content images (Deng et al., 2009) using the same loss function as above, i.e.

L(s, c) = λ_s L_s(T(c)) + λ_c L_c(T(c))

Figure 2: Style transfer network training diagram (Johnson et al., 2016; Ulyanov et al., 2016a). A pastiche image is produced by feeding a content image through the style transfer network. The two images, along with a style image, are passed through a trained classifier, and the resulting intermediate representations are used to compute the content loss L_c and style loss L_s. The parameters of the classifier are kept fixed throughout training.

While feedforward style transfer networks solve the problem of speed at test-time, they also suffer from the fact that the network T is tied to one specific painting style. This means that a separate network T has to be trained for every style to be imitated. The real-world impact of this limitation is that it becomes prohibitive to implement a style transfer application on a memory-limited device, such as a smartphone.
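A minimal sketch of the Gram-based style loss and the content loss defined above; the feature tensors stand in for classifier (e.g. VGG) activations, and the layer choices and shapes are assumptions.

```python
import numpy as np

def gram_matrix(features):
    """features: (H, W, C) activations at one layer; returns the C x C Gram matrix."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat

def style_loss(phi_p, phi_s):
    """Sum over style layers of the squared Frobenius distance between Gram
    matrices, each normalized by the number of units in the layer."""
    loss = 0.0
    for fp, fs in zip(phi_p, phi_s):
        diff = gram_matrix(fp) - gram_matrix(fs)
        loss += np.sum(diff ** 2) / fp.size
    return loss

def content_loss(phi_p, phi_c):
    """Sum over content layers of the squared L2 distance between activations."""
    return sum(np.sum((fp - fc) ** 2) / fp.size for fp, fc in zip(phi_p, phi_c))
```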
Our work stems from the intuition that many styles probably share some degree of computation, and that this sharing is thrown away by training N networks from scratch when building an N-styles style transfer system. For instance, many impressionist paintings share similar paint strokes but differ in the color palette being used. In that case, it seems very wasteful to treat a set of N impressionist paintings as completely separate styles.

To take this into account, we propose to train a single conditional style transfer network T(c, s) for N styles. The conditional network is given both a content image and the identity of the style to apply and produces a pastiche corresponding to that style. While the idea is straightforward on paper, there remains the open question of how conditioning should be done. In exploring this question, we found a very surprising fact about the role of normalization in style transfer networks: to model a style, it is sufficient to specialize scaling and shifting parameters after normalization to each specific style. In other words, all convolutional weights of a style transfer network can be shared across many styles, and it is sufficient to tune parameters for an affine transformation after normalization for each style.

We call this approach conditional instance normalization. The goal of the procedure is to transform a layer's activations x into a normalized activation z specific to painting style s. Building off the instance normalization technique proposed in Ulyanov et al. (2016b), we augment the γ and β parameters so that they're N x C matrices, where N is the number of styles being modeled and C is the number of output feature maps. Conditioning on a style is achieved as follows:

z = γ_s ((x - μ) / σ) + β_s

where μ and σ are x's mean and standard deviation taken across spatial axes and γ_s and β_s are obtained by selecting the row corresponding to s in the γ and β matrices (Figure 3). One added benefit of this approach is that one can stylize a single image into N painting styles with a single feed forward pass of the network with a batch size of N. In contrast, a single-style network requires N feed forward passes to perform N style transfers (Johnson et al., 2016; Li & Wand, 2016; Ulyanov et al., 2016a).

Because conditional instance normalization only acts on the scaling and shifting parameters, training a style transfer network on N styles requires fewer parameters than the naive approach of training N separate networks. In a typical network setup, the model consists of roughly 1.6M parameters, only around 3K (or 0.2%) of which specify individual artistic styles. In fact, because the size of γ and β grows linearly with respect to the number of feature maps in the network, this approach requires O(N x L) parameters, where L is the total number of feature maps in the network.

In addition, as is discussed in subsection 3.4, conditional instance normalization presents the advantage that integrating an (N + 1)th style to the network is cheap because of the very small number of parameters to train.

Figure 3: Conditional instance normalization. The input activation x is normalized across both spatial dimensions and subsequently scaled and shifted using style-dependent parameter vectors γ_s, β_s where s indexes the style label.
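A minimal sketch of conditional instance normalization as defined above; the activation layout, epsilon term and toy parameter values are assumptions for illustration.

```python
import numpy as np

def conditional_instance_norm(x, style_idx, gamma, beta, eps=1e-5):
    """x: (H, W, C) activations; gamma, beta: (N, C) matrices of per-style
    scale and shift parameters. Normalizes across the spatial axes only,
    then applies the affine transform for the selected style."""
    mu = x.mean(axis=(0, 1), keepdims=True)    # per-channel spatial mean
    sigma = x.std(axis=(0, 1), keepdims=True)  # per-channel spatial std
    x_norm = (x - mu) / (sigma + eps)
    return gamma[style_idx] * x_norm + beta[style_idx]

# Example: 32 styles, 64 feature maps; the same activations styled two ways.
N, C = 32, 64
gamma, beta = np.ones((N, C)), np.zeros((N, C))
x = np.random.randn(128, 128, C)
z0 = conditional_instance_norm(x, 0, gamma, beta)
z1 = conditional_instance_norm(x, 1, gamma, beta)
```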
"}, {"section_index": "3", "section_name": "3.1 METHODOLOGY", "section_text": "We used the same network architecture as in Johnson et al. (2016), except for two key details: zero-padding is replaced with mirror-padding, and transposed convolutions (also sometimes called deconvolutions) are replaced with nearest-neighbor upsampling followed by a convolution. The use of mirror-padding avoids border patterns sometimes caused by zero-padding in SAME-padded convolutions, while the replacement for transposed convolutions avoids checkerboard patterning, as discussed in Odena et al. (2016). We find that with these two improvements, training the network no longer requires a total variation loss that was previously employed to remove high frequency noise, as proposed in Johnson et al. (2016).

Our training procedure follows Johnson et al. (2016). Briefly, we employ the ImageNet dataset (Deng et al., 2009) as a corpus of training content images. We train the N-styles network with stochastic gradient descent using the Adam optimizer (Kingma & Ba, 2014). Details of the model architecture are in the Appendix. A complete implementation of the model in TensorFlow (Abadi et al., 2016) as well as a pretrained model are available for download¹. The evaluation images used for this work were resized such that their smaller side has size 512. Their stylized versions were then center-cropped to 512x512 pixels for display.

¹ https://github.com/tensorflow/magenta
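The two architectural substitutions can be illustrated with a toy upsampling block: nearest-neighbor interpolation, mirror padding, then an ordinary stride-1 convolution in place of a transposed convolution. This NumPy sketch is for exposition only (the real model uses a deep-learning framework), and all names are ours.

```python
import numpy as np

def upsample_conv(x, kernel, factor=2):
    # x: (height, width, in_ch); kernel: (k, k, in_ch, out_ch), k odd.
    x = x.repeat(factor, axis=0).repeat(factor, axis=1)  # nearest-neighbor resize
    k = kernel.shape[0]
    pad = k // 2
    x = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")  # mirror padding
    h, w = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((h, w, kernel.shape[3]))
    for i in range(h):
        for j in range(w):
            patch = x[i:i + k, j:j + k, :]
            # Contract the patch against the kernel over its first three axes.
            out[i, j] = np.tensordot(patch, kernel, axes=([0, 1, 2], [0, 1, 2]))
    return out
```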
3.2 TRAINING A SINGLE NETWORK ON N STYLES PRODUCES STYLIZATIONS COMPARABLE TO INDEPENDENTLY-TRAINED MODELS

To get a sense of what is being traded off by folding 10 styles into a single network, we trained a separate, single-style network on each style and compared them to the 10-styles network in terms of style transfer quality and training speed (Figure 5).

As a first test, we trained a 10-styles model on stylistically similar images, namely 10 impressionist paintings from Claude Monet. Figure 4 shows the result of applying the trained network on evaluation images for a subset of the styles, with the full results being displayed in the Appendix. The model captures different color palettes and textures. We emphasize that 99.8% of the parameters are shared across all styles, in contrast to 0.2% of the parameters which are unique to each painting style.

Figure 4: A single style transfer network was trained to capture the style of 10 Monet paintings, five of which are shown here. All 10 styles in this single model are in the Appendix. Golden Gate Bridge photograph by Rich Niewiroski Jr.

The left column compares the learning curves for style and content losses between the single-style networks and the 10-styles network. The losses were averaged over 32 random batches of content images. By visual inspection, we observe that the 10-styles network converges as quickly as the single-style networks in terms of style loss, but lags slightly behind in terms of content loss.

In order to quantify this observation, we compare the final losses for 10-styles and single-style models (center column). The 10-styles network's content loss is around 8.7 ± 3.9% higher than its single-style counterparts, while the difference in style losses (8.9 ± 16.5% lower) is insignificant. While the N-styles network suffers from a slight decrease in content loss convergence speed, this may not be a fair comparison, given that it takes N times more parameter updates to train N single-style networks separately than to train them with an N-styles network.

The right column shows a comparison between the pastiches produced by the 10-styles network and the ones produced by the single-style networks. We see that both results are qualitatively similar.

Figure 5: The N-styles model exhibits learning dynamics comparable to individual models. (Left column) The N-styles model converges slightly slower in terms of content loss (top) and as fast in terms of style loss (bottom) than individual models. Training on a single Monet painting is represented by two curves with the same color. The dashed curve represents the N-styles model, and the full curves represent individual models. Emphasis has been added on the styles for Vetheuil (1902) (teal) and Water Lilies (purple) for visualization purposes; remaining colors correspond to other Monet paintings (see Appendix). (Center column) The N-styles model reaches a slightly higher final content loss than (top, 8.7 ± 3.9% increase) and a final style loss comparable to (bottom, 8.9 ± 16.5% decrease) individual models. (Right column) Pastiches produced by the N-styles network are qualitatively comparable to those produced by individual networks.

We evaluated the flexibility of the N-styles model by training a style transfer network on 32 works of art chosen for their diversity. Figure 1a shows the result of applying the trained network on evaluation images for a subset of the styles. Once again, the full results are displayed in the Appendix. The model appears to be capable of modeling all 32 styles in spite of the tremendous variation in color palette and the spatial scale of the painting styles.

"}, {"section_index": "4", "section_name": "3.4 THE TRAINED NETWORK GENERALIZES ACROSS PAINTING STYLES", "section_text": "Since all weights in the transformer network are shared between styles, one way to incorporate a new style into a trained network is to keep the trained weights fixed and learn a new set of γ and β parameters. To test the efficiency of this approach, we used it to incrementally incorporate Monet's Plum Trees in Blossom painting into the network trained on 32 varied styles. Figure 6 shows that doing so is much faster than training a new network from scratch (left) while yielding comparable pastiches: even after eight times fewer parameter updates than its single-style counterpart, the fine-tuned model produces comparable pastiches (right).

Figure 6: The trained network is efficient at learning new styles. (Left column) Learning γ and β from a trained style transfer network converges much faster than training a model from scratch. (Right) Learning γ and β for 5,000 steps from a trained style transfer network produces pastiches comparable to those of a single network trained from scratch for 40,000 steps. Conversely, 5,000 steps of training from scratch leads to a poor pastiche.
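A sketch of the incremental procedure described in this section: the convolutional weights stay frozen and only a freshly appended row of the γ and β matrices is trained on the new style. The initialization values below are illustrative assumptions, not the paper's recipe.

```python
import numpy as np

def add_style(gamma, beta, rng=np.random.default_rng(0)):
    # gamma, beta: (N, C) matrices of a trained N-styles network.
    # Returns (N+1, C) matrices; only the new row would receive gradients.
    C = gamma.shape[1]
    new_gamma = rng.normal(1.0, 0.01, size=(1, C))  # near-identity scaling
    new_beta = rng.normal(0.0, 0.01, size=(1, C))   # near-zero shift
    return np.vstack([gamma, new_gamma]), np.vstack([beta, new_beta])
```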
² For instance, https://github.com/jcjohnson/neural-style

The conditional instance normalization approach raises some interesting questions about style representation. In learning a different set of γ and β parameters for every style, we are in some sense learning an embedding of styles.

Previous work suggested that cleverly balancing optimization strategies offers an opportunity to blend painting styles². To probe the utility of this embedding, we tried convex combinations of the γ and β values to blend very distinct painting styles (Figure 1b; Figure 7, left column). Employing a single convex combination produces a smooth transition from one style to the other. Suppose (γ₁, β₁) and (γ₂, β₂) are the parameters corresponding to two different styles. We use γ = α γ₁ + (1 − α) γ₂ and β = α β₁ + (1 − α) β₂ to stylize an image. Employing convex combinations may be extended to an arbitrary number of styles³. Figure 7 (right column) shows the style loss from the transformer network for a given source image, with respect to the Bicentennial Print and Head of a Clown paintings, as we vary α from 0 to 1. As α increases, the style loss with respect to Bicentennial Print increases, which explains the smooth fading out of that style's artifacts in the transformed image.

Figure 7: The N-styles network can arbitrarily combine artistic styles. (Left) Combining four styles, shown in the corners. Each pastiche corresponds to a different convex combination of the four styles' γ and β values. (Right) As we transition from one style to another (Bicentennial Print and Head of a Clown in this case), the style losses vary monotonically.
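The blending rule is a one-liner over the style matrices. The sketch below generalizes it to arbitrarily many styles; normalizing the weights to sum to one is our way of enforcing the convex combination.

```python
import numpy as np

def blend_styles(gamma, beta, weights):
    # gamma, beta: (N, C) style matrices; weights: one coefficient per style,
    # e.g. [alpha, 1 - alpha] to interpolate between two styles.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()               # enforce a convex combination
    return w @ gamma, w @ beta    # blended (C,) parameters
```

Feeding the blended (γ, β) into the conditional instance normalization layers, in place of a single style's row, produces the interpolated pastiches of Figure 7.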
"}, {"section_index": "5", "section_name": "4 DISCUSSION", "section_text": "It seems surprising that such a small proportion of the network's parameters can have such an impact on the overall process of style transfer. A similar intuition has been observed in auto-regressive models of images (van den Oord et al., 2016b) and audio (van den Oord et al., 2016a), where the conditioning process is mediated by adjusting the biases for subsequent samples from the model. That said, in the case of art stylization posed as a feedforward network, it could be that the specific network architecture is unable to take full advantage of its capacity. We see evidence for this behavior in that pruning the architecture leads to qualitatively similar results. Another interpretation could be that the convolutional weights of the style transfer network encode transformations that represent "elements of style". The scaling and shifting factors would then provide a way for each style to inhibit or enhance the expression of various elements of style to form a global identity of style. While this work does not attempt to verify this hypothesis, we think that this would constitute a very promising direction of research in understanding the computation behind style transfer networks, as well as the representation of images in general.

Concurrent to this work, Gatys et al. (2016b) demonstrated exciting new methods for revising the loss to selectively adjust the spatial scale, color information and spatial localization of the artistic style information. These methods are complementary to the results in this paper and present an interesting direction for exploring how spatial and color information uniquely factor into artistic style representation.

The question of how predictive each style image is of its corresponding style representation is also of great interest. If it is the case that the style representation can easily be predicted from a style image, one could imagine building a transformer network which skips learning an individual conditional embedding and instead learns to produce a pastiche directly from a style and a content image, much like in the original neural algorithm of artistic style, but without any optimization loop at test time.

³ Please see the code repository for a real-time, interactive demonstration. A screen capture is available at https://www.youtube.com/watch?v=6zHiARZmiUI

Finally, the learned style representation opens the door to generative models of style: by modeling enough paintings of a given artistic movement (e.g. impressionism), one could build a collection of style embeddings upon which a generative model could be trained. At test time, a style representation would be sampled from the generative model and used in conjunction with the style transfer network to produce a random pastiche of that artistic movement.

In summary, we demonstrated that conditional instance normalization constitutes a simple, efficient and scalable modification of style transfer networks that allows them to model multiple styles at the same time. A practical consequence of this approach is that a new painting style may be transmitted to and stored on a mobile device with a small number of parameters. We showed that despite its simplicity, the method is flexible enough to capture very different styles while having very little impact on training time and final performance of the trained network. Finally, we showed that the learned representation of style is useful in arbitrarily combining artistic styles. This work suggests the existence of a learned representation for artistic styles whose vocabulary is flexible enough to capture a diversity of the painted world.

"}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Fred Bertsch, Douglas Eck, Cinjon Resnick and the rest of the Google Magenta team for their feedback; Peyman Milanfar, Michael Elad, Feng Yang, Jon Barron, Bhavik Singh, Jennifer Daniel as well as the Google Brain team for their crucial suggestions and advice; an anonymous reviewer for helpful suggestions about applying this model in a mobile domain. Finally, we would like to thank the Google Cultural Institute, whose curated collection of art photographs was very helpful in finding exciting style images to train on.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 341-346. ACM, 2001.

Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 262-270, 2015a.

Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015b.

Leon A Gatys, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman. Preserving color in neural artistic style transfer.
arXiv preprint arXiv:1606.05897, 2016a.

Aaron Hertzmann, Charles E Jacobs, Nuria Oliver, Brian Curless, and David H Salesin. Image analogies. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 327-340. ACM, 2001.

Bela Julesz. Visual pattern discrimination. IRE Trans. Info Theory, 8:84-92, 1962.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Real-time texture synthesis by patch-based sampling. ACM Transactions on Graphics (ToG), 20(3):127-150, 2001.

Augustus Odena, Christopher Olah, and Vincent Dumoulin. Avoiding checkerboard artifacts in neural networks. Distill, 2016.

Javier Portilla and Eero Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40:49-71, 1999.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.

Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra. Texture optimization for example-based synthesis. ACM Transactions on Graphics (ToG), 24(3):795-802, 2005.

Eero Simoncelli and Bruno Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193-1216, 2001.

Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417, 2016a.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016b.

Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. CoRR, abs/1606.05328, 2016b. URL http://arxiv.org/abs/1606.05328

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818-833. Springer, 2014.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. CoRR, abs/1609.03499, 2016a. URL http://arxiv.org/abs/1609.03499

HYPERPARAMETERS

Operation | Kernel size | Stride | Feature maps | Padding | Nonlinearity
Network (input: 256 x 256 x 3)
Convolution | 9 | 1 | 32 | SAME | ReLU
Convolution | 3 | 2 | 64 | SAME | ReLU
Convolution | 3 | 2 | 128 | SAME | ReLU
Residual block | | | 128 | |
Residual block | | | 128 | |
Residual block | | | 128 | |
Residual block | | | 128 | |
Residual block | | | 128 | |
Upsampling | | | 64 | |
Upsampling | | | 32 | |
Convolution | 9 | 1 | 3 | SAME | Sigmoid

Residual block (C feature maps):
Convolution | 3 | 1 | C | SAME | ReLU
Convolution | 3 | 1 | C | SAME | Linear
Add the input and the output.

Upsampling (C feature maps):
Nearest-neighbor interpolation, factor 2
Convolution | 3 | 1 | C | SAME | ReLU

Padding mode: REFLECT
Normalization: conditional instance normalization after every convolution
Optimizer: Adam (Kingma & Ba, 2014) (α = 0.001, β1 = 0.9, β2 = 0.999)
Parameter updates: 40,000
Batch size: 16
Weight initialization: isotropic Gaussian (μ = 0, σ = 0.01)

Table 1: Style transfer network hyperparameters.

Claude Monet, Grainstacks at Giverny; the Evening Sun (1888/1889)

Claude Monet,
Plum Trees in Blossom (1879)

Claude Monet, Poppy Field (1873)

Claude Monet, Rouen Cathedral, West Facade (1894)

Claude Monet, The Road to Vetheuil (1879)

Claude Monet, Sunrise (Marine) (1873)

Claude Monet, Three Fishing Boats (1886)

Claude Monet, Vetheuil (1879)

Claude Monet, Vetheuil (1902)

Claude Monet, Water Lilies (ca. 1914-1917)

Roy Lichtenstein, Bicentennial Print (1975)

Ernst Ludwig Kirchner, Boy with Sweets (1918)

Paul Signac, Cassis, Cap Lombard, Opus 196 (1889)

Paul Klee, Colors from a Distance (1932)

Frederic Edwin Church, Cotopaxi (1855)

Jamini Roy, Crucifixion

Henri de Toulouse-Lautrec, Divan Japonais (1893)

Egon Schiele, Edith with Striped Dress, Sitting (1915)

Georges Rouault, Head of a Clown (ca. 1907-1908)

William Hoare, Henry Hoare "The Magnificent" of Stourhead (about 1750-1760)

Giorgio de Chirico, Horses on the seashore (1927/1928)

Vincent van Gogh, Landscape at Saint-Remy (Enclosed Field with Peasant) (1889)

Nicolas Poussin, Landscape with a Calm (1650-1651)

Bernardino Fungai, Madonna and Child with Two Hermit Saints (early 1480s)

Max Hermann Maxy, Portrait of a Friend (1926)

Juan Gris, Portrait of Pablo Picasso (1912)

Severini Gino, Ritmo plastico del 14 luglio (1913)

Richard Diebenkorn, Seawall (1957)

Alice Bailly, Self-Portrait (1917)

Grayson Perry, The Annunciation of the Virgin Deal (2012)

William Glackens, The Green Boathouse (ca. 1922)

Edvard Munch, The Scream (1910)

Vincent van Gogh, The Starry Night (1889)

Pieter Bruegel the Elder, The Tower of Babel (1563)

Wolfgang Lettl, The Trial (1981)

Douglas Coupland, Thomson No. 5 (Yellow Sunset) (2011)

John Ruskin, Trees in a Lane (1847)

Giuseppe Cades, Tullia about to Ride over the Body of Her Father in Her Chariot (about 1770-1775)

Berthe Morisot, Under the Orange Tree (1889)

Giulio Romano (Giulio Pippi), Victory, Janus, Chronos and Gaea (about 1532-1534)

Wassily Kandinsky, White Zig Zags (1922)"}]
ryCcJaqgl
[{"section_index": "0", "section_name": "TRENET: HYBRID NEURAL NETWORKS FOR LEARNING THE LOCAL TREND IN TIME SERIES", "section_text": "Tao Lin* Tian Guo* & Karl Aberer

School of Computer and Communication Sciences, Ecole polytechnique federale de Lausanne, Switzerland

{tao.lin, tian.guo, karl.aberer}@epfl.ch

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Local trends of time series characterize the intermediate upward and downward patterns of time series. Learning and forecasting the local trend in time series data play an important role in many real applications, ranging from investing in the stock market, resource allocation in data centers and load scheduling in smart grids. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that predicts the local trend of time series based on local and global contextual features. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of time series. Meanwhile, considering the long-range dependencies existing in the sequence of historical local trends, TreNet uses a long short-term memory recurrent neural network (LSTM) to capture such dependency. Furthermore, for predicting the local trend, a feature fusion layer is designed in TreNet to learn a joint representation from the features captured by CNN and LSTM. Our proposed TreNet demonstrates its effectiveness by outperforming conventional CNN, LSTM, the HMM method and various kernel based baselines on real datasets.

"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Time series, which is a sequence of data points in time order, is being generated in a wide spectrum of domains, such as the daily fluctuation of the stock market, power consumption records of households, performance monitoring data of clusters in data centres, and so on. In many applications, users are interested in understanding the evolving trend in time series and forecasting the trend, since the conventional prediction of specific data points could deliver very little information about the semantics and dynamics of the underlying process generating the time series. For instance, the time series in Figure 1 are from the household power consumption dataset¹. Figure 1(a) shows some raw data points of the time series. Though points A and B have approximately the same value, the underlying system is likely to be in two different states when it outputs A and B, because A is in an upward trend while B is in a downward trend (Wang et al., 2011; Matsubara et al., 2014). On the other hand, even when two points with a similar value are both in an upward trend, e.g., points A and C, the different slopes and durations of the trends where A and C locate could also indicate different states of the underlying process.

Particularly, in this paper we are interested in the local trend of time series, which measures the intermediate local behaviour, i.e., the upward or downward pattern of time series, characterized by its slope and duration (Wang et al., 2011). For instance, in Figure 1(b) the linear segments over raw data points of time series represent the local trends extracted from a real household power consumption time series. For the ease of presentation, we will use the terms trend and local trend interchangeably in the rest of the paper. Learning and forecasting local trends are quite useful in a wide range of applications. For instance, in the stock market, due to its high volatility and noisy environment, predicting stock price trends is in reality preferred over the prediction of the stock market's absolute values (Atsalakis & Valavanis, 2009). Predicting the local trend of stock price time series empowers traders to design profitable trading strategies (Chang et al., 2012b; Atsalakis & Valavanis, 2009). In the smart energy domain, knowing the predictive local trend of power consumption time series enables energy providers to schedule power supply and maximize energy utilization (Zhao & Magoules, 2012).

¹ https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
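As a concrete illustration of the (duration, slope) representation of a local trend like those in Figure 1(b), one can fit a line to a single segment. The minimal NumPy sketch below is our own; the angle-in-degrees convention matches the bounded slope representation adopted later in the experiments.

```python
import numpy as np

def local_trend(segment):
    # segment: the data points of one linear segment (at least two points).
    t = np.arange(len(segment))
    slope, _ = np.polyfit(t, segment, deg=1)  # least-squares line fit
    duration = len(segment)                   # number of data points covered
    angle = np.degrees(np.arctan(slope))      # slope as an angle in [-90, 90]
    return duration, angle
```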
For instance, in the stock market, due to its high volatility and noisy environment in reality predicting stock price trends is preferred over the prediction of the stock market absolut values (Atsalakis & Valavanis]2009). Predicting the local trend of stock price time series empowers\nhttps://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumptior"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Meanwhile, in recent years neural networks have shown the dramatical power in a wide spectrum o domains, e.g., natural language processing, computer vision, speech recognition, time series anal ysis, etc.(Wang et al.f2016b] Sutskever et al.]2014] Yang et al.f2015] Lipton et al.[2015). Fo time series data, two mainstream architectures, convolutional neural network (CNN) and recurren neural network (RNN) have been exploited in different time series related tasks, e.g., RNN in time series classification (Lipton et al.|2015) and CNN in activity recognition and snippet learning (Lit et al.[2015,Yang et al.]2015). RNN is powerful in discovering the dependency in sequence dat (Jain et al.J|2014j|Graves||2012) and particularly the Long Short-Term Memory (LSTM) RNN works well on sequence data with long-term dependencies (Chung et al.l2014f|Hochreiter & Schmidhuber 1997) due to the internal memory mechanism. CNN excels in exacting effective representation of local salience from raw data of time series by enforcing a local connectivity between neurons. (Yang et al. 2015 Hammerla et al.2016).\nDuration 245 250 Local Data 245 245 yanue yanue lne 240 240 240 Trend 3 Predict the 235 235 Trend 2 trend 235 Slope A local trend 230Trend 1 from here 230 230 225 0 10 20 30 4050 60 708090 0 100 200 300 400 0 20 40 60 80 100120140 Time Time Time (a) (b) (c)\nFigure 1: (a) Time series of household power consumption. (b) Local trends in time series. (c) Effect of local raw data on the trend forecasting.\nTo this end, we propose a end-to-end hybrid neural network, referred to as TreNet. In particular. it consists of a LSTM recurrent neural network to capture the long dependency in historical local. trends. a convolutional neural network to extract local features from local raw data of time series and a feature fusion layer to learn joint representation to take advantage of both features drawn from. CNN and LSTM. Such joint representation is used for the local trend forecasting. The experimental. analysis on real datasets demonstrates that TreNet outperforms individual recurrent neural network,. convolutional neural network and a variety of baselines in term of local trend prediction accuracy..\ntraders to design profitable trading strategies (Chang et al.]2012bf Atsalakis & Valavanis2009) In the smart energy domain, knowing the predictive local trend of power consumption time se- ries enables energy providers to schedule power supply and maximize energy utilization (Zhao &. Magoules2012).\nDuration Local Data 245 250 245 245 yane lne lne 240 240 240 Trend 3Predict the. oB 235 235 Trend 2 trend 235 Slope A local trend. 230Trend 1 from here 230 230 225 0102030405060708090 100 200 300 400 0 0 20 40 60 80 100 120140 Time Time Time (a) (b) (c)\nn this paper, we focus on learning and forecasting the local trends in time series via neural networks. This involves learning different aspects of the data. On one hand, the sequence of historical local. rends describes the long-term contextual information of time series and thus naturally affects the evolution of the following local trend. 
On the other hand, the recent raw data points of time series (Wang et al., 2011; Batal et al., 2012), which represent the local variation and behaviour of time series, affect the evolving of the following trend as well and have particular predictive power for abruptly changing local trends (Wang et al., 2011). For instance, in Figure 1(c), trends 1, 2 and 3 present a continuous upward pattern. Then, when we aim at predicting the subsequent trend of the time series at the end of the third local trend, the previous three successive upward trends outline a probable increasing trend afterwards. However, the local data around the end of the third trend, e.g., the data points in the red circle, indicate that the time series could stabilize and even decrease. The data points after the third trend indeed present a decreasing trend, indicated by the red dotted segment. In this case, the subsequent trend has more dependency on the local data points. Therefore, it is highly desired to develop a systematic way to model such various hidden and complementary dependencies in time series for the local trend forecasting problem.

To this end, we propose an end-to-end hybrid neural network, referred to as TreNet. In particular, it consists of a LSTM recurrent neural network to capture the long dependency in historical local trends, a convolutional neural network to extract local features from local raw data of time series, and a feature fusion layer to learn a joint representation to take advantage of both features drawn from CNN and LSTM. Such joint representation is used for the local trend forecasting. The experimental analysis on real datasets demonstrates that TreNet outperforms individual recurrent neural networks, convolutional neural networks and a variety of baselines in terms of local trend prediction accuracy.

The rest of the paper is organized as follows. Section 2 presents related work, while Section 3 defines the problem to be solved and introduces the notations. In Section 4, we present the proposed TreNet. Section 5 demonstrates the performance of our method and baselines on real datasets. Finally, the paper is concluded in Section 6. Refer to Section 7 and Section 8 for more experiment results and discussion.

Traditional learning approaches over local trends of time series mainly make use of Hidden Markov Models (HMMs) (Wang et al., 2011; Matsubara et al., 2014). HMMs maintain short-term state dependences, i.e., the memoryless Markov property, and a predefined number of states, which requires significant task specific knowledge. RNNs instead use high dimensional, distributed hidden states that can take into account long-term dependencies in sequence data. Previous time series segmentation approaches (Keogh et al., 2001; Matsubara et al., 2014; Yuan, 2015) focus on achieving a meaningful segmentation and finding patterns, rather than modeling the relation between segments, and are therefore not suitable for forecasting local trends. Multi-step ahead prediction is another way to realize local trend prediction by fitting the predicted values to estimate the local trend. However, multi-step ahead prediction is a non-trivial problem itself (Chang et al., 2012a). In this paper, we concentrate on directly learning local trends through neural networks.

RNNs have recently shown promising results in a variety of applications, especially when there exist sequential dependencies in data (Lyu & Zhu, 2014; Chung et al., 2014; Sutskever et al., 2014). Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997; Lyu & Zhu, 2014; Chung et al., 2014), a class of recurrent neural networks with sophisticated recurrent hidden and gated units, is particularly successful and popular due to its ability to learn hidden long-term sequential dependencies. (Lipton et al., 2015) uses LSTMs to recognize patterns in multivariate time series, especially for the multi-label classification of diagnoses. (Chauhan & Vig, 2015; Malhotra et al., 2015) evaluate the ability of LSTMs to detect anomalies in ECG time series. Bidirectional LSTM (Graves & Schmidhuber, 2005) is usually intended for speech processing rather than time series forecasting problems.
Our paper focuses on using LSTM to capture the dependency in the sequence of historical local trends; meanwhile, the hidden states in the LSTM are further used to learn joint feature representations for the local trend forecasting.

CNN is often used to learn effective representations of local salience from raw data (Vinyals et al., 2015; Donahue et al., 2015; Karpathy et al., 2014). (Hammerla et al., 2016; Yang et al., 2015; Lea et al., 2016) make use of CNNs to extract features from raw time series data for activity/action recognition. (Liu et al., 2015) focuses on the prediction of periodical time series values by using CNN and embedding time series with the potential neighbors in the temporal domain. Our proposed TreNet combines the strengths of both LSTM and CNN to form a novel and unified neural network architecture for local trend forecasting.

Hybrid neural networks, which combine the strengths of various neural networks, are receiving increasing interest in the computer vision domain, such as image captioning (Mao et al., 2014; Vinyals et al., 2015; Donahue et al., 2015), image classification (Wang et al., 2016a), protein structure prediction (Li & Yu, 2016), action recognition (Ballas et al., 2015; Donahue et al., 2015) and so on. But efficient exploitation of such hybrid architectures has not been well studied for time series data, especially the trend forecasting problem. (Li & Yu, 2016; Ballas et al., 2015) utilize CNNs over images in cascade with RNNs in order to capture the temporal features for classification. (Bashivan et al., 2015) transforms EEG data into a sequence of topology-preserving multi-spectral images and then trains a cascaded convolutional-recurrent network over such images for EEG classification. (Wang et al., 2016a; Mao et al., 2014) propose the CNN-RNN framework to learn a shared representation for image captioning and classification problems. In our proposed TreNet, LSTM and CNN first respectively learn the trend evolution and the local raw data of time series, and then TreNet fuses the features captured by LSTM and CNN to predict the trend.

"}, {"section_index": "3", "section_name": "PROBLEM FORMULATION", "section_text": "We define a time series as a sequence of data points X = {x1, ..., xT}, where each data point xt is real-valued and the subscript t represents the time instant. The corresponding local trend sequence of X is a series of piecewise linear representations of X, denoted by T = {(lk, sk)}. Each element of T, e.g., (lk, sk), describes a linear function over a certain subsequence (or segment) of X and corresponds to a local trend in X. Such local trends in T are extracted from X by time series segmentation and fitting a linear function w.r.t. time t over each segment (Keogh et al., 2001; Wang et al., 2011). lk and sk respectively represent the duration and slope of trend k. lk is measured in terms of the time range covered by trend k. Local trends in T are time ordered and non-overlapping. The durations of all the local trends in T satisfy \sum_k l_k = T. In addition, a local trend sequence w.r.t. a certain time instant t, denoted by T(t), is the sequence of local trends of X occurring before t.

Meanwhile, as we discussed in Section 1, the local raw data of time series affects the varying of the trend as well, and thus we define the local data w.r.t. a certain time instant t as a sequence of data points in a window of size w, denoted by L(t) = {x_{t-w}, ..., x_t}.

At a certain time t, trend forecasting is meant to predict the duration and slope of the following trend, based on a given sequence of historical trends T(t) and local data set L(t). The predicted duration and slope at time t are denoted by l̂_t and ŝ_t. Our proposed TreNet can be trained for predicting either l̂_t or ŝ_t. For simplicity, we use ŷ_t to represent the predicted value of TreNet throughout the paper.

Therefore, given the training dataset D = X ∪ T, we aim to propose a neural network based approach to learn a function ŷ_t = f(T(t), L(t)) for the trend forecasting. In this paper, we focus on univariate time series. The proposed method can be naturally generalized to multivariate time series as well, by augmenting the input to the neural network.
Refer to Section 8 for more discussion."}, {"section_index": "4", "section_name": "4 HYBRID NEURAL NETWORKS FOR TREND LEARNING AND FORECASTING", "section_text": "In this section, we first present an overview of the proposed TreNet for trend forecasting. Then we detail the components of TreNet.

"}, {"section_index": "5", "section_name": "Overview.", "section_text": "The idea of our TreNet is to combine CNN with LSTM to utilize their representation abilities on different aspects of the training data D (D = X ∪ T), and then to learn a joint feature for the trend prediction. Technically, TreNet is designed to learn a predictive function ŷ_t = f(R(T(t)), C(L(t))). R(T(t)) is derived by training the LSTM over the sequence T to capture the dependency in the trend evolving, while C(L(t)) corresponds to local features extracted by the CNN from L(t). The long-term and local features captured by LSTM and CNN, i.e., R(T(t)) and C(L(t)), convey complementary information pertaining to the trend varying. Therefore, the feature fusion layer is supposed to take advantage of both features to produce a fused representation for improved performance. Finally, the trend prediction is realized by the function f(·, ·), which corresponds to the feature fusion and output layers in Figure 2.

Figure 2: Illustration of the hybrid architecture of TreNet. (best viewed in colour)

Learning the dependency in the trend sequence. During the training phase, the duration l_k and slope s_k of each local trend k in sequence T are fed into the LSTM layer of TreNet. Each j-th neuron in the LSTM layer maintains a memory c_k^j at step k. The output h_k^j, or the activation of this neuron, is then expressed as (Hochreiter & Schmidhuber, 1997):

h_k^j = o_k^j \tanh(c_k^j)

where the output gate is

o_k^j = \sigma\big(W_o [l_k, s_k] + U_o h_{k-1} + V_o c_k\big)^j

and the memory cell c_k^j is updated by partially forgetting the existing memory and adding new memory content \tilde{c}_k^j:

c_k^j = f_k^j c_{k-1}^j + i_k^j \tilde{c}_k^j, \quad \tilde{c}_k^j = \tanh\big(W_c [l_k, s_k] + U_c h_{k-1}\big)^j

The extent to which the existing memory is forgotten is modulated by a forget gate f_k^j, and the degree to which the new memory content is added to the memory cell is modulated by an input gate i_k^j. Such gates are computed by:

f_k^j = \sigma\big(W_f [l_k, s_k] + U_f h_{k-1} + V_f c_{k-1}\big)^j

i_k^j = \sigma\big(W_i [l_k, s_k] + U_i h_{k-1} + V_i c_{k-1}\big)^j

Learning features from the local raw data of time series. The convolutional layers of the CNN compute

v_{i,m}^j = \phi\Big(\sum_{z=m-d_i/2}^{m+d_i/2} W_{i,z}^j \, a_z\Big), \quad m = 1, \ldots

where v_{i,m}^j is the activation of the j-th filter of layer i at position m of the input signal, and φ is the Leaky Rectified Linear Unit, which is shown to perform better (Xu et al., 2015). Then max-pooling is performed over the v_{i,m}^j of each filter. Finally, the output of the CNN in TreNet is the concatenation of the max-pooling of each filter on the last layer H, namely:

C(L(t)) = \Big[\ldots, \max_{1 \le z \le q} v_{H,z}^j, \ldots\Big]

where q is the pooling size.

The feature fusion layer combines the representations R(T(t)) and C(L(t)) to form a joint feature. Then, such joint feature is fed to the output layer to provide the trend prediction. Particularly, we first map R(T(t)) and C(L(t)) to the same feature space and add them together to obtain the activation of the feature fusion layer (Mao et al., 2014). The output layer is a fully-connected layer following the feature fusion layer. Mathematically, the prediction of TreNet is expressed as:

\hat{y}_t = f(R(T(t)), C(L(t))) = W_o \cdot \phi\big(W_r \cdot R(T(t)) + W_c \cdot C(L(t))\big) + b_o

where φ(·) is the element-wise leaky ReLU activation function and + denotes element-wise addition. W_o and b_o are the weights and bias of the output layer.

To train TreNet, we adopt the squared error function plus a regularization term:

J(W, b; T, X) = \frac{1}{|T|} \sum_{k=1}^{|T|} (\hat{y}_k - y_k)^2 + \lambda \|W\|^2

where W and b represent the weight and bias parameters in TreNet, λ is a hyperparameter for the regularization term and y_k is the true value of the trend slope or duration. The cost function is differentiable and the architecture of TreNet allows the gradients from the loss function (9) to be backpropagated to both the LSTM and CNN parts. TreNet can be trained respectively for the slope and duration of local trends using T and X. When performing forecasting, T(t) and L(t) are fed to TreNet and the prediction value ŷ_k could be either the slope or duration, depending on the training target.
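A schematic sketch of the fusion and training objective above, in plain NumPy. Shapes and names are our assumptions, and the LSTM and CNN feature extractors are treated as black boxes producing R(T(t)) and C(L(t)).

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def trenet_output(r_feat, c_feat, Wr, Wc, Wo, bo):
    # r_feat: R(T(t)), the LSTM representation of the trend sequence.
    # c_feat: C(L(t)), the concatenated max-pooled CNN features.
    fused = leaky_relu(Wr @ r_feat + Wc @ c_feat)  # feature fusion layer
    return Wo @ fused + bo                         # linear output layer

def objective(predictions, targets, weights, lam):
    # Squared error averaged over the |T| training trends plus an L2 penalty.
    err = np.mean((np.asarray(predictions) - np.asarray(targets)) ** 2)
    return err + lam * sum(np.sum(W ** 2) for W in weights)
```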
"}, {"section_index": "6", "section_name": "5 EXPERIMENTAL ANALYSIS", "section_text": "In this section, we conduct extensive experiments to demonstrate the prediction performance of TreNet by comparing it to a variety of baselines. Due to the page limit, refer to Section 7 for more experiment results.

Dataset: We test our method and baselines on three real time series datasets:

Daily Household Power Consumption (HousePC). This dataset² contains measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Different electrical quantities and some sub-metering values are available. We use the voltage time series throughout the experiments.

Gas Sensor (GasSensor). This dataset³ contains the recordings of chemical sensors exposed to dynamic gas mixtures at varying concentrations. The measurement was constructed by the continuous acquisition of the sensor array signals for a duration of about 12 hours without interruption. We mainly use the gas mixture time series regarding Ethylene and Methane in air.
Stock Transaction (Stock): This dataset is extracted from Yahoo Finance and contains the daily stock transaction information in the New York Stock Exchange from 1950-10 to 2016-4.

² https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
³ https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures

All datasets are preprocessed by (Keogh et al., 2001) to extract local trends. Alternative time series segmentation and local trend extraction approaches can be used as well. We choose (Keogh et al., 2001) here due to its high efficiency. Totally, we obtain 42591, 4720 and 1316 local trends respectively from the above datasets. For the ease of experimental result interpretation, the slope of extracted local trends is represented by the angle of the corresponding linear function, and thus lies in a bounded value range [-90, 90]. The duration of local trends is measured by the number of data points within the local trend. Then, the obtained trend sequences and the set of local data are split into training (80%), validation (10%) and test (10%) datasets.

Baselines: We compare TreNet with the following six baselines.

CNN. This baseline method predicts the trend by only using CNN over the set of local raw data of time series to learn features for the forecasting. The size of local data is set at w, as defined in Section 3.

LSTM. This method uses LSTM to learn dependencies in the trend sequence T and predicts the trend only using the trained LSTM.

Support Vector Regression (SVR). A family of support vector regression based approaches with different kernel methods is used for the trend forecasting. We consider three commonly used kernels (Liu et al., 2015), i.e., the Radial Basis kernel (SVRBF), Polynomial kernel (SVPOLY) and Sigmoid kernel (SVSIG). The trend sequence and the corresponding set of local time series data are concatenated as the input features to such SVR approaches.

Pattern-based Hidden Markov Model (pHMM). (Wang et al., 2011) proposed a pattern-based hidden Markov model (HMM), which segments the time series and models the dependency in segments via an HMM. The derived HMM model is used to predict the state of the time series and then to estimate the trend based on the state.

Naive. This is the naive approach which takes the duration and slope of the last trend as the prediction for the next one.

ConvNet+LSTM (CLSTM). It is based on the cascade structure of ConvNet and LSTM in (Bashivan et al., 2015), which feeds the features learnt by ConvNet over time series to a LSTM and obtains the prediction from the LSTM.

Evaluation metric: We evaluate the predictive performance of TreNet and baselines in terms of Root Mean Square Error (RMSE). The lower the RMSE, the more accurate the predictions.

Training: The training procedure of TreNet and baselines in our paper follows the schema below.

The CNN and LSTM components in TreNet share the same network structure (e.g., number of layers, neurons in each layer) as the CNN and LSTM baselines. The CNN has two stacked convolutional layers, which have 32 filters of size 2 and 4. The number of memory cells in the LSTM is 600. For the baseline CNN and LSTM, we tune the learning rate for each approach from {10^-1, 10^-2, 10^-3, 10^-4, 10^-5} (Sutskever et al., 2013), in order to achieve the least prediction errors, and then fix the learning rate. For TreNet, in addition to the learning rate, the number of neurons in the feature fusion layer is chosen from the range {300, 600, 900, 1200} to achieve the best performance. We use dropout and L2 regularization to control the capacity of the neural networks to prevent overfitting, and set the values to 0.5 and 5 × 10^-4 respectively for all datasets (Mao et al., 2014). The Adam optimizer (Kingma & Ba, 2014) is chosen to learn the weights in the neural networks.

Regarding the SVR based approaches, we carefully tune the parameters c (error penalty), d (degree of kernel function), and γ (kernel coefficient) for the kernels. Each parameter is selected from the sets c ∈ {10^-5, 10^-4, ..., 1, ..., 10^4, 10^5}, d ∈ {1, 2, 3}, γ ∈ {10^-5, 10^-4, ..., 1, ..., 10^5} respectively. We iterate through the candidate values of each combination of c, d and γ to train our model, keep the parameters that generate the lowest RMSE on the validation set, and then use them to predict on the test set.

The training datasets of the SVR and pHMM baselines are consistent with that of TreNet. Likewise, the CNN and LSTM baselines are respectively fed the set of local data and the trend sequence of the same size as TreNet. In addition, since the window size of local data is tunable, we vary the window size of local data, i.e., w, over the range {100, 300, 500, 700, 900}, so as to investigate how the size of local data influences the prediction performance. The results will be presented in Section 5.2. The model's performance on the validation set is evaluated after each epoch of training. Each model is trained for at least 50 epochs. Meanwhile, the training process adopts early stopping if no further improvement in the validation performance shows up after 50 epochs.
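The SVR grid search described above can be sketched as follows, assuming scikit-learn's SVR as the implementation (the paper does not name one); the grids follow the sets given in the text.

```python
from itertools import product
import numpy as np
from sklearn.svm import SVR

def tune_svr(X_tr, y_tr, X_val, y_val, kernel="rbf"):
    # Iterate through all (c, d, gamma) combinations and keep the model
    # with the lowest validation RMSE, as described in the training schema.
    best, best_rmse = None, np.inf
    for C, degree, gamma in product(
            [10.0 ** e for e in range(-5, 6)],   # error penalty c
            [1, 2, 3],                           # kernel degree d
            [10.0 ** e for e in range(-5, 6)]):  # kernel coefficient gamma
        model = SVR(kernel=kernel, C=C, degree=degree, gamma=gamma)
        model.fit(X_tr, y_tr)
        rmse = float(np.sqrt(np.mean((model.predict(X_val) - y_val) ** 2)))
        if rmse < best_rmse:
            best, best_rmse = model, rmse
    return best, best_rmse
```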
"}, {"section_index": "7", "section_name": "5.2 EXPERIMENT RESULTS", "section_text": "Table 1 studies the prediction performances of TreNet and the baselines. For each dataset, the window size of local data is constant for the approaches (i.e., CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet) that take local data as input. Then, the results of each approach are obtained by tuning the corresponding parameters as described in Section 5.1.

Dataset | Model | RMSE @ Duration | RMSE @ Slope
HousePC | CNN | 27.51 | 13.56
HousePC | LSTM | 27.27 | 13.27
HousePC | SVRBF | 31.81 | 12.94
HousePC | SVPOLY | 31.81 | 12.93
HousePC | SVSIG | 31.80 | 12.93
HousePC | pHMM | 34.06 | 26.00
HousePC | Naive | 39.68 | 21.17
HousePC | CLSTM | 25.97 | 13.77
HousePC | TreNet | 25.89 | 12.89
Stock | CNN | 18.87 | 12.78
Stock | LSTM | 11.07 | 8.40
Stock | SVRBF | 11.38 | 7.40
Stock | SVPOLY | 11.40 | 7.42
Stock | SVSIG | 11.49 | 7.41
Stock | pHMM | 36.37 | 8.70
Stock | Naive | 11.36 | 8.58
Stock | CLSTM | 9.26 | 7.31
Stock | TreNet | 8.86 | 6.84
GasSensor | CNN | 53.99 | 11.51
GasSensor | LSTM | 55.77 | 11.22
GasSensor | SVRBF | 62.81 | 10.21
GasSensor | SVPOLY | 70.91 | 10.95
GasSensor | SVSIG | 85.69 | 11.92
GasSensor | pHMM | 111.62 | 13.07
GasSensor | Naive | 53.76 | 10.57
GasSensor | CLSTM | 54.20 | 14.86
GasSensor | TreNet | 52.28 | 9.57

Table 1: RMSE of the prediction of local trend duration and slope on each dataset

In Table 1, we observe that TreNet consistently outperforms the baselines on the duration and slope prediction, achieving up to around 30% lower errors. This verifies that the hybrid architecture of TreNet can improve the performance by utilizing the information captured by both CNN and LSTM. Specifically, the pHMM method performs worse due to the limited representation capability of HMM. On the slope prediction, SVR based approaches can get results comparable to TreNet.

In the following group of experiments, we investigate the effect of local data size (i.e., w) on the prediction. In particular, we tune the value of the local data size for the approaches whose input features contain local data and observe the prediction errors. Such approaches include CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet. LSTM only consumes the trend sequence and thus is not included. Due to the page limit, we report the results on the HousePC dataset in Table 2 and Table 3. The results on the Stock and GasSensor datasets can be found in Section 7.

Baseline Naive has no original time series data as input. CLSTM works on the whole time series and has no local data. Thus they are excluded from this set of experiments.

In Table 2, we observe that compared to the baselines, TreNet has the lowest errors on the duration prediction across different window sizes. pHMM requires sufficient data points to model the relation of segments and fails to work at window size 100. As the window size increases and more local data points are fed to the training process, the prediction errors of CNN and TreNet decrease or nearly stabilize. This could be because only a certain amount of local data has predictive power. The filtering and pooling mechanism enables CNN to focus on the local data having strong predictive power, and thus giving more local data only gives rise to marginal improvements. A similar phenomenon is observed on the slope prediction, as shown in Table 3. For more results and discussion, please refer to Section 7.

Window Size | CNN | SVRBF | SVPOLY | SVSIG | pHMM | TreNet
100 | 29.37 | 31.48 | 31.96 | 31.88 | - | 25.93
300 | 27.33 | 31.17 | 31.61 | 31.66 | 30.03 | 25.94
500 | 27.51 | 31.81 | 31.81 | 31.80 | 34.06 | 25.89
700 | 27.41 | 31.10 | 31.09 | 31.11 | 27.37 | 25.72
900 | 27.42 | 31.28 | 31.27 | 31.27 | 28.45 | 25.62

Table 2: RMSE of the duration predictions w.r.t. different sizes of local data in HousePC dataset

Window Size | CNN | SVRBF | SVPOLY | SVSIG | pHMM | TreNet
100 | 13.68 | 12.93 | 12.9352 | 12.9346 | - | 13.14
300 | 13.60 | 12.93 | 12.9346 | 12.9345 | 27.75 | 13.15
500 | 13.56 | 12.94 | 12.9342 | 12.9346 | 26.00 | 12.89
700 | 13.52 | 12.93 | 12.9345 | 12.9345 | 35.32 | 12.86
900 | 13.60 | 12.94 | 12.9350 | 12.9346 | 37.60 | 12.96

Table 3: RMSE of the slope predictions w.r.t. different sizes of local data in HousePC dataset
"}, {"section_index": "8", "section_name": "6 CONCLUSION", "section_text": "In this paper we propose TreNet, a novel hybrid neural network to learn and predict the local trend behaviour of time series. The experimental results demonstrate that such a hybrid framework can indeed utilize the complementary information extracted by CNN and LSTM to enhance the prediction performance. Moreover, such an architecture is generic and extendible, in that additional exogenous time series can be fed to TreNet, so as to boost the performance and investigate the effect of different data sources on the trend evolving.

"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "George S Atsalakis and Kimon P Valavanis. Forecasting stock market short-term trends using a neuro-fuzzy based methodology. Expert Systems with Applications, 36(7):10696-10707, 2009.

Pouya Bashivan, Irina Rish, Mohammed Yeasin, and Noel Codella. Learning representations from eeg with deep recurrent-convolutional neural networks. arXiv preprint arXiv:1511.06448, 2015.

Iyad Batal, Dmitriy Fradkin, James Harrison, Fabian Moerchen, and Milos Hauskrecht. Mining recent temporal patterns for event detection in multivariate time series data. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2012.

Pei-Chann Chang et al.
A novel model by evolving partially connected neural network for stock price trend forecasting. Expert Systems with Applications, 39(1):611-620, 2012b.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625-2634, 2015.

Alex Graves and Jurgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602-610, 2005.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Zachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzell. Learning to diagnose with lstm recurrent neural networks. arXiv preprint arXiv:1511.03677, 2015.

Jiajun Liu, Kun Zhao, Brano Kusy, Ji-rong Wen, and Raja Jurdak. Temporal embedding in convolutional neural networks for robust learning of abstract snippets. arXiv preprint arXiv:1502.05113, 2015.

Qi Lyu and Jun Zhu. Revisit long short-term memory: An optimization perspective. In Advances in Neural Information Processing Systems Workshop on Deep Learning and Representation Learning, 2014.

Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725-1732, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Zhen Li and Yizhou Yu. Protein secondary structure prediction using cascaded convolutional and recurrent neural networks. arXiv preprint arXiv:1604.07176, 2016.

Pankaj Malhotra, Lovekesh Vig, Gautam Shroff, and Puneet Agarwal. Long short term memory networks for anomaly detection in time series. In European Symposium on Artificial Neural Networks, volume 23, 2015.

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632, 2014.

Yasuko Matsubara, Yasushi Sakurai, and Christos Faloutsos. Autoplait: Automatic mining of co-evolving time sequences. In Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, pp. 193-204. ACM, 2014.

Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 1139-1147, 2013.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. Cnn-rnn: A unified framework for multi-label image classification. arXiv preprint arXiv:1604.04573, 2016a.

Linlin Wang, Zhu Cao, Yu Xia, and Gerard de Melo. Morphological segmentation with window lstm neural networks. In Thirtieth AAAI Conference on Artificial Intelligence, 2016b.

Hai-xiang Zhao and Frederic Magoules. A review on the prediction of building energy consumption. Renewable and Sustainable Energy Reviews, 16(6):3586-3592, 2012.

"}, {"section_index": "10", "section_name": "7.1 DATA PRE-PROCESSING", "section_text": "In this part, we describe the data pre-processing, which extracts the local trend sequence from raw time series data for the subsequent neural network training and testing.

We convert the raw time series data into a piecewise linear representation, namely consecutive segments (Keogh et al., 2001; Wang et al., 2011). Each segment corresponds to a local trend and is fitted by a linear function of the time series value w.r.t. time, e.g., x_t = β_1 t + β_0 + ε over the time range [t_1, t_2) of the segment. Then, the slope and duration are derived from the coefficient β_1 and (t_1, t_2).

Figure 3: Illustration of local trend extraction via time series segmentation. (Best viewed in colour)

Technically, we adopt the bottom-up approach in (Keogh et al., 2001), since it can achieve lower approximation errors compared with top-down and sliding window methods. The process is illustrated in Figure 3. Initially, we approximate the time series X with T/2 line segments (T is the length of the time series). Then, we iteratively merge the neighbouring segments to build longer ones. In each iteration, the neighbouring segments with the minimal approximation error are merged into a new one. The merging process repeats until every possible merge gives rise to a segment with an error above a specified threshold. We use the relative mean squared error as the error metric and specify the threshold as 0.05.
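For illustration, the bottom-up merging procedure can be sketched in a few lines of NumPy. This is our own simplified rendition under the stated relative-mean-squared-error criterion; the initial segment length and tie-breaking details are assumptions.

```python
import numpy as np

def fit_error(x):
    # Relative mean squared error of a least-squares line through x.
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    resid = x - (slope * t + intercept)
    return np.mean(resid ** 2) / (np.mean(x ** 2) + 1e-12)

def bottom_up_segment(x, max_error=0.05, init_len=2):
    # Start from many tiny segments and greedily merge the neighbouring
    # pair whose merged fit error is smallest, while it stays below the
    # threshold. Returns (start, end) index pairs of the final segments.
    bounds = list(range(0, len(x) - 1, init_len)) + [len(x)]
    while len(bounds) > 2:
        errs = [fit_error(x[bounds[i]:bounds[i + 2]])
                for i in range(len(bounds) - 2)]
        i = int(np.argmin(errs))
        if errs[i] > max_error:
            break
        del bounds[i + 1]  # merge segments i and i+1
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
```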
In this group of experiments, we visualize the trend prediction using a sample testing data instance from each dataset in Figure 4. We can observe that in HousePC, TreNet successfully predicts the changed trend, even though there are successive upward trends before. In the Stock and GasSensor datasets, the succeeding upward and downward trends are correctly predicted as well.

Figure 4: Visualization of the trend prediction by TreNet in the HousePC, Stock and GasSensor datasets. The blue line in each figure represents the historical trend sequence. The yellow line represents the predicted local trend.

Then, we provide the RMSE w.r.t. the varying window size on the Stock and GasSensor datasets in Table 4, Table 5, Table 6 and Table 7.

From the results, we observe that TreNet outperforms the baselines on almost all window sizes. Meanwhile, the prediction errors often present a decreasing and then stable pattern as the window size varies.

Window size of local data: The observations in the above experiments w.r.t. the varying window size provide inspiration for choosing the window size of local data. Given the training dataset, we can find the maximum duration of local trends and take it as the local data size. This is because doing so can ensure that the range of local data in each training instance covers the most recent local trend, whose raw data is believed to have strong predictive power for the subsequent trend. Additionally, we observe that setting the window size of local data for CNN and TreNet in this way can achieve comparable prediction errors compared to the cases with larger window sizes.

Window Size | CNN | SVRBF | SVPOLY | SVSIG | pHMM | TreNet
100 | 18.87 | 11.38 | 11.40 | 11.49 | - | 8.86
300 | 18.17 | 11.41 | 11.44 | 11.42 | 39.84 | 8.85
500 | 18.06 | 11.39 | 11.44 | 11.36 | 32.10 | 8.51
700 | 18.10 | 11.45 | 11.59 | 11.58 | 36.37 | 8.58
900 | 18.07 | 11.32 | 11.47 | 11.59 | 38.36 | 8.78

Table 4: RMSE of the duration predictions on different sizes of local data in the Stock dataset

Window Size | CNN | SVRBF | SVPOLY | SVSIG | pHMM | TreNet
100 | 12.78 | 7.40 | 7.42 | 7.41 | - | 6.84
300 | 12.24 | 7.42 | 7.51 | 7.38 | 6.67 | 6.53
500 | 12.13 | 7.47 | 7.41 | 7.42 | 7.59 | 6.58
700 | 12.24 | 7.53 | 7.58 | 7.51 | 9.74 | 6.75
900 | 12.25 | 7.61 | 7.45 | 7.59 | 14.00 | 6.73

Table 5: RMSE of the slope predictions on different sizes of local data in the Stock dataset

Window Size | CNN | SVRBF | SVPOLY | SVSIG | pHMM | TreNet
100 | 54.23 | 57.77 | 65.99 | 99.78 | - | 53.91
300 | 53.99 | 62.81 | 70.91 | 85.69 | - | 52.28
500 | 53.82 | 61.86 | 64.33 | 91.51 | 111.62 | 51.77
700 | 53.14 | 61.20 | 63.89 | 78.20 | 175.36 | 51.15
900 | 53.19 | 61.45 | 63.83 | 68.09 | 255.73 | 51.25

Table 6: RMSE of the duration predictions on different sizes of local data in the GasSensor dataset

Window Size | CNN | SVRBF | SVPOLY | SVSIG | pHMM | TreNet
100 | 11.98 | 11.16 | 11.19 | 12.48 | - | 10.30
300 | 11.51 | 10.21 | 10.95 | 11.92 | - | 9.57
500 | 11.75 | 10.08 | 10.65 | 11.64 | 13.07 | 9.60
700 | 11.59 | 9.54 | 10.44 | 11.72 | 12.29 | 9.55
900 | 12.10 | 9.61 | 10.37 | 11.54 | 12.37 | 9.46

Table 7: RMSE of the slope predictions on different sizes of local data in the GasSensor dataset
motivated by the observation that if we decompose the trend forecasting problem into classification and regression, respectively for the slope and the duration, we can utilize the correlation between slope and duration to boost the prediction performance. In addition, there could be alternative frameworks for combining the outputs of CNN and LSTM, and our work opens the door for applying hybrid neural networks to trend analysis in time series."}]
H1MjAnqxg
[{"section_index": "0", "section_name": "INTELLIGIBLE LANGUAGE MODELING WITH INPUT SWITCHED AFFINE NETWORKS", "section_text": "jakob.foerster@cs.ox.ac.uk, jan.chorowski@cs.uni.wroc.p. {gilmer, jaschasd, sussillo}@google.com\nThe computational mechanisms by which nonlinear recurrent neural networks. (RNNs) achieve their goals remains an open question. There exist many problem. domains where intelligibility of the network model is crucial for deployment. Here. we introduce a recurrent architecture composed of input-switched affine transforma. tions, in other words an RNN without any nonlinearity and with one set of weights. per input. We show that this architecture achieves near identical performance to. traditional architectures on language modeling of Wikipedia text, for the same. number of model parameters. It can obtain this performance with the potential. for computational speedup compared to existing methods, by precomputing the. composed affine transformations corresponding to longer input sequences. As. our architecture is affine, we are able to understand the mechanisms by which it. functions using linear methods. For example, we show how the network linearly. combines contributions from the past to make predictions at the current time step. We show how representations for words can be combined in order to understand. how context is transferred across word boundaries. Finally, we demonstrate how the system can be executed and analyzed in arbitrary bases to aid understanding.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Neural networks and the general field of deep learning have made remarkable progress over the las1. few years in fields such as object recognition (Krizhevsky et al.]|2012), language translation (Sutskever. et al.]2014), and speech recognition (Graves et al.[2013). For all of the success of the deep learning. approach however, there are certain application domains in which intelligibility of the system is. an essential design requirement. One commonly used example is the necessity to understand the decisions that a self-driving vehicle makes when avoiding various obstacles in its path. Another. example is the application of neural network methodologies to scientific discovery (Mante et al. 2013). Even where intelligibility is not an overt design requirement, it is fair to say that most users of. neural networks would like to better understand the models they deploy..\nThere are at least two approaches to creating intelligible network models. One approach is to build networks as normal, and then apply analysis techniques after training. Often this approach yields systems that perform extremely well, and whose intelligibility is limited. A second approach is to build a neural network where intelligibility is an explicit design constraint. In this case, the typical result is a system that can be understood reasonably well, but may underperform. In this work we follow this second approach and build intelligibility into our network model, yet without sacrificing performance for the task we studied.\nDesigning intelligibility into neural networks for all application domains is a worthy, but daunting goal Here we contribute to that larger goal by focusing on a commonly studied task, that of character based\n* This work was performed as an intern at Google Brain. tWork done as a member of the Google Brain Residency program (g . 
co/brainresidency Work performed when author was a visiting faculty at Google Brain."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Work by the authors of (Karpathy et al.][2015) attempted to use character-based language modeling to begin to understand how the LSTM (Hochreiter & Schmidhuber 1997) functions. In it, they employ n-gram word models to highlight what the LSTM has - and has not - learned about the text corpus. They were able to break down LSTM language model errors into classes, such as e.g., \"rare word\" errors. The authors of (Greff et al.[2015) engaged in a large study to understand the relative importance of the various components of an LSTM. The authors of (Collins et al.|2016) performed an enormous hyperparameter study to disentangle the effects of capacity and trainability in a number of RNN architectures.\nAttempts to understand networks in more general contexts include the use of linearization anc nonlinear dynamical systems theory to understand RNNs in (Sussillo & Barak]2013). In feed forward networks the use of linear probes has been suggested by (Alain & Bengio] 2016), and there exist a host of back-propagation techniques used to infer the most important input to drive various components of the feed-forward network, e.g. (Le et al.2012).\nThe ISAN uses an input-switched affine model. The highly related linear time-varying systems. are standard material in undergraduate electrical engineering text books. Probabilistic versions oi. switching linear models with discrete latent variables have a history in the context of probabilistic graphical models. A recent example is the switched linear dynamical system in (Linderman et al.. 2016). Focusing on language modeling, (Belanger & Kakade] 2015) defined a probabilistic linear. dynamical system as a generative language model for creating context-dependent token embeddings. and then used steady-state Kalman filtering for inference over token sequences. They used singula. value decomposition and discovered that the right and left singular vectors were semantically anc. syntactically related. One difference between the ISAN and the LDS is that the weight matrices oj. the ISAN are input token dependent (while the biases of both models are input dependent). Finally. multiplicative neural networks (MRNNs) were proposed precisely for character based language. modeling in (Sutskever et al.]2011] [Martens & Sutskever]2011). The MRNN architecture is similai to our own, in that the dynamics matrix switches as a function of the input character. However, the MRNN relied on a tanh nonlinearity, while our model is explicitly linear. It is this property of ou.. model which makes it both amenable to analysis, and computationally efficient..\nThe Observable Operator Model (OOM) (Jaeger2000) is similar to the ISAN in that the OOM update a latent state using a separate transition matrix for each input symbol and performs probabilisti sequence modeling. Unlike the ISAN, the OOM requires that a linear projection of the hidden stat corresponds to a normalized sequence probability. This imposes strong constraints on both the mode parameters and the model dynamics, and restricts the choice of training algorithms. In contrast, th ISAN applies an affine readout to the hidden state to obtain logits, which are then pushed throug a SoftMax to obtain probabilities. Therefore no constraints need to be imposed on the ISAN parameters and training is easy using backprop. 
Lastly, the ISAN is formulated as an affine, rathe than linear model. While this doesn't change the class of processes that can be modeled, it enhance the stability of training and greatly enhances interpretability. We elaborate upon these ideas in Sectio 6.1\nlanguage modeling. We develop and analyze a model trained on a one-step-ahead prediction task of the Text8 dataset, which is 10 million characters of Wikipedia text (Mahoneyl2011). The model we use is a switched affine system, where the input determines the switching behavior by selecting a transition matrix and bias as a function of that input, and there is no nonlinearity. Surprisingly we find that this simple architecture performs as well as a vanilla RNN, Gated Recurrent Unit (GRU) (Cho et al.]2014), IRNN (Le et al.] 2015), or Long Short Term Memory (LSTM) (Hochreiter & Schmidhuber! |1997) in this task, despite being a simpler and potentially far more computationally efficient architecture.\nIn what follows, we discuss related work, define our Input Switched Affine Network (ISAN) model demonstrate its performance on the one-step-ahead prediction task, and then analyze the model in a multitude of ways, most of which would be currently difficult or impossible to accomplish with modern nonlinear recurrent architectures."}, {"section_index": "3", "section_name": "3.1 MODEL DEFINITION", "section_text": "The network also learns an initial hidden state ho. We emphasize the intentional absence of any nonlinear activation function.\nThe RNNs are trained on the Text8 Wikipedia dataset, for one-step-ahead character prediction. The Text8 dataset consists only of the 27 characters 'a'-'z' and '_' (space). Given a character sequence of X1, ..., Xt, the RNNs are trained to minimize the cross-entropy between the true next character, and the output prediction. We map from the hidden state, ht, into a logit space via an affine map. The probabilities are computed as\np (Xt+1) = softmax (1t) lz = Wro ht + bro,\nwhere Wro and bro are the readout weights and biases, and 1, is the logit vector. In line with (Collins et al.[2016) we split the training data into 80%, 10%, and 10% for train, test, and evaluation set respectively. The network was trained with the same hyperparameter tuning infrastructure as in (Collins et al.|2016). Analysis in this paper is carried out on the best-performing ISAN model, which has 1, 271, 619 parameters, corresponding to 216 hidden units, and 27 dynamics matrices Wx and biases bx."}, {"section_index": "4", "section_name": "4.1 ISAN PERFORMANCE ON THE TEXT8 TASK", "section_text": "The results on Text8 are shown in Figure1a. For the largest parameter count, the ISAN matches almost exactly the performance of all other nonlinear models with the same number of maximum parameters: RNN, IRNN, GRU, LSTM. However, we note that for small numbers of parameters the ISAN performs considerably worse than other architectures. All analyses use ISAN trained with 1.28e6 maximum parameters (1.58 bpc cross entropy). Samples of generated text from this model are relatively coherent. We show two examples, after priming with \"annual reve\", at inverse temperature of 1.5, and 2.0, respectively:\nAs a preliminary, comparative analysis, we performed PCA on the state sequence over a large set of sequences for the vanilla RNN, GRU of varying sizes, and ISAN. This is shown in Figure[1b. 
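Before examining the spectra, a minimal NumPy sketch of the model defined in Section 3.1 may help make the update and readout concrete. Only the two equations (the affine state update and the affine readout) come from the text; the Gaussian initialisation and the toy input are our own assumptions.

import numpy as np

rng = np.random.default_rng(0)
H, V = 216, 27  # hidden units and vocabulary size ('a'-'z' plus space)

W = rng.normal(scale=1.0 / np.sqrt(H), size=(V, H, H))  # one W_x per input x
b = rng.normal(scale=0.1, size=(V, H))                  # one b_x per input x
W_ro = rng.normal(scale=1.0 / np.sqrt(H), size=(V, H))  # readout weights
b_ro = np.zeros(V)                                      # readout bias
h = np.zeros(H)                                         # initial state h_0

def isan_step(h_prev, x):
    # h_t = W_{x_t} h_{t-1} + b_{x_t}: no nonlinearity anywhere in the dynamics.
    return W[x] @ h_prev + b[x]

def next_char_distribution(h_t):
    # l_t = W_ro h_t + b_ro, then p(x_{t+1}) = softmax(l_t).
    l = W_ro @ h_t + b_ro
    e = np.exp(l - l.max())
    return e / e.sum()

for x in [0, 13, 13, 20, 0, 11]:  # symbol ids for 'a n n u a l'
    h = isan_step(h, x)
print(next_char_distribution(h))  # 27-way next-character distribution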
The eigenvalue spectra, in log of variance explained, were significantly flatter for the ISAN than for the other architectures.

We also compare the ISAN performance to a fully linear RNN without input-switched dynamics. This achieves a cross-entropy of 3.1 bits / char, independent of network size. This perplexity is only slightly better than that of a Naive Bayes model on the task, at 3.3 bits / char. The output probability of the fully linear network is a product of contributions from each previous character, as in Naive Bayes. Those factorial contributions are learned however, giving the ISAN a slight advantage. We also run a comparison to a fully linear network with a non-linear readout. This achieves 2.15 bits / char, independent of network size. Both of these comparisons illustrate the importance of the input-switched dynamics for achieving good results in the absence of non-linear hidden state dynamics.

[Figure 1a data, cross-entropy in bits per character by maximum parameter count:
Model  8e4   3.2e5  1.28e6
RNN    1.88  1.69   1.59
IRNN   1.89  1.71   1.58
GRU    1.83  1.66   1.59
LSTM   1.85  1.68   1.59
ISAN   1.92  1.71   1.58
Figure 1b legend: RNN-116, RNN-256, GRU-145, GRU-308, ISAN-216; x-axis: PCA dim.]
Figure 1: The ISAN has near identical performance to other RNN architectures, and makes fuller use of its latent space. a) Performance of RNN architectures on Text8 one-step-ahead prediction, measured as cross-entropy loss on a held-out test set, in bits per character. The loss is shown as a function of the maximum number of parameters a model is allowed. The values reported for all other architectures are taken from (Collins et al., 2016). b) The explained variance ratio of the first 210 most significant PCA dimensions of the hidden states across several architectures. The legend provides the number of latent units for each architecture. We find the ISAN model uses the hidden space more uniformly than the vanilla RNN or GRU.

Lastly we also test to what extent the ISAN can deal with large dictionaries by running it on a byte-pair encoding of the Text8 task, where the input dictionary consists of the 27² different possible character combinations. We find that in this setup the LSTM consistently outperforms the ISAN for the same number of parameters. At 1.3m parameters the LSTM achieves a cross entropy of 3.4 bits / char-pair, while the ISAN achieves 3.55. One explanation for this finding is that the matrices in the ISAN are a factor of 27 smaller than the matrices of the LSTMs. For very large numbers of parameters the performance of any architecture saturates in the number of parameters, at which point the ISAN can 'catch up' with more parameter-efficient architectures like LSTMs.

Taking advantage of the linearity of the hidden state dynamics for any sequence of inputs, we can decompose the current latent state h_t into contributions originating from different timepoints s in the history of the input:

h_t = Σ_{s=0}^{t} ( ∏_{s'=s+1}^{t} W_{x_{s'}} ) b_{x_s},

where the empty product when s + 1 > t is 1 by convention, and b_{x_0} = h_0 is the learned initial hidden state.
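The decomposition can be checked numerically: propagating each bias through all later transition matrices and summing the results reproduces the hidden state computed by the recurrence, up to floating point error. This standalone sketch uses small random matrices purely for the check; all names are our own.

import numpy as np

rng = np.random.default_rng(1)
H, V = 8, 5  # tiny sizes, purely for the numerical check
W = rng.normal(scale=0.3, size=(V, H, H))
b = rng.normal(size=(V, H))
h0 = rng.normal(size=H)

def state_contributions(xs):
    # contribs[s] ends up as (prod_{s'=s+1}^{t} W_{x_{s'}}) b_{x_s},
    # with the s = 0 slot holding the initial-state term (b_{x_0} = h_0).
    contribs = [h0.copy()] + [b[x].copy() for x in xs]
    for step, x in enumerate(xs, start=1):
        for s in range(step):  # push every older contribution through W_{x_t}
            contribs[s] = W[x] @ contribs[s]
    return contribs

xs = [0, 3, 3, 1, 4]
contribs = state_contributions(xs)

h = h0  # the ordinary recurrence, for comparison
for x in xs:
    h = W[x] @ h + b[x]

assert np.allclose(sum(contribs), h)  # h_t equals the sum of contributions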
This is useful because we can analyze which factors were important in the past for determining the current character prediction.

Using this decomposition and the linearity of matrix multiplication, we can also write the unnormalized logit vector l_t as a sum of terms linear in the biases,

l_t = b_ro + Σ_{s=0}^{t} κ_s^t,    κ_s^t = W_ro ( ∏_{s'=s+1}^{t} W_{x_{s'}} ) b_{x_s},

where κ_s^t is the contribution from timestep s to the logits at timestep t, and κ_t^t = W_ro b_{x_t}. For notational convenience we will sometimes replace the subscript s with the corresponding input character x_s at step s when referring to κ_s^t, e.g. κ_'q'^t to refer to the contribution from the character 'q' in a string.

[Figure 2 panels: input characters and step indices along the horizontal axis; colour scale from 0.0 to 1.0.]
Figure 2: Using the linearity of the hidden state dynamics, predictions at step t can be broken out into contributions, κ_s^t, from previous steps. Accordingly, each row of the top pane corresponds to the propagated contribution (κ) of the input character at time s to the prediction at time t (summed to create the logit at time t). The penultimate row contains the output bias vector replicated at every time step. The last row contains the logits of the predicted next character, which is the sum of all rows above. The bottom pane contains the corresponding softmax probabilities at each time t for all characters (time is separated by gray lines). Labeled is the character with the maximum predicted probability. The timestep boxed in red is examined in more detail in Figure 3.

Similarly, when discussing the summed contributions from a word or substring, we will sometimes write, e.g., κ_'the'^t, to refer to the total logit contribution from the word 'the'.

While in standard RNNs the nonlinearity causes interdependence of the bias terms across time steps, in the ISAN the bias terms can be interpreted as independent linear contributions to the state that are propagated and transformed through time. We emphasize that κ_s^t includes the multiplicative contributions from the W_{x_{s'}} for s < s' ≤ t. It is however independent of prior inputs x_{s'} for s' < s. This is the main difference between the analysis we can carry out with the ISAN compared to a non-linear RNN. In general the contribution of a specific character sequence will depend on the hidden state at the start of the sequence. Due to the linearity of the dynamics, this dependency does not exist in the ISAN.

In Figure 2 we show an example of how this decomposition allows us to understand why a particular prediction is made at a given point in time, and how previous characters influence the decoding. For example, the sequence '_annual_revenue_' is processed by the ISAN: starting with an all-zero hidden state, we use equation (6) to accumulate a sequence of contributions κ_'_'^t, κ_'a'^t, κ_'n'^t, κ_'n'^t, .... These values can then be used to understand the prediction of the network at some time t, by simple addition across the s index, which is shown in Figure 2.

In Figure 3 we provide a detailed view of how past characters contribute to the logits predicting the next character. There are two competing options for the next letter in the word stem 'reve': either 'revenue' or 'reverse'. We show that without the contributions from '_annual' the most likely decoding of the character after the second 'e' is 'r' (to form 'reverse'), while the contributions from '_annual' tip the balance in favor of 'n', decoding to 'revenue'. In a standard RNN a similar analysis could be carried out by comparing the prediction given an artificially limited history.
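The kind of analysis behind Figures 2 and 3 — attributing a logit to individual past characters, or to a whole word by summing its characters' terms — can be sketched as follows. The toy random model means the printed margins are meaningless; the mechanics of the attribution are the point, and all variable names are assumptions.

import numpy as np

rng = np.random.default_rng(2)
H, V = 8, 27
W = rng.normal(scale=0.3, size=(V, H, H))
b = rng.normal(size=(V, H))
W_ro = rng.normal(size=(V, H))
b_ro = np.zeros(V)
h0 = np.zeros(H)

def logit_contributions(xs):
    # kappa_s^t = W_ro (prod_{s'=s+1}^{t} W_{x_{s'}}) b_{x_s}, one per timestep.
    contribs = [h0.copy()] + [b[x].copy() for x in xs]
    for step, x in enumerate(xs, start=1):
        for s in range(step):
            contribs[s] = W[x] @ contribs[s]
    return [W_ro @ c for c in contribs]

ids = [26 if ch == '_' else ord(ch) - ord('a') for ch in '_annual_reve']
kappas = logit_contributions(ids)
logits = b_ro + sum(kappas)

# Summed contribution of the whole word '_annual' (kappas[0] is the h_0 term).
kappa_annual = sum(kappas[1:8])
n, r = ord('n') - ord('a'), ord('r') - ord('a')
print('n-vs-r margin with _annual   :', logits[n] - logits[r])
without = logits - kappa_annual
print('n-vs-r margin without _annual:', without[n] - without[r])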
Using the decomposition of current step predictions into κ_s^t, we can also investigate how quickly the contributions of κ_s^t decay as a function of t − s. In Figure 4a we can see that this contribution decays on two different exponential timescales. We hypothesize that the first timescale corresponds to the decay within a word, while the second corresponds to the decay of information across words and sentences. This effect is also visible in Figure 5. We note that it would be difficult to carry out this analysis in a non-linear RNN.

We can also show the relevance of the κ_s^t contributions to the decoding of characters at different positions in the word. For example, we observe that κ_'_'^t makes important contributions to the prediction of the next character at time t. We show that using only the κ_'_'^t, the model can achieve much of its predictive accuracy at the onset of words (Figure 4b).

[Figure 3 panels a)-c): bar plots of contributions to the 'n' (orange) and 'r' (red) logits; x-axes: output character, the characters of '_annual_reve', and the strings '_annual', '_reve', '_annual_reve'.]
Figure 3: Detailed view of the prediction stack for the final 'n' in '_annual_revenue'. In a) all κ_s^t are shown; in b) only the contributions to the 'n' logit and 'r' logit are shown, in orange and red respectively, from each earlier character in the string. This corresponds to a zoomed-in view of the columns highlighted in orange and red in a). In c) we show how the sum of the contributions from the string '_annual', κ_'_annual'^t, pushes the prediction at '_annual_reve' from 'r' to 'n'. Without this contribution the model decodes based only on κ_'_reve'^t, leading to a MAP prediction of 'reverse'. With the contribution from κ_'_annual'^t the prediction becomes 'n'; the decomposition into 'n' and 'r' logits is purely linear.

[Figure 4 panels a)-c); legends include 'all chars', 'no space', 'space only', 'all'; x-axes: number of steps, position in word, length of history; y-axes: norm of κ and cross-entropy.]
Figure 4: The time decay of the contributions from each character to the prediction. a) Average norm of κ_s^t across the training text, E[||κ_s^t||], plotted as a function of t − s, and averaged across all source characters. The norm appears to decay exponentially at two rates, a faster rate for the first ten or so characters, and then a slower rate for more long-term contributions. b) The median cross-entropy as a function of the position in the word under three different circumstances: the red line uses all of the κ_s^t (baseline), the green line sets all κ_s^t apart from κ_'_'^t to zero, while the blue line only sets κ_'_'^t to zero. The results from panel b demonstrate the disproportionately large importance of '_' in decoding, especially at the onset of a word. c) Shown is the cross-entropy as a function of history when artificially limiting the number of characters available for prediction. This corresponds to only
This corresponds to onl considering the most recent n of the , where n is the length of the history..\nThe ISAN provides a natural means of moving from character level representation to word level Using the linearity of the hidden state dynamics we can aggregate all of the kt belonging to a given\nFurthermore we can link back from the norm-decay to the importance of past characters for the decoding quality. By artificially limiting the number of past k available for prediction, Figure|4c, we show that the prediction quality improves rapidly when extending the history from O to 10 characters and then saturates. This rapid improvement aligns with the range of faster decay in Figure|4a.\nInput: t h e a n n u aI revenuewashigherthanexpected Wor 1.0 I+ 0.5 0.0 t h e an n u aI r ev e n u e W a s hi gher_t han e x p e c t e d\nFigure 5: The ISAN architecture can be used to precisely characterize the relationship between word. and characters. The top pane shows how exploiting the linearity of the network's operation we car combine the t, .tn in a word to a single contribution, word, for each word. Shown is the norm of Kt word, a measure of the magnitude of the effect of the previous word on the selection of the current character (red corresponds to a norm of 10, blue to 0). The bottom pane shows the probabilitie. assigned by the network to the next sequence character. Lighter lines show predictions conditionec on a decreasing number of preceding words. For example, when predicting the characters of than there is a large contribution from both was and higher, as shown in the top pane. The effect on the log probabilities can be seen in the bottom pane as the model becomes less confident when excluding representation clearly shows that the system leverages contextual information across multiple words\nFigure 6:We can use the kt.. word as an embedding space. Although the model was only trained on. character level representations, the kt, word show clear semantic clustering under t-SNE (Maaten & Hinton 2008). Shown is an overview of the 4000 most common words in a). In b) a zoomed in. version is shown, a region that is primarily filled with numbers. In c) the zoom captures a variety of different professions.\nIn Figure6we show that these kt word-level semantic information. Shown is a t-SNE embedding of the kt,ord for the most common 2Or 4000 words in the data-set. with examples of the kind of clusters that arise\na) b) c) 10 7.6 na every seetopmer -7.8 7.4 zero each queergsdpe poet 7.2 sentberposer. 5 one. forty saint orofesmeean pngwriter four -8.0 five 7.0 seven presidetreptagian ninight Shatraate director wan 6.8 cardinal. putyeansician 0 secretary deannatician 8.2 6.6 honor 6.4 archBno -5 qpianeer -8.4 thirty 6.2 carnegibishop politician Wenty count 6.0 attila tegleven saints -10 -8.6 5.8 astronomers -10 -5 0 5 10 -5.5 -5.0 -4.5 -4.0 2.5 3.0 3.5 4.0 4.5 5.0\nword and visualize them as a single contribution to the prediction of the letters in the next word. This allows us to understand how each preceding word impacts the decoding for the letters of later words In Figure|5|we show that the words 'higher' and 'than' make large contributions to the prediction of the characters 'h' and 'n' in 'tevenue', as measured by the norm of the t the and nt_. 
annual' :\na) b) c) 11 12 o9i ox 10 2.2 oq hoo 10 ok oXw 9 ou ok oXw 2.0 BW 8 8 l|*qoa| Or l|*qua|| Ymeh 7 8tbe 1*q| 1.8 apm 6 6 OJ OV on 5 of 4 1.6 4 og to .e 2 3 1.4 2 0 -11-10-9-8-7-6-5-4-3-2 -11-10-9-8-7-6-5-4-3-2 -11-10-9-8-7-6-5-4-3-2 Bits of entropy Bits of entropy Bits of entropy\nFigure 7: By transforming the ISAN dynamics into a new basis, we can better understand the actior of the input-dependent biases. a) We observe a strong correlation between the norms of the inpu dependent biases, bx, and the log-probability of the unigram x in the training data. We can begin to. understand this correlation structure using a basis transform into the 'readout basis'. Breaking out the norm into its components in Pro and Pro in b) and c) respectively, shows that the correlation is due. to the component orthogonal to Wro. This implies a connection between information or 'surprise. and distance in the 'computational' subspace of state space.."}, {"section_index": "5", "section_name": "4.4 CHANGE OF BASIS", "section_text": "In particular we can construct a 'readout basis' that explicitly divides the latent space into a subspace representation explicitly divides the hidden state dynamics into a 27-dimensional 'readout' subspace that is accessed by the readout matrix to make predictions, and a 'computational' subspace comprising the remaining 216 - 27 dimensions that are orthogonal to the readout matrix.\nWe apply this change of basis to analyze an intriguing observation about the hidden offsets bx: As shown in Figure7] the norm of the bx is strongly correlated to the log-probability of the unigram x in the training data. Re-expressing network parameters using the readout basis' shows that this correlation is not related to reading out the next-step prediction. This is because the norm of the projection of bx into Pro remains strongly correlated with character frequency, while the projections into Pro have norms that show little correlation. This indicates that the information content or surprise of a letter is encoded through the norm of the component of bx in the computational space rather than in the readout space.\nWe compared the computation performed by n-gram language models and those performed by the. ISAN. An n-gram model with back-off weights expresses the conditional probability p (xt|x1...Xt-1). as a sum of smoothed count ratios of n-grams of different lengths, with the contribution of shorter. n-grams down-weighted by backoff weights. On the other hand, the computations performed by the ISAN start with the contribution of bro to the logits, which as shown in Figure9h) corresponds to. the unigram log-probabilities. The logits are then additively updated with contributions from longer. n-grams, represented by kt. This additive contribution to the logits corresponds to a multiplicative. modification of the emission probabilities from histories of different length. For long time lags, the. additive correction to log-probabilities becomes small (Figure|2), which corresponds to multiplication. by a uniform distribution. Despite these differences in how n-gram history is incorporated, we.\nWe are free to perform a change of basis on the hidden state, and then to run the affine ISAN dynamics in that new basis. Note that this change of basis is not possible for other RNN architectures, since the. action of the nonlinearity depends on the choice of basis.\nSimilarly, in Figure 8|we illustrate that the structure in the correlations between the bx is due to their. 
of high correlations between the vowels and consonants respectively, while b. , is uncorrelated to either.\nFigure 8: By transforming ISAN dynamics into a new basis, we can better interpret structure in th input-dependent biases. In a) we show the cosine distance between the input dependent bias vector. split between vowels and consonants ( ' is first). In b) we show the correlation only considerin, the components in the subspace Pro spanned by the rows of the readout matrix Wro. c) shows the correlation of the components in the orthogonal complement Pro. In all plots white corresponds to ( (aligned) and black to 2.\na) b) C 0.35 0.14 1.0 q predicted predicted Ppneeeeed 0.30 Pqdqeqold I 0.12 empirical empirical 0.8 0.25 0.10 Ri 0.6 0.20 0.08 eq leuonuo 0.15 0.06 pPor 0.4 0.10 0.04 0.2 0.05 0.02 0.00 0.00 0.0 e O t e 1 0 t y 0.00.20.40.60.81. Output Character Output Character corr(empirical, unigram)\nFigure 9: The predictions of ISAN for one and two characters well approximate the predictions of unigram and bigram models. In a) we compare softmax(bro) to the empirical unigram distribution P(x). In b) we compare softmax(Wrob + bro) with the empirical distribution P(xt+1[_'). In c) we show the correlation of softmax(Wrobx + bro) with P(xt+1|xt) for all 27 characters (y-axis), and compare this to the correlation between the empirical unigram probabilities P(x) to P(xt+1|xt (x-axis). The plot shows that the readout of the bias vector is a better predictor of the conditional distribution than the unigram probability.\nnevertheless observe an agreement between empirical models estimated on the training set and model predictions for unigrams and bigrams. Figure 9|shows that the bias term bro gives the unigram probabilities of letters, while the addition of the offset terms bx accurately predict the bigram distribution of P (xt+1xt). Shown are both an example, P (x_), and a summary plot for all 27 letters.\nWe further explore the n-gram comparison by artificially limiting the length of the character history that is available to the ISAN for making predictions, as shown in Figure4)\nTo show the interpretability of the ISAN we train a model on the parenthesis counting task. Bringing together ideas from sections4.4 and|6.1|we re-express the transition dynamics in a new basis that fully reveals computations performed by the ISAN.\nWe analyze the simple task of parentheses counting, which was defined in (Collins et al.2016). Briefly, the RNN is required keep track of the nesting level of 3 different types of parentheses\na) b) c) tw-03Q00+0c-xEcQ0-n+>3x>N tq-03Q00+0C-x-EcQo-s+>3x>N t-03Q0o+0C-x-EcQo-n+>3x>N aeioubcdfghjk lmnpqrstvwxyz aeioubcdf ghjk Imnpqrstvwxyz aeioubcdfghjklmnpqrstvwxyz\nindependently. The inputs are the one-hot encoding of the different opening and closing parentheses (e.g. '(', )', '{', '}') as well as a noise character ('a'). The output is the one-hot encoding of the nesting level between (0-5), one set of counts for each parentheses task. One change from (Collins et al.[2016) is that we slightly simplify the problem by exchanging the cross-entropy error with an L2 error and linear readout (this change leads to slightly cleaner figures, but does not qualitatively. change the results).\nwhere O is the orthogonal complement of the subspace spanned by the row vectors of [Wro bro] in an N + 1-dimensional space. We perform the following change of basis of the dynamics matrices,.\nWro'Wx M\nand visualize the results in Fig. 
10] Figure [10 shows that this system created delay lines which. count the nesting level, with fixed point dynamics at the O count (5 count), so that the system stays at. both numbers when the input would otherwise increment (decrement) the count. The matrices alsc implement fixed point dynamics, as implemented via an identity submatrix to preserve the memory. of the parenthesis nesting counts when an unrelated symbol enters the system (e.g. The '{}' count is. preserved by the '(' matrix when a '(' symbol enters the system).."}, {"section_index": "6", "section_name": "6 DISCUSSIOn", "section_text": "In this paper we motivated an input-switched affine recurrent network for the purpose of intelligibility We showed that a switched affine architecture achieves the same performance, for the same number of maximum parameters, on a language modeling task as do more common RNN architectures including GRUs and LSTMs. We performed a series of analyses, demonstrating that the simplicity of the latent dynamics makes the trained RNN far easier to understand and interpret."}, {"section_index": "7", "section_name": "6.1 BENEFITS OF AFFINE TRANSITIONS OVER LINEAR", "section_text": "ISAN uses affine operators to model state transitions assigned to each input symbol. Following. eq. (1) each transition consists of matrix multiplication and bias vector addition. An important. question is whether the biases are needed and how the ISAN would be impacted if linear transition. operators were used instead of affine ones. The answer is two-fold. First, affine dynamics can be. exactly implemented using linear operators in a hidden space expanded by one additional dimension Therefore, the expressivity of ISAN does not depend on choosing a linear or affine formulation. However, we found that the affine parametrization of transitions is much easier to train. We attempted. to train models using only linear transitions, but achieved a loss of only 4.1 bits per character, which\nWe first re-express the transition dynamics in terms of linear, rather than affine operations. Conside.\nw b W' 0 1 9\nh Wh+ b W' 1 1\nThe matrices W and W' are closely connected. Each eigenvalue of W is also an eigenvalue of W'. Moreover, eigenvectors of W become the eigenvectors of W' when expanded with a zero dimension In fact, W' only has one extra eigenvalue of exactly 1 that is necessary to preserve the last dimension of the expanded hidden state..\nTo analyze the the parentheses task we analyze W'. The key to understanding how the network solves the parentheses task is to find a change of bases that clarifies the dynamics necessary to count. 3 sets of independent parentheses nesting levels. In this case we use a matrix composed of the readout. matrix, modified by adding a set of vectors that spans the null space of the readout (including the. additional bias dimension).\nWro bro 0\nTransition matrix for '(' in original basis. Hidden states in the original basis. 0 0 10 10 Hnndunh!p unpa!n 20 20 30 30 40 40 50 50 0 10 20 30 40 50 (aa))[][][]a[[[[[)))a)a{{}{}aa[]a[[[[[)aa))[] a) b) Transition matrix for '(. Hidden state in output-oriented basis. when hidden state is output-oriented.. (first 18 dimensions are aligned with output) 0 0 10 10 un!dunw!p unpp!n 20 20 30 30 40 40 50 50 0 10 20 30 40 50 aa))[][][]a[[[[[))aa{{}{}aa[]a[[[[[)aa))[] c) d)\nFigure 10: A visualization of the dynamics of an ISAN for the parenthesis counting task. In a) the weight matrix for '(' is shown in the original basis. 
In c) it is shown transformed to highlight the delay-line dynamics. The activations of the hidden units are also shown b) in the original basis, and d) rotated to the same basis as in c), to highlight the delay-line dynamics in a more intelligible way The white line delineates the transition matrix elements and hidden state dimensions that directly contribute to the output. All matrices for parentheses types appear similarly, with closing parentheses e.g. ')', changing the direction of the delay line.\ncorresponds to the performance of a unigram character model. Second, affine operators are easier to interpret because they permit easy visualization of contributions of each input token on the final network's prediction, as demonstrated in Section4.2\nSwitched affine networks hold the potential to be massively more computationally and memory efficient for text processing than standard RNNs, as explained in the next two subsections"}, {"section_index": "8", "section_name": "6.2.1 SPARSE PARAMETER ACCESS", "section_text": "As shown in Figure 1a, the performance for fixed parameter count is nearly identical between the. ISAN and other recurrent networks. However, at each time step, only the parameters associated with. a single input are used. For K possible inputs and N parameters, the computational cost per update\nThe memory and computational benefits in Section 6.2.1 are shared by other switched networks. However, ISAN is unique in its ability to precompute affine transformations corresponding to. input strings. This is possible because the composition of affine transformations is also an affine. transformation. This property is used in Section4.3|to evaluate the linear contributions of words. rather than characters. This means that the hidden state update corresponding to an entire inpu sequence can be computed with identical cost to the update for a single character (plus the dictionary. lookup cost for the composed transformation). ISAN can therefore achieve very large speedups or. input processing, at the cost of increased memory use, by accumulating large lookup tables of the. W, and b, corresponding to common input sequences. Of course, practical implementations will. have to incorporate complexities of memory management, batching, etc.."}, {"section_index": "9", "section_name": "6.3 FUTURE WORK", "section_text": "There are some obvious future directions to this work. Currently, we define switching behavior using an input set with finite and manageable cardinality. Studying word-level language models with enormous vocabularies may require some additional logic to scale. Adapting this model to continuous-valued inputs is another important direction. One approach is to use a tensor factorization similar to that employed by the MRNN (Sutskever et al.][2014). Another is to build a language model which switches on bigrams or trigrams, rather than characters or words, targeting an intermediate number of affine transformations.\nTraining very large switched linear models has the potential to be extremely fruitful, due both to. their improved computational efficiency, and our ability to better understand and manipulate their behavior.\nWe would like to thank Jasmine Collins for her help and advice, and Quoc Le, David Ha and Mohammad Norouzi for helpful discussions."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. 
arXi preprint arXiv:1610.01644, 2016.\nDavid Belanger and Sham Kakade. A linear dynamical system model for text. In Proceedings of the 32n International Conference on Machine Learning (ICML-15), pp. 833-842, 2015.\nJasmine Collins, Jascha Sohl-Dickstein, and David Sussillo. Capacity and trainability in recurrent neura networks. ICLR 2017 submission. 2016\nKlaus Greff, Rupesh Kumar Srivastava, Jan Koutnik, Bas R Steunebrink, and Jurgen Schmidhuber. Lstm: A search space odyssey. arXiv preprint arXiv:1503.04069. 2015\nHerbert Jaeger. Observable operator models for discrete stochastic time series. Neural Computation, 12(6) 1371-1398, 2000.\ntep 1s( a factor of K speedup over non-switched architectures. Similarly, the number of\nAndrej Karpathy, Justin Johnson, and Fei-Fei Li. Visualizing and understanding recurrent networks. arXi preprint arXiv:1506.02078, 2015.\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neura networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.\nScott W Linderman, Andrew C Miller, Ryan P Adams, David M Blei, Liam Paninski, and Matthew J Johnson Recurrent switching linear dynamical systems. arXiv preprint arXiv:1610.08466, 2016.\nMatt Mahoney. Large text compression benchmark: About the test data, 2011. URLhttp : / /mattmahoney net/dc/text data [Online; accessed 15-November-2016].\nValerio Mante, David Sussillo, Krishna V Shenoy, and William T Newsome. Context-dependent computation b recurrent dynamics in prefrontal cortex. Nature, 503(7474):78-84, 2013.\nDavid Sussillo and Omri Barak. Opening the black box: low-dimensional dynamics in high-dimensiona recurrent neural networks. Neural computation, 25(3):626-649, 2013.\nLaurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008\nJames Martens and Ilya Sutskever. Learning recurrent neural networks with hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). pp. 1033-1040. 2011.\nlya Sutskever, James Martens, and Geoffrey E Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1017-1024, 2011.\nya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advance. in neural information processing systems, pp. 3104-3112, 2014."}]
S1j4RqYxg
[{"section_index": "0", "section_name": "EFFICIENT CALCULATION OF POLYNOMIAL FEATURES ON SPARSE MATRICES", "section_text": "Nystrom, Andrew\nawnystrom@gmail.com *\nWe provide an algorithm for polynomial feature expansion that both operates on. and produces a compressed sparse row matrix without any densification. For a vector of dimension D, density d, and degree k the algorithm has time complexity O(dk Dk) where k is the polynomial-feature order; this is an improvement by a. factor dk over the standard method."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Polynomial feature expansion has long been used in statistics to approximate nonlinear func tions Gergonne (1974);Smith (1918). The compressed sparse row (CSR) matrix format is a widely used data structure to hold design matrices for statistics and machine learning applications. How. ever, polynomial expansions are typically not performed directly on sparse CsR matrices, nor or any sparse matrix format for that matter, without intermediate densification steps. This densificatior not only adds extra overhead, but wastefully computes combinations of features that have a produc of zero, which are then discarded during conversion into a sparse format.\nWe provide an algorithm that allows CSR matrices to be the input of a polynomial feature expansion without any densification. The algorithm leverages the CSR format to only compute products of features that result in nonzero values. This exploits the sparsity of the data to achieve an improved time complexity of O(dk Dk) on each vector of the matrix where k is the degree of the expansion, D is the dimensionality, and d is the density. The standard algorithm has time complexity O(Dk) Since 0 < d < 1, our algorithm is a significant improvement. While the algorithm we describe uses CSR matrices, it could be modified to operate on other sparse formats."}, {"section_index": "2", "section_name": "2 PRELIMINARIES", "section_text": "Matrices are denoted by uppercase bold letters thus: A. The ithe row of A is written a;. All vectors are written in bold, and a, with no subscript, is a vector.\nA compressed sparse row (CsR) matrix representation of an r-row matrix A consists of three vec tors: c, d, and p and a single number: the number of columns of A. The vectors c and d contain the same number of elements, and hold the column indices and data values, respectively, of all nonzerc elements of A. The vector p has r entries. The values in p index both c and d. The ith entry pi ol p tells where the data describing nonzero columns of a, are within the other two vectors: Cp:pi+1 contain the column indices of those entries; dp,:pi+1 contain the entries themselves. Since only nonzero elements of each row are held, the overall number of columns of A must also be stored since it cannot be derived from the other data.\nScalars, vectors, and matrices are often referenced with the superscript k. This is not to be interpreted. as an exponent, but to indicate that it is the analogous aspect of that which procedes it, but in its polynomial expansion form. For example, c2 is the vector that holds columns for nonzero values in. 
A's quadratic feature expansion CSR representation..\nFor simplicity in the presentation, we work with polynomial expansions of degree 2, but continue to use the exponent k to show how the ideas apply in the general case.\ntThe authors contributed equally important and fundamental aspects of this work.\nHughes, John\njfh@cs.brown.edu"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "We have also developed an algorithm for second and third degree interaction features (combinations without repetition), which can be found in the implementation."}, {"section_index": "4", "section_name": "3 MOTIVATION", "section_text": "In this section, we present a strawman algorithm for computing polynomial feature expansions or. dense matrices. We then modify the algorithm slightly to operate on a CSR matrix, in order tc. expose its infeasibility in that context. We then show how the algorithm would be feasible with ar added component, which we then derive in the following section..\nA natural way to calculate polynomial features for a matrix A is to walk down its rows and, for eacl row, take products of all k-combinations of elements. To determine in which column of A products of elements in A, belong, a simple counter can be set to zero for each row of A and incrementec efter each polynomial feature is generated. This counter gives the column of A; into which each expansion feature belongs.\n1 N = row count of A 2 D = column count of A 3 Ak = empty N matrix 4 for i = 0 to N - 1 5 Cp = 0 6 for j1 = 0 to D - 1 7 for j2 = j1 to D - 1 Alice 8 = AijAij2 9 Cp = Cp +1\nNow consider how this algorithm might be modified to accept a CsR matrix. Instead of walkin. directly down rows of A, we will walk down sections of c and d partitioned by p, and instead o. inserting polynomial features into Ak, we will insert column numbers into ck and data elements into dk.\nWe do provide an algorithm for third degree expansions, and derive the big-O time complexity of the general case.\n1 N = row count of A 2 pk = vector of size IV + 1 3 p = 0 4 nnzk = 0 5 fori = 0 to N - 1 6 istart = Pi 7 istop = Pi+1 8 Cj = Cistart:istop 9 2 10 11 // Build up the elements of pk, ck, and d 12 pk = vector of size NV + 1 ck = vector of size nnzk 13 14 dk = vector of size nnzk 15 n = 0 16 fori = 0to N - 1 17 istart = Pi 18 istop = Pi+1 19 Ci = Cistart:istop 20 di = distart:istop. 21 for c1 = 0 to c; 1 22 for c2 = C1 to |ci- 1 23 dk. = dco : dc1 ch =? 24 25 n = n+1\nalOTA 2 p yector of size N + 1 3 Po K = 0 nnzk = 0 4 5 for i = 0to N -1 6 Istart = Pi 7 istop = Pi+1 8 Ci = Cistart:istop 9 K ci nnz 2 0 nnz k = nnz' +nnz 1 k Pi+1 Pi Innzi K\nfori = 0to N -1 istart = Pi istop = Pi+1 Ci = Cistart:istop di = distartistop. for c1 = 0 toc;1 for c2 = C1 toci dk. = dco : dc1 ck =? n=n+1\nThe crux of the problem is at line[24] Given the arbitrary columns involved in a polynomial featur. of A, we need to determine the corresponding column of Ak. We cannot simply reset a counter fo. each row as we did in the dense algorithm, because only columns corresponding to nonzero value are stored. Any time a column that would have held a zero value is implicitly skipped, the counte would err.\nJ0,J1,...,Jk-1 > Pjoji...ik-1 E{0, 1,..\nwhere the entry in row i, column j, displays the value f(i, j). We let T2(n) = n(n + 1) be the nth triangular number; then in Equation[2] column j (for j > O) contains entries with T2(j - 1)\nTo develop a general algorithm, we require a mapping from columns of A to a column of Ak. 
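To ground the discussion, here is a runnable Python sketch of the degree-2 interaction expansion on a CSR matrix using scipy.sparse. The column mapping h2 it uses is exactly the f(i, j) derived in the next paragraphs; everything else (function names, the explicit Python loops) is an illustrative reconstruction of the pseudocode above, not the authors' implementation.

import numpy as np
from scipy.sparse import csr_matrix

def h2(i, j):
    # Column mapping f(i, j) = i + T2(j - 1) = (2i + j^2 - j) / 2 for i < j,
    # derived below; it sends each column pair of A to a column of A^2.
    return (2 * i + j * j - j) // 2

def csr_interactions_degree2(A):
    # Compute only products of nonzero pairs within each row, so the work
    # per row is O((dD)^2) instead of O(D^2); nothing is ever densified.
    N, D = A.shape
    p, c, dat = A.indptr, A.indices, A.data
    ck, dk, pk = [], [], [0]
    for row in range(N):
        cols = c[p[row]:p[row + 1]]
        vals = dat[p[row]:p[row + 1]]
        for a in range(len(cols)):
            for b in range(a + 1, len(cols)):
                ck.append(h2(cols[a], cols[b]))
                dk.append(vals[a] * vals[b])
        pk.append(len(ck))
    return csr_matrix((dk, ck, pk), shape=(N, D * (D - 1) // 2))

A = csr_matrix(np.array([[1.0, 0.0, 2.0, 3.0],
                         [0.0, 4.0, 0.0, 5.0]]))
print(csr_interactions_degree2(A).toarray())

On this 4-column example, the nonzero pair (columns 2, 3) of row 0 lands in output column h2(2, 3) = 5, matching Equation 2 below.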
If there are D columns of A and (D choose k) columns of A^k, this can be accomplished by a bijective mapping of the following form, illustrated for k = 2 and D = 4 by Equation 2:

x 0 1 3
x x 2 4
x x x 5
x x x x

In Equation 2, column j (for j > 0) contains the entries e with T2(j − 1) ≤ e < T2(j), where T2(n) = n(n + 1)/2 is the nth triangular number; the entry in the ith row is just i + T2(j − 1). Thus we have f(i, j) = i + T2(j − 1) = ½(2i + j² − j). For instance, in column j = 2 in our example (the third column), the entry in row i = 1 is i + T2(j − 1) = 1 + 1 = 2.

For three indices i, j, k, with 0 ≤ i < j < k < D, we have a similar recurrence. Calling the mapping h, we have

h(i, j, k) = i + T2(j − 1) + T3(k − 2) = T1(i) + T2(j − 1) + T3(k − 2),

and from this the generalization to higher dimensions is straightforward. The formulas for "higher triangular numbers", i.e., those defined by

Tk(n) = Σ_{i=1}^{n} T_{k−1}(i),

give the closed form

h(i, j, k) = i + (j − 1)j/2 + ((k − 2)³ + 3(k − 2)² + 2(k − 2))/6.

With the mapping from columns of A to a column of A^k, we can now write the final form of the innermost loop of the algorithm from Section 3.2. Let the mapping for k = 2 be denoted h2. Then the innermost loop becomes:

for c2 = c1 to |ci| − 1
    j0 = ci[c1]
    j1 = ci[c2]
    ck_n = h2(j0, j1)
    dk_n = di[c1] · di[c2]
    n = n + 1

The algorithm can be generalized to higher degrees by simply adding more nested loops, using higher-order mappings, modifying the output dimensionality, and adjusting the counting of nonzero polynomial features in line 9.

Calculating k-degree polynomial features via our method for a vector of dimensionality D and density d requires (dD + k − 1 choose k) products (k-combinations with repetition of the dD nonzero entries). The complexity of the algorithm, for fixed k ≪ dD, is therefore

(dD + k − 1 choose k) = (dD + k − 1)! / (k! (dD − 1)!)
                      = (dD + k − 1)(dD + k − 2) ⋯ (dD) / k!
                      = O((dD + k − 1)(dD + k − 2) ⋯ (dD))
                      = O(d^k D^k).

To demonstrate how our algorithm scales with the density of a matrix, we compare it to the traditional polynomial expansion algorithm in the popular machine learning library scikit-learn (Pedregosa et al., 2011) in the task of generating second degree polynomial expansions. Matrices of size 100 × 5000 were randomly generated with densities of 0.2, 0.4, 0.6, 0.8, and 1.0. Thirty matrices of each density were generated, and the mean times (gray) of each algorithm were plotted. The red or blue width around the mean marks the third standard deviation from the mean. The time to densify the input to the standard algorithm was not counted.

The standard algorithm's runtime stays constant no matter the density of the matrix. This is because it does not avoid products that result in zero, but simply multiplies all second-order combinations of features. Our algorithm scales quadratically with respect to the density. If the task were third degree expansions rather than second, the plot would show cubic scaling.

[Figure 1 plots "How Expansion Algorithms Scale with Density": dense algorithm (top) vs sparse algorithm (bottom); x-axis: matrix density d; y-axis: time (seconds).]
Figure 1: Our algorithm (bottom) scales with the density of a matrix, unlike the traditional polynomial feature expansion method (top). The task was a second degree expansion, which is why the time of our algorithm scales quadratically with the density.

The fact that our algorithm is approximately 6.5 times faster than the scikit-learn algorithm on 100 × 5000 matrices that are entirely dense is likely a language implementation difference. What matters is that the time of our algorithm increases quadratically with respect to the density, in accordance with the big-O analysis."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "J.D. Gergonne.
The application of the method of least squares to the interpolation of sequences. Historia Mathematica, 1(4):439-447, 1974.

We have developed an algorithm for performing polynomial feature expansions on CSR matrices that scales polynomially with respect to the density of the matrix. The areas within machine learning that this work touches are not en vogue, but they are workhorses of industry, and every improvement in core representations has an impact across a broad range of applications."}]
B1s6xvqlx
[{"section_index": "0", "section_name": "RECURRENT ENVIRONMENT SIMULATORS", "section_text": "Silvia Chiappa, Sebastien Racaniere. Daan Wierstra & Shakir Mohamed\n{csilvia, sracaniere, wierstra, shakir}@google.com\nModels that can simulate how environments change in response to actions can be used by agents to plan and act efficiently. We improve on previous environment simulators from high-dimensional pixel observations by introducing recurrent neural networks that are able to make temporally and spatially coherent prediction for hundreds of time-steps into the future. We present an in-depth analysis of the factors affecting performance, providing the most extensive attempt to advance the understanding of the properties of these models. We address the issue of computationally inefficiency with a model that does not need to generate a high dimensional image at each time-step. We show that our approach can be used tc improve exploration and is adaptable to many diverse environments, namely 10 Atari games, a 3D car racing environment, and complex 3D mazes."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Simulating an environment requires models of temporal sequences that must possess a number. of properties to be useful: the models should make predictions that are accurate, temporally and spatially coherent over long time periods; and allow for flexibility in the policies and action sequences that are used. In addition, these models should be general-purpose and scalable, and able to learn. from high-dimensional perceptual inputs and from diverse and realistic environments. A model that achieves these desiderata can empower agent-based systems with a vast array of abilities, including counterfactual reasoning (Pearl|2009), intuitive physical reasoning (McCloskey|1983), model-based exploration, episodic control (Lengyel & Dayan2008), intrinsic motivation (Oudeyer et al.2007) and hierarchical control.\nDeep neural networks have recently enabled significant advances in simulating complex environments allowing for models that consider high-dimensional visual inputs across a wide variety of domains [Wahlstrom et al.]2015,Watter et al.[[2015] [Sun et al.[[2015, Patraucean et al.[[2015). The model of Oh et al.(2015) represents the state-of-the-art in this area, demonstrating high long-term accuracy in deterministic and discrete-action environments.\nIn this paper we advance the state-of-the-art in environment modelling. We build on the work of|Oh. et al.(2015), and develop alternative architectures and training schemes that significantly improves. performance, and provide in-depth analysis to advance our understanding of the properties of these\nFigure 28: Prediction error for different prediction lengths through truncated BPTT on (a) Qbert anc (b) Riverraid."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In order to plan and act effectively, agent-based systems require an ability to anticipate the conse. quences of their actions within an environment, often for an extended period into the future. Agents can be equipped with this ability by having access to models that can simulate how the environments changes in response to their actions. The need for environment simulation is widespread: in psy-. chology, model-based predictive abilities form sensorimotor contingencies that are seen as essential. 
for perception (O'Regan & NoeJ 2001); in neuroscience, environment simulation forms part of deliberative planning systems used by the brain (Niv|2009); and in reinforcement learning, the ability. to imagine the future evolution of an environment is needed to form predictive state representations Littman et al.]2002) and for Monte Carlo planning (Sutton & Barto1998).\nDespite these advances, there are still several challenges and open questions. Firstly, the properties. of these simulators in terms of generalisation and sensitivity to the choices of model structure and training are poorly understood. Secondly, accurate prediction for long time periods into the future. remains difficult to achieve. Finally, these models are computationally inefficient, since they require the prediction of a high-dimensional image each time an action is executed, which is unnecessary in situations where the agent is interested only in the final prediction after taking several actions..\n120 40 BPTT(15, 1) 0% PDT 35 100 BPTT(15, 2) 33%PDT BPTT(15, 5) 5 100% PDT 80 25 60 at 40 15 20 10 0 50 25 50 75 100 40 80 120 160 200 240 Time-steps Number of Frames 40 160 35 140 10 100 120 100 25 80 20 60 15 40 10 20 5 0. 0 40 80 120 160 200 240 0 40 80 120 160 200 240 Number of Frames Number of Frames (a) 200 50 BPTT(15, 1) 45 0% PDT 175 BPTT(15, 2) 33%PDT 5 40 BPTT(15, 5) 150 100% PDT 35 rne 30 Preleion 100 Ernr 25 75 20 50 15 25 2 10 0 5 1 25 50 75 100 0 40 80 120 160 200 240 Time-stepse Number of Frames 50 220 45 200 001 180 40 160 35 140 30 120 25 100 ureeionn 20 80 15 60 10 40 0 0 40 80 120 160 200 240 40 80 120 160 200 240 Number of Frames Number of Frames (b)\nat aT a+1 Lt. S 1 ST decod XT+2 X X1 X +1 Xt Xt Xt-1 Xr+1 XT+2 Xt-1 Xt (a) (b)\nFigure 1: Graphical model representing (a) the recurrent structure used in Oh et al.[(2015) and (b) our recurrent structure. Filled and empty nodes indicate observed and hidden variables respectively.\nmodels. We also introduce a simulator that does not need to predict visual inputs after every actior reducing the computational burden in the use of the model. We test our simulators on three divers and challenging families of environments, namely Atari 2600 games, a first-person game where ar agent moves in randomly generated 3D mazes, and a 3D car racing environment; and show that they can be used for model-based exploration."}, {"section_index": "3", "section_name": "RECURRENT ENVIRONMENT SIMULATORS", "section_text": "Our starting point is the recurrent simulator of Oh et al.[(2015), which is the state-of-the-art in simulating deterministic environments with visual observations (frames) and discrete actions. This simulator is a recurrent neural network with the following backbone structure:\nSt =fSt-1,C(ll(Xt-1,Xt-1))), Xt = DSt,t-1\nIn this equation, st is a hidden state representation of the environment, and f a non-linear deterministic state transition function. The symbol I indicates the selection of the predicted frame -1 or real frame x-1, producing two types of state transition called prediction-dependent transition and observation-dependent transition respectively. 
C is an encoding function consisting of a series of convolutions, and D is a decoding function that combines the state s_t with the action a_{t-1} through a multiplicative interaction, and then transforms it using a series of full convolutions to form the predicted frame x̂_t.

The model is trained using stochastic gradient descent, in which each mini-batch consists of a set of segments of length τ + T randomly sub-sampled from x_{1:τ+T'}. For each segment in the mini-batch, the model uses the first τ observations to evolve the state and forms predictions of the last T observations only. Training comprises three phases, differing in the use of prediction-dependent or observation-dependent transitions (after the first τ transitions) and in the value of the prediction length T. In the first phase, the model uses observation-dependent transitions and predicts for T = 1 time-steps. In the second and third phases, the model uses prediction-dependent transitions and predicts for T = 3 and T = 5 time-steps respectively. During evaluation or usage, the model can only use prediction-dependent transitions.

Figure 29: Prediction error for different prediction lengths through truncated BPTT on (a) Seaquest and (b) Space Invaders.

An environment simulator is a model that, given a sequence of actions a_1, ..., a_{τ-1} = a_{1:τ-1} and corresponding observations x_{1:τ} of the environment, is able to predict the effect of subsequent actions a_{τ:τ+T'-1}, such as forming predictions x̂_{τ+1:τ+T'} or state representations s_{τ+1:τ+T'} of the environment.

The model is trained to minimise the mean squared error between the observed time-series x_{τ+1:τ+T'}, corresponding to the evolution of the environment, and its prediction. In a probabilistic framework, this corresponds to maximising the log-likelihood in the graphical model depicted in Fig. 1(a). In this graph, the link from x̂_t to x_t represents stochastic dependence, as x_t is formed by adding to x̂_t a Gaussian noise term with zero mean and unit variance, whilst all remaining links represent deterministic dependences. The dashed lines indicate that only one of the two links is active, depending on whether the state transition is prediction-dependent or observation-dependent.

A strong feature of the model of Oh et al. (2015) described above is that the actions influence the state transitions only indirectly through the predictions or the observations. Allowing the actions to condition the state transitions directly could potentially enable the model to incorporate action information more effectively. We therefore propose the following backbone structure:

s_t = f(s_{t-1}, a_{t-1}, C(I(x̂_{t-1}, x_{t-1}))),   x̂_t = D(s_t).

In the graphical model representation, this corresponds to replacing the link from a_{t-1} to x̂_t with a link from a_{t-1} to s_t, as in Fig. 1(b).

In this section we compare the baseline state transition

v_t = W^h h_{t-1} ⊙ W^a a_{t-1},  i_t = σ(W^{iv} v_t + W^{iz} z_{t-1}),  f_t = σ(W^{fv} v_t + W^{fz} z_{t-1}),  o_t = σ(W^{ov} v_t + W^{oz} z_{t-1}),  c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W^{cv} v_t + W^{cz} z_{t-1}),  h_t = o_t ⊙ tanh(c_t),

with the alternatives described in what follows. More specifically, in Figs. 30-34 we compare the baseline transition with the following alternatives.

In principle, the highest accuracy should be obtained by training the model as closely as possible to the way it will be used, and therefore by using a number of prediction-dependent transitions which is as close as possible to the number of time-steps the model will be asked to predict for.
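A hedged sketch of one training update under this segment scheme, using the interface of the previous snippet, is given below; the split of the T prediction steps into observation-dependent and prediction-dependent ones is the knob varied by the PDT schemes analysed later (the gradient truncation at the warm-up boundary used in the paper is omitted for brevity):

```python
import torch.nn.functional as F

def training_step(sim, opt, frames, actions, state, tau=10, T=15, num_pdt=15):
    # frames: x_1..x_{tau+T+1}; the first tau transitions are warm-up.
    # The last num_pdt of the T prediction steps are prediction-dependent
    # (num_pdt = 15 -> 100% PDT, 5 -> 33% PDT, 0 -> 0% PDT, for T = 15).
    x_hat, loss = frames[0], 0.0
    for t in range(tau + T):
        use_pred = t >= tau + (T - num_pdt)
        inp = x_hat if use_pred else frames[t]          # I(x_hat, x)
        state = sim.f(state, actions[t], sim.C(inp))
        x_hat = sim.D(state)
        if t >= tau:                                    # loss over the T predictions only
            loss = loss + F.mse_loss(x_hat, frames[t + 1])
    opt.zero_grad()
    (loss / T).backward()
    opt.step()
    return loss.item() / T
```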
However, prediction-dependent transitions increase the complexity of the objective function, such that alternative schemes are most often used (Talvitie, 2014; Bengio et al., 2015; Oh et al., 2015). Current training approaches are guided by the belief that using the observation x_{t-1}, rather than the prediction x̂_{t-1}, to form the state s_t has the effect of reducing the propagation of the errors made in the predictions, which are higher at earlier stages of the training, enabling the model to correct itself from the mistakes made up to time-step t-1. For example, Bengio et al. (2015) introduce a scheduled sampling approach where at each time-step the type of state transition is sampled from a Bernoulli distribution, with parameter annealed from an initial value corresponding to using only observation-dependent transitions to a final value corresponding to using only prediction-dependent transitions, according to a schedule selected by validation.

Our analysis of different training schemes on Atari, which considered the interplay among warm-up length τ, prediction length T, and number of prediction-dependent transitions, suggests that, rather than as having a corrective effect, observation-dependent transitions should be seen as restricting the time interval in which the model considers its predictive abilities, and therefore focuses its resources. Indeed we found that the higher the number of consecutive prediction-dependent transitions, the more the model is encouraged to focus on learning the global dynamics of the environment, which results in higher long-term accuracy. The highest long-term accuracy is always obtained by a training scheme that uses only prediction-dependent transitions, even at the early stages of the training. Focussing on learning the global dynamics comes at the price of shifting model resources away from learning the precise details of the frames, leading to a decrease in short-term accuracy. Therefore, for complex games for which reasonable long-term accuracy cannot be obtained, training schemes that mix prediction-dependent and observation-dependent transitions are preferable. It follows from this analysis that the percentage of consecutive prediction-dependent transitions, rather than just the percentage of such transitions, should be considered when designing training schemes.

From this viewpoint, the poor results obtained in Bengio et al. (2015) when using only prediction-dependent transitions can be explained by the difference in the type of the tasks considered. Indeed, unlike our case in which the model is tolerant to some degree of error such as blurriness in earlier predictions, the problems considered in Bengio et al. (2015) are of a discrete nature and such that one prediction error at earlier time-steps can severely affect predictions at later time-steps, so that the model needs to be highly accurate short-term in order to perform reasonably longer-term. Also, Bengio et al. (2015) treated the prediction used to form s_t as a fixed quantity, rather than as a function of s_{t-1}, and therefore did not perform exact maximum likelihood.

Base2816: The vectors h_{t-1} and v_t have the same dimension as z_{t-1}, namely 2816. This model has around 80M parameters.

i^z and i^z2816: Have a separate gating i^z_t for z_{t-1} in the cell update, i.e.
with one of the following cell updates:

c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W^{cv} v_t) + i^z_t ⊙ tanh(W^{cz} z_{t-1})   or   c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W^{cv} v_t) + i^z_t ⊙ tanh(z_{t-1}),

where the second form applies when the state has the same dimension as z_{t-1}. The '∅ z_{t-1}' and 'h_{t-1}' variants instead modify the gate updates: in the former the gates are computed from v_t only, i.e. i_t = σ(W^{iv} v_t), f_t = σ(W^{fv} v_t), o_t = σ(W^{ov} v_t); in the latter h_{t-1} takes the place of z_{t-1}, i.e. i_t = σ(W^{iv} v_t + W^{ih} h_{t-1}), f_t = σ(W^{fv} v_t + W^{fh} h_{t-1}), o_t = σ(W^{ov} v_t + W^{oh} h_{t-1}).

As we can see from the figures, there is no other transition that is clearly preferable to the baseline, with the exception of Fishing Derby, for which transitions with 2816 hidden dimensionality perform better and converge earlier in terms of number of parameter updates."}, {"section_index": "4", "section_name": "PREDICTION-INDEPENDENT STATE TRANSITION", "section_text": "In addition to potentially enabling the model to incorporate action information more effectively, allowing the actions to directly influence the state dynamics has another crucial advantage: it allows us to consider the case of a state transition that does not depend on the frame, i.e. of the form s_t = f(s_{t-1}, a_{t-1}), corresponding to removing the dashed links from x_{t-1} and from x̂_{t-1} to s_t in Fig. 1(b). We shall call such a model a prediction-independent simulator, referring to its ability to evolve the state without using the prediction during usage. Prediction-independent state transitions for high-dimensional observation problems have also been considered in Srivastava et al. (2015).

In Figs. 35-39 we compare the baseline transition with the following convolutional alternatives (where, to apply the convolutional transformations, the vectors z_{t-1} and v_t of dimensionality 2816 are reshaped into tensors of dimension 32×11×8),

where the vectors h_{t-1} and v_t have dimension 1024 and 2048 respectively (this model has around 25 million (25M) parameters), with alternatives using unconstrained or convolutional transformations, for prediction length T = 15 and the 0%-100%PDT training scheme.

The last two phases in the training scheme of Oh et al. (2015) described above are used to address the issue of poor accuracy that recurrent neural networks trained using only observation-dependent transitions display when asked to predict several time-steps ahead. However, the paper does not analyse or discuss alternative training schemes.

C and 2C: Convolutional gate and cell updates, i.e."}, {"section_index": "5", "section_name": "PREDICTION-DEPENDENT SIMULATORS", "section_text": "CDA and 2CDA: As above but with different action fusion parameters for the gate and cell updates.

We analyse simulators with state transition of the form s_t = f(s_{t-1}, a_{t-1}, C(I(x̂_{t-1}, x_{t-1}))) on three families of environments with different characteristics and challenges, namely Atari 2600 games from the arcade learning environment (Bellemare et al., 2013), a first-person game where an agent moves in randomly generated 3D mazes (Beattie et al., 2016), and a 3D car racing environment called TORCS (Wymann et al., 2013). We use two evaluation protocols. In the first one, the model is asked to predict for 100 or 200 time-steps into the future using actions from the test data. In the second one, a human uses the model as an interactive simulator.
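A sketch of the first protocol, under the same assumed interface as the earlier snippets, follows; the per-time-step squared error it records is what the prediction-error curves below report:

```python
import torch

@torch.no_grad()
def evaluate_rollout(sim, frames, actions, state, tau=10, horizon=100):
    """Warm up on tau observed frames, then predict `horizon` steps ahead
    using only prediction-dependent transitions, as done at usage time."""
    for t in range(tau):                          # observation-dependent warm-up
        state = sim.f(state, actions[t], sim.C(frames[t]))
    x_hat = sim.D(state)
    errors = [((x_hat - frames[tau]) ** 2).mean().item()]
    for t in range(tau + 1, tau + horizon):       # prediction-dependent rollout
        state = sim.f(state, actions[t - 1], sim.C(x_hat))
        x_hat = sim.D(state)
        errors.append(((x_hat - frames[t]) ** 2).mean().item())
    return errors
```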
The first protocol enables us to determine how the model performs within the action policy of the training data, whilst the second protocol enables us to explore how the model generalises to other action policies.

These two models have around 40M parameters.

As state transition, we used the following action-conditioned long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997):

Action fusion: v_t = W^h h_{t-1} ⊙ W^a a_{t-1},
Gate update: i_t = σ(W^{iv} v_t + W^{iz} z_{t-1}),  f_t = σ(W^{fv} v_t + W^{fz} z_{t-1}),  o_t = σ(W^{ov} v_t + W^{oz} z_{t-1}),
Cell update: c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W^{cv} v_t + W^{cz} z_{t-1}),
State update: h_t = o_t ⊙ tanh(c_t),

where ⊙ denotes the Hadamard product, σ the logistic sigmoid function, a_{t-1} is a one-hot vector representation of the action, and the W are parameter matrices. In Eqs. (2)-(5), h_t and c_t are the LSTM state and cell forming the model state s_t = (h_t, c_t); and i_t, f_t, and o_t are the input, forget, and output gates respectively (for simplicity, we omit the biases in their updates). More details about this structure and the exact form of the encoding and decoding functions C and D for each family of environments can be found in Appendix B.1, B.2 and B.3. We used a warm-up phase of length τ = 10 and we did not backpropagate the gradient to this phase.

We considered the 10 Atari games Freeway, Ms Pacman, Qbert, Seaquest, Space Invaders, Bowling, Breakout, Fishing Derby, Pong, and Riverraid. Of these, the first five were analysed in Oh et al. (2015) and are used for comparison. The remaining five were chosen to better test the ability of the model in environments with other challenging characteristics, such as scrolling backgrounds (Riverraid), small/thin objects that are key aspects of the game (lines in Fishing Derby, ball in Pong and Breakout), and sparse-reward games that require very long-term predictions (Bowling). We used training and test datasets consisting of five and one million 210×160 RGB images respectively, with actions chosen from a trained DQN agent (Mnih et al., 2015) according to an ε = 0.2-greedy policy. Such a large number of training frames ensured that our simulators did not strongly overfit to the training data (see training and test lines in Figs. 2 and 3, and the discussion in Appendix B.1)."}, {"section_index": "6", "section_name": "B.1.3 ACTION INCORPORATION", "section_text": "Below we summarise our results on the interplay among warm-up length τ, prediction length T, and number of prediction-dependent transitions - the full analysis is given in Appendix B.1.1.

The warm-up and prediction lengths τ and T regulate the degree of accuracy in two different ways. 1) The value of τ + T determines how far into the past the model can access information - this is the case irrespective of the type of transition used,
although when using prediction-dependent transitions, information about the last T time-steps of the environment would need to be inferred. Accessing information far back into the past can be necessary even when the model is used to perform one-step-ahead prediction only. 2) The higher the value of T and the number of prediction-dependent transitions, the more the corresponding objective function encourages long-term accuracy. This is achieved by guiding the one-step-ahead prediction error in such a way that further predictions will not be strongly affected, and by teaching the model to make use of information from the far past. The more precise the model is in performing one-step-ahead prediction, the less noise guidance should be required. Therefore, models with very accurate convolutional and transition structures should need less encouragement.

This model has around 22M parameters.

i_t = σ(C^{iv}(v_t) + C^{iz}(z_{t-1})),  f_t = σ(C^{fv}(v_t) + C^{fz}(z_{t-1})),  o_t = σ(C^{ov}(v_t) + C^{oz}(z_{t-1})),  c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(C^{cv}(v_t) + C^{cz}(z_{t-1})),

where C denotes either one convolution with 32 filters of size 3×3, with stride 1 and padding 1 (so as to preserve the input size), or two such convolutions with an RReLU nonlinearity in between. These two models have around 16M parameters.

A prediction-independent simulator can dramatically increase computational efficiency in situations in which the agent is interested in the effect of a sequence of actions rather than of a single action. Indeed, such a model does not need to project the lower-dimensional state into the higher-dimensional prediction through the set of convolutions, and vice versa, at each time-step.

For the CDA and 2CDA variants, separate action fusions v^g_t = W^h h_{t-1} ⊙ W^{ga} a_{t-1} and v^c_t = W^h h_{t-1} ⊙ W^{ca} a_{t-1} enter the gate and cell updates respectively, i.e. i_t = σ(C^{iv}(v^g_t) + C^{iz}(z_{t-1})), f_t = σ(C^{fv}(v^g_t) + C^{fz}(z_{t-1})), o_t = σ(C^{ov}(v^g_t) + C^{oz}(z_{t-1})), c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(C^{cv}(v^c_t) + C^{cz}(z_{t-1})).

h_{t-1}-i^z2816-CDA and h_{t-1}-i^z2816-2CDA: As above but with different parameters for the gate and cell updates, and one or two convolutions. These two models have around 48M parameters. h_{t-1}-i^z2816-2CA: As 'h_{t-1}-i^z2816' with convolutional action fusion, gate and cell updates, i.e. v_t = C^h(h_{t-1}) ⊙ W^a a_{t-1}, i_t = σ(C^{iv}(v_t) + C^{ih}(h_{t-1})), f_t = σ(C^{fv}(v_t) + C^{fh}(h_{t-1})), o_t = σ(C^{ov}(v_t) + C^{oh}(h_{t-1})), c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(C^{cv}(v_t) + C^{ch}(h_{t-1})) + i^z_t ⊙ tanh(z_{t-1}),

where C indicates two convolutions as above. This model has around 8M parameters.

In Figs. 40-44 we compare different ways of incorporating the action for action-dependent state transitions, using prediction length T = 15 and the 0%-100%PDT training scheme. More specifically, we compare the baseline structure (denoted as 'W^h h_{t-1} ⊙ W^a a_{t-1}' in the figures) with the following alternatives:

i_t = σ(W^{ih} h_{t-1} + W^{iv} v_t),  f_t = σ(W^{fh} h_{t-1} + W^{fv} v_t),  o_t = σ(W^{oh} h_{t-1} + W^{ov} v_t),  c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W^{ch} h_{t-1} + W^{cv} v_t),

W^h h_{t-1} ⊙ W^z z_{t-1} ⊙ W^a a_{t-1}: Multiplicative interaction of the action with both h_{t-1} and z_{t-1}, i.e. v_t = W^h h_{t-1} ⊙ W^z z_{t-1} ⊙ W^a a_{t-1}, with the remaining updates as in Eqs. (2)-(5).

Increasing the percentage of consecutive prediction-dependent transitions increases long-term accuracy, often at the expense of short-term accuracy. We found that using only observation-dependent transitions leads to poor performance in most games. Increasing the number of consecutive prediction-dependent transitions produces an increase in long-term accuracy, but also a decrease in short-term accuracy, usually corresponding to a reduction in sharpness. For games that are too complex, although the lowest prediction error is still achieved with prediction-dependent transitions only, reasonable long-term accuracy cannot be obtained, and training schemes that mix prediction-dependent and observation-dependent transitions are therefore preferable.
W^h h_{t-1} ⊙ W^{a1} a_{t-1}: Alternative multiplicative interaction of the action with h_{t-1}.

To illustrate these results, we compare the following training schemes for prediction length T = 15:

- 0% PDT: Only observation-dependent transitions.
- 33% PDT: Observation and prediction-dependent transitions for the first 10 and last 5 time-steps respectively.
- 0%-20%-33% PDT: Only observation-dependent transitions in the first 10,000 parameter updates; observation-dependent transitions for the first 12 time-steps and prediction-dependent transitions for the last 3 time-steps for the subsequent 100,000 parameter updates; observation-dependent transitions for the first 10 time-steps and prediction-dependent transitions for the last 5 time-steps for the remaining parameter updates (adaptation of the training scheme of Oh et al. (2015) to T = 15).
- 46% PDT Alt.: Alternate between observation-dependent and prediction-dependent transitions from one time-step to the next.
- 46% PDT: Observation and prediction-dependent transitions for the first 8 and last 7 time-steps respectively.
- 67% PDT: Observation and prediction-dependent transitions for the first 5 and last 10 time-steps respectively.
- 0%-100% PDT: Only observation-dependent transitions in the first 1000 parameter updates; only prediction-dependent transitions in the subsequent parameter updates.
- 100% PDT: Only prediction-dependent transitions.

As Input: Consider the action as an additional input, i.e. add an action term inside each gate and cell update (the equations are given in the next subsection).

This model has around 19M parameters.

For completeness, we also consider a training scheme as in Oh et al. (2015), which consists of three phases with T = 1, T = 3, T = 5, and 500,000, 250,000, and 750,000 parameter updates respectively. In the first phase, s_t is formed by using the observed frame x_{t-1}, whilst in the two subsequent phases s_t is formed by using the predicted frame x̂_{t-1}.

In Figs. 2 and 3 we show the prediction error averaged over 10,000 sequences for the games of Bowling, Fishing Derby, Pong and Seaquest. More specifically, Fig. 2(a) shows the error for predicting up to 100 time-steps ahead after the model has seen 200 million frames (corresponding to half a million parameter updates using mini-batches of 16 sequences), using actions and warm-up frames from the test data, whilst Figs. 2(b)-(c) and 3 show the error at time-steps 5, 10 and 100 versus the number of frames seen by the model.

As we can see from the figures, 'CA' is generally considerably worse than the other structures.

These figures clearly show that long-term accuracy generally improves with increasing number of consecutive prediction-dependent transitions. When using alternating (46% PDT Alt.), rather than consecutive (46% PDT), prediction-dependent transitions, long-term accuracy is worse, as we are effectively asking the model to predict at most two time-steps ahead. We can also see that using more prediction-dependent transitions produces lower short-term accuracy and/or slower short-term convergence. Finally, the figures show that using a training phase with only observation-dependent transitions that is too long, as in Oh et al. (2015), can be detrimental: the model reaches at best a performance similar to the 46% PDT Alt. training scheme (the sudden drop in prediction error corresponds to transitioning to the second training phase), but is most often worse.

ACTION-INDEPENDENT VERSUS ACTION-DEPENDENT STATE TRANSITION

In this game, the player is given two chances to roll a ball down an alley in an attempt to knock down as many of the ten pins as possible, after which the score is updated and the knocked pins are relocated. Knocking down every pin on the first shot is a strike, while knocking every pin down in both shots is a spare. The player's score is determined by the number of pins knocked down, as well as the number of strikes and spares acquired.
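A minimal sketch of the frame-augmentation ('CA') action incorporation compared in these figures (described in detail below); the plane layout is an assumption, and only the zero/one action planes come from the text:

```python
import torch

def augment_with_action(frame, action_id, num_actions):
    """A(frame, a): append num_actions constant planes to the frame, all zeros
    except the plane of the chosen action, which is all ones. The action then
    enters the first convolution as a locally linear (additive) term.

    frame: (B, 3, 210, 160) tensor; action_id: (B,) long tensor.
    """
    B, _, H, W = frame.shape
    planes = torch.zeros(B, num_actions, H, W, device=frame.device)
    planes[torch.arange(B), action_id] = 1.0   # one-hot across the extra channels
    return torch.cat([frame, planes], dim=1)   # (B, 3 + nA, H, W)
```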
The remaining alternatives modify the action fusion: v_t = W^z z_{t-1} ⊙ W^a a_{t-1} fuses the action with the frame encoding instead of the state, and v_t = W^h h_{t-1} ⊙ W^{a1} a_{t-1} + W^{a2} a_{t-1} adds a second, purely additive action term; in each case the gate, cell and state updates are then formed as in Eqs. (2)-(5).

This model has around 28M parameters. We also considered having different matrices for the gate and cell updates (denoted in the figures as 'W^{*h} h_{t-1} ⊙ W^{*a1} a_{t-1}'). This model has around 51M parameters.

In the 'As Input' variant, the action enters all updates additively: i_t = σ(W^{ih} h_{t-1} + W^{iz} z_{t-1} + W^{ia} a_{t-1}), f_t = σ(W^{fh} h_{t-1} + W^{fz} z_{t-1} + W^{fa} a_{t-1}), o_t = σ(W^{oh} h_{t-1} + W^{oz} z_{t-1} + W^{oa} a_{t-1}), c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W^{ch} h_{t-1} + W^{cz} z_{t-1} + W^{ca} a_{t-1}).

CA: z_{t-1} = C(A(I(x̂_{t-1}, x_{t-1}), a_{t-1})),

where A indicates an augmenting operation: the frame of dimension nC×nH×nW = 3×210×160 is augmented with nA (number of actions) full-zero or full-one matrices of dimension nH×nW, producing a tensor of dimension (nC + nA)×nH×nW. As the output of the first convolution can be written as a sum over the spatial positions h = 1, ..., nH, w = 1, ..., nW within the filter support and over the input channels i = 1, ..., nC + nA, where dH and dW indicate the filter strides, with this augmentation the action has a local linear interaction. This model has around 19M parameters (a code sketch of this augmentation was given above).

In Fig. 45 we compare the baseline structure with one that is action-independent as in Oh et al. (2015), using prediction length T = 15 and the 0%-100%PDT training scheme.

As we can see, having an action-independent state transition generally gives worse performance in the games with higher error. An interesting disadvantage of such a structure is its inability to predict the moving objects around the agent in Seaquest. This can be noticed in the videos in Seaquest, which show poor modelling of the fish. This structure also makes it more difficult to correctly update the score in some games such as Seaquest and Fishing Derby.

By looking at the predicted frames we could notice that, in games containing balls and paddles, using only observation-dependent transitions gives rise to errors in reproducing the dynamics of these objects. Such errors decrease with increasing prediction-dependent transitions.
In other games, using only observation-dependent transitions causes the model to fail in representing moving objects, except for the agent in most cases. Training schemes containing more prediction-dependent transitions encourage the model to focus more on learning the dynamics of the moving objects and less on details that would only increase short-term accuracy, giving rise to more globally accurate but less sharp predictions. Finally, in games that are too complex, the strong emphasis on long-term accuracy produces predictions that are overall not sufficiently good.

Highlighted names like these are direct links to folders containing videos. Each video consists of 5 randomly selected 200 time-steps ahead predictions separated by black frames (the generated frames are shown on the left whilst the real frames are shown on the right - the same convention will be used throughout the paper). Shown are 15 frames per second. Videos associated with the material discussed in this and following sections can also be found at https://sites.google.com/site/resvideos1729.

Figure 30: Prediction error (average over 10,000 sequences) for different action-dependent state transitions on (a) Bowling and (b) Breakout. Parameter updates are in millions.

Figure 2: Prediction error averaged over 10,000 sequences on (a)-(b) Bowling and (c) Fishing Derby for different training schemes. The same color and line code is used in all figures. (a): Prediction error vs time-steps after the model has seen 200 million frames. (b)-(c): Prediction error vs number of frames seen by the model at time-steps 10 and 100.

Figure 3: Prediction error on (a) Pong and (b) Seaquest for different training schemes.

More specifically, from the videos available at PDTvsODT, we can see that using only observation-dependent transitions has a detrimental effect on long-term accuracy for Fishing Derby, Ms Pacman, Qbert, Riverraid, Seaquest and Space Invaders. The most salient features of the videos are: consistent inaccuracy in predicting the paddle and ball in Breakout; reset to a new life after a few time-steps in Ms Pacman; prediction of background only after a few time-steps in Qbert; no generation of new objects or background in Riverraid; quick disappearance of existing fish and no appearance of new fish from the sides of the frame in Seaquest. For Bowling, Freeway, and Pong, long-term accuracy is generally good, but the movement of the ball is not always correctly predicted in Bowling and Pong, and the chicken sometimes disappears in Freeway.
On the other hand, using only prediction-dependent transitions results in good long-term accuracy for Bowling, Fishing Derby, Freeway, Pong, Riverraid, and Seaquest: the model accurately represents the paddle and ball dynamics in Bowling and Pong; the chicken hardly disappears in Freeway; and new objects and background are created and most often correctly positioned in Riverraid and Seaquest.

The trading-off of long-term for short-term accuracy when using more prediction-dependent transitions is particularly evident in the videos of Seaquest: the higher the number of such transitions, the better the model learns the dynamics of the game, with new fish appearing in the right location more often. However, this comes at the price of reduced sharpness, mostly in representing the fish.

Figure 31: Prediction error for different action-dependent state transitions on (a) Fishing Derby and (b) Freeway.

This trade-off causes problems in Breakout, Ms Pacman, Qbert, and Space Invaders, so that schemes that also use observation-dependent transitions are preferable for these games. For example, in Breakout, the model fails at representing the ball, making the predictions not sufficiently good. Notice that the prediction error (see Fig. 15 in Appendix B.1.1) is misleading in terms of desired performance, as the 100%PDT training scheme performs as well as other mixing schemes for long-term accuracy: this highlights the difficulties in evaluating the performance of these models.

Figure 4: Prediction error vs number of frames seen by the model (excluding warm-up frames) for (a) Pong and (b) Seaquest, using prediction lengths T = 10, 15, and 20, and training schemes 0%PDT, 67%PDT, and 100%PDT.

Increasing the prediction length T increases long-term accuracy when using prediction-dependent transitions. In Fig. 4 we show the effect of using different prediction lengths T ≤ 20 on the training schemes 0%PDT, 67%PDT, and 100%PDT for Pong and Seaquest. In Pong, with the 0%PDT training scheme, using higher T improves long-term accuracy: this is a game for which this scheme gives reasonable accuracy and the model is able to benefit from longer history. This is however not the case for Seaquest (or other games, as shown in Appendix B.1.1). On the other hand, with the 100%PDT training scheme, using higher T improves long-term accuracy in most games (the
difference is more pronounced between T = 10 and T = 15 than between T = 15 and T = 20), but decreases short-term accuracy. Similarly to above, reduced short-term accuracy corresponds to reduced sharpness: from the videos available at T20, we can see for example that the moving fish, when caught in Fishing Derby, is less sharp for higher T, as well as the fish in Seaquest and the ball in Pong.

Truncated backpropagation still enables an increase in long-term accuracy. Due to memory constraints, we could only backpropagate gradients over sequences of length up to 20. To use T > 20, we split the prediction sequence into subsequences and performed parameter updates separately for each subsequence. For example, to use T = 30, we split the prediction sequence into two successive subsequences of length 15, performed parameter updates over the first subsequence, initialised the state of the second subsequence with the final state from the first subsequence, and then performed parameter updates over the second subsequence. This approach corresponds to a form of truncated backpropagation through time (Williams & Zipser, 1995) - the extreme of this strategy (with T equal to the length of the whole training sequence) was used by Zaremba et al. (2014).

Figure 32: Prediction error for different action-dependent state transitions on (a) Ms Pacman and (b) Pong.

Figure 5: Prediction error vs number of frames seen by the model (excluding warm-up frames) for (a) Pong and (b) Seaquest, using BPTT(15, 1), BPTT(15, 2) and BPTT(15, 5), and training schemes 0%PDT, 33%PDT, and 100%PDT.

In Fig. 5 we show the effect of using 2 and 5 subsequences of length 15 (indicated by BPTT(15, 2) and BPTT(15, 5)) on the training schemes 0%PDT, 33%PDT, and 100%PDT, for Pong and Seaquest. We can see that the 0%PDT and 33%PDT training schemes display no difference in accuracy for different values of T. On the other hand, with the 100%PDT training scheme, using more than one subsequence improves long-term accuracy (the difference is more pronounced between T = 15 and T = 30 than between T = 30 and T = 75), but decreases short-term accuracy, the difference being small at convergence between T = 15 and T = 30, but big between T = 30 and T = 75. The decrease in accuracy with 5 subsequences is drastic in some games.

For Riverraid, using more than one subsequence improves long-term accuracy dramatically, as shown in Fig. 6, as it enables correct prediction after the agent's death.
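As an illustration, a hedged sketch of this subsequence scheme, e.g. BPTT(15, 2) for an effective T = 30 (the warm_up helper and interface are assumptions):

```python
def train_on_long_segment(sim, opt, frames, actions, tau=10, sub_len=15, n_sub=2):
    # Warm up on tau observed frames, then run n_sub successive prediction
    # subsequences of length sub_len. Each subsequence gets its own parameter
    # update; its initial state is the detached final state of the previous
    # one, so gradients are truncated at subsequence boundaries.
    state, x_hat = sim.warm_up(frames[:tau], actions[:tau])
    t0 = tau
    for _ in range(n_sub):
        loss = 0.0
        for t in range(t0, t0 + sub_len):
            state = sim.f(state, actions[t], sim.C(x_hat))  # prediction-dependent
            x_hat = sim.D(state)
            loss = loss + ((x_hat - frames[t + 1]) ** 2).mean()
        opt.zero_grad()
        (loss / sub_len).backward()
        opt.step()
        state = tuple(s.detach() for s in state)            # truncate BPTT here
        x_hat = x_hat.detach()
        t0 += sub_len
```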
Interestingly, using τ = 25 with prediction length T = 15 (black line), which produces the same history length τ + T, does not give the same amount of gain as when using two subsequences. This would seem to suggest that some improvement is due to encouraging longer-term accuracy, indicating that this can be achieved even when not fully backpropagating the gradient.

From the videos available at T>20, we can see that with T = 75 the predictions in some Fishing Derby videos are faded, whilst in Pong the model can suddenly switch from one dynamics to another for the ball and the opponent's paddle.

Figure 33: Prediction error for different action-dependent state transitions on (a) Qbert and (b) Riverraid.

In conclusion, using higher T through truncated backpropagation can improve performance. However, in schemes that use many prediction-dependent transitions, a high value of T can lead to poor predictions.

Figure 6: Prediction error vs number of frames seen by the model for Riverraid, using BPTT(15, 1), BPTT(15, 2) and BPTT(15, 5), and training schemes 0%PDT, 33%PDT, and 100%PDT. The black line is obtained with the 100%PDT training scheme."}, {"section_index": "7", "section_name": "EVALUATION THROUGH HUMAN PLAY", "section_text": "Whilst we cannot expect our simulators to generalise to (sequences of) actions never chosen by the DQN and that are not present in the training data, such as moving the agent up and down the alley in Bowling, it is reasonable to expect some degree of generalisation in the action-wise simplest environments such as Breakout, Freeway and Pong.

We tested these three games by having humans use the models as interactive simulators. We generally found that models trained using only prediction-dependent transitions were more fragile to states of the environment not experienced during training, such that the humans were able to play these games for longer with simulators trained with mixing training schemes. This seems to indicate that models with higher long-term test accuracy are at higher risk of overfitting to the training policy.

In Fig. 7(a), we show some salient frames from a game of Pong played by a human for 500 time-steps (the corresponding video is available at Pong-HPlay). The game starts with score (2, 0), after which the opponent scores five times, whilst the human player scores twice. As we can see, the scoring is updated correctly and the game dynamics is accurate. In Fig. 7(b), we show some salient frames from
a game of Breakout played by a human for 350 time-steps (the corresponding video is available at Breakout-HPlay). As for Pong, the scoring is updated correctly and the game dynamics is accurate. These images demonstrate some degree of generalisation of the model to a human style of play."}, {"section_index": "8", "section_name": "EVALUATION OF STATE TRANSITIONS STRUCTURES", "section_text": "In Appendix B.1.2 and B.1.3 we present an extensive evaluation of alternative action-dependent state transitions to the baseline (Eqs. (1)-(5)), including convolutional transformations for the action fusion, and gate and cell updates, and different ways of incorporating action information. We also present a comparison between action-dependent and action-independent state transitions.

Figure 34: Prediction error for different action-dependent state transitions on (a) Seaquest and (b) Space Invaders.

Figure 7: Salient frames extracted from (a) 500 frames of Pong and (b) 350 frames of Breakout generated using our simulator with actions taken by a human player (larger versions can be found in Figs. 47 and 48)."}, {"section_index": "9", "section_name": "3.2 3D ENVIRONMENTS", "section_text": "Both TORCS and the 3D maze environments highlight the need to learn dynamics that are temporally and spatially coherent: TORCS exposes the need to learn fast-moving dynamics and consistency under motion, whilst 3D mazes are partially observed and therefore require the simulator to build an internal representation of its surroundings using memory, as well as learn basic physics, such as rotation, momentum, and the solid properties of walls.

TORCS. The data was generated using an artificial agent controlling a fast car without opponents. Specifics of the data and models are given in Appendix B.2.

When using actions from the test set (see Fig. 49 in Appendix B.2 and the corresponding video at TORCS), the simulator was able to produce accurate predictions for up to several hundred time-steps. As the car moved around the racing track, the simulator was able to predict the appearance of new features in the background (towers, sitting areas, lamp posts, etc.), as well as model the jerky motion of the car caused by our choices of random actions. Finally, the instruments (speedometer and rpm) were correctly displayed.

The simulator was good enough to be used interactively for several hundred frames, using actions provided by a human. This showed that the model had learnt well how to deal with the car hitting the wall on the right side of the track. Some salient frames from the game are shown in Fig. 8 (the corresponding video can be seen at TORCS-HPlay).

3D Mazes. We used an environment that consists of randomly generated 3D mazes, containing textured surfaces with occasional paintings on the walls.

Some action-dependent state transitions give better performance than the baseline in some games. For
example, we found that increasing the state dimension from 1024 to the dimension of the convolved frame, namely 2816, might be preferable. Interestingly, this is not due to an increase in the number of parameters, as the same gain is obtained using convolutions for the gate and cell updates. These results seem to suggest that high-dimensional sparse transition structures could be a promising direction for further improvement. Regarding different ways of incorporating action information, we found that local action incorporation, such as augmenting the frame with action information, and indirect action influence give worse performance than direct and global action influence, but that there are several ways of incorporating action information directly and globally that give similar performance.

Figure 35: Prediction error (average over 10,000 sequences) for different convolutional action-dependent state transitions on (a) Bowling and (b) Breakout. Parameter updates are in millions.

Figure 8: Salient frames highlighting coherence extracted from 700 frames of TORCS generated using our simulator with actions taken by a human player.

Figure 9: Predicted (left) and real (right) frames at time-steps 1, 25, 66, 158 and 200 using actions from the test data.

The mazes were all of the same size, but differed in the layout of rooms and corridors, and in the locations of paintings (see Fig. 11(b) for an example of layout). Specifics of the data and models are given in Appendix B.3.

When using actions from the test set, the simulator was able to very reasonably predict frames even after 200 steps. In Fig. 9 we compare predicted frames to the real frames at several time-steps (the corresponding video can be seen at 3DMazes). We can see that the wall layout is better predicted when walls are closer to the agent, and that the depth of corridors and far-away walls are not as long as they should be. The lighting on the ceiling is correct in all the frames shown.

When using the simulator interactively with actions provided by a human, we could test that the simulator had learnt consistent aspects of the maze: when walking into walls, the model maintained their position and layout (in a rare case, we were able to walk through a painting on the wall - paintings are rare in the dataset and hence it is not unreasonable that they would not be maintained when stress-testing the model in this way). When taking 360° spins, the wall configurations were the same as previously generated and not regenerated afresh, as shown in Fig. 10 (see also 3DMazes-HPlay).
The coherence of the maze was good for nearby walls, but not at the end of long corridors.

The search for exploration strategies better than ε-greedy is an active area of research. Various solutions have been proposed, such as density-based or optimistic exploration (Auer et al., 2002). Oh et al. (2015) considered a memory-based approach that steers the agent towards previously unobserved frames. In this section, we test our simulators using a similar approach, but select a group of actions rather than a single action at a time. Furthermore, rather than a fixed 2D environment, we consider the more challenging 3D mazes environment. This also enables us to present a qualitative analysis, as we can exactly measure and plot the proportion of the maze visited over time. Our aim is to be quantitatively and qualitatively better than random exploration (using dithering of 0.7, as this led to the best possible random agent).

We used a 3D maze simulator to predict the outcome of sequences of actions, chosen with a hard-coded policy. Our algorithm (sketched below) did N Monte-Carlo simulations with randomly selected sequences of actions of fixed length d. At each time-step t, we stored the last 10 observed frames in an episodic memory buffer and compared predicted frames to those in memory.

Figure 36: Prediction error for different convolutional action-dependent state transitions on (a) Fishing Derby and (b) Freeway.

Figure 10: Salient frames highlighting wall-layout memory after a 360° spin, generated using our simulator with actions taken by a human player.

Figure 11: (a) Average ratio over 10 mazes (shaded is the 68% confidence interval) of area visited by the random agent and an agent using our model. (b) Typical example of paths followed by (left) the random agent and (right) our agent (see the Appendix for more examples).

This is a good local exploration strategy that leads to faster movement through corridors. To transform this into a good global exploration strategy, our explorer would have to be augmented with a better memory in order to avoid going down the same corridor twice. These sorts of smooth local exploration strategies could also be useful in navigation problems.
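The referenced procedure is not reproduced here, so the following is a hedged sketch of the idea (the novelty score, N, and d are illustrative assumptions): simulate N random action sequences with the model, score each by how far its final predicted frame is from the frames in recent memory, and execute the most novel sequence.

```python
import torch

@torch.no_grad()
def choose_action_sequence(sim, state, x_last, memory, sample_action, N=100, d=6):
    """Monte-Carlo exploration: return the action sequence whose simulated
    outcome differs most from the episodic memory of recent frames."""
    best_score, best_seq = -float("inf"), None
    for _ in range(N):
        seq = [sample_action() for _ in range(d)]   # random candidate sequence
        s, x_hat = state, x_last
        for a in seq:                               # imagined rollout
            s = sim.f(s, a, sim.C(x_hat))
            x_hat = sim.D(s)
        # novelty: distance of the final predicted frame to memory
        score = min(((x_hat - m) ** 2).mean().item() for m in memory)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq
```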
"}, {"section_index": "10", "section_name": "4 PREDICTION-INDEPENDENT SIMULATORS", "section_text": "A prediction-independent simulator uses a state transition of the form s_t = f(s_{t-1}, a_{t-1}). The advantage of such a simulator is that, to make a prediction for any point in the future, high-dimensional images do not need to be predicted at all intermediate time-steps. In Atari for example, this avoids having to project from the state space of dimension 1024 to the observation space of dimension 100,800 (210×160×3) and vice versa through the encoding and decoding functions C and D - in the structure used, this saves around 200 million flops at each time-step.

For the state transition, we found that a working structure was to use Eqs. (1)-(5) with z_t = h_t and with different parameters for the warm-up and prediction phases. As for the prediction-dependent simulator, we used a warm-up phase of length τ = 10, but we did backpropagate the gradient back to time-step five in order to learn the encoding function C.

Our analysis on Atari (see Appendix C) suggests that the prediction-independent simulator is much more sensitive to changes in the state transition structure and in the training scheme than the prediction-dependent simulator. We found that using prediction length T = 15 gave much worse long-term accuracy than with the prediction-dependent simulator. This problem could be alleviated with the use of prediction length T = 30 through truncated backpropagation.

Figure 37: Prediction error for different convolutional action-dependent state transitions on (a) Ms Pacman and (b) Pong.

Figure 12: Prediction error vs number of frames seen by the model (excluding warm-up frames) for the prediction-dependent and prediction-independent simulators using BPTT(15, 2) for (a) Bowling, Freeway, Pong and (b) Breakout, Fishing Derby, Ms Pacman, Qbert, Seaquest, Space Invaders (the prediction-dependent simulator is trained with the 0%-100%PDT training scheme).

Fig. 12 shows a comparison of the prediction-dependent and prediction-independent simulators using T = 30 through two subsequences of length 15 (we indicate this as BPTT(15, 2), even though in the prediction-independent simulator we did backpropagate the gradient to the warm-up phase).

When looking at the videos available at PI-Simulators, we can notice that the prediction-independent simulator tends to give worse long-term prediction. In Fishing Derby, for example, in the long term the model tends to create fish of smaller dimension in addition to the fish present in the real frames. Nevertheless, for some difficult games, the prediction-independent simulator achieves better performance than the prediction-dependent simulator.
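A hedged sketch of why such a simulator is cheaper: the rollout below evolves only the low-dimensional state and decodes a frame once, at the end (the f_state_only method and the rest of the interface are assumptions following the earlier sketches):

```python
import torch

@torch.no_grad()
def predict_after(sim, frames, actions, future_actions, state, tau=10):
    """Prediction-independent rollout: after the warm-up, the transition
    s_t = f(s_{t-1}, a_{t-1}) never touches pixel space; the decoder D is
    applied only once, for the final prediction."""
    for t in range(tau):                       # warm-up still encodes frames
        state = sim.f(state, actions[t], sim.C(frames[t]))
    for a in future_actions:                   # frame-free state evolution
        state = sim.f_state_only(state, a)     # s_t = f(s_{t-1}, a_{t-1})
    return sim.D(state)                        # decode only the final frame
```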
More investigation of alternative state transitions and training schemes would need to be performed to obtain the same overall level of accuracy as for the prediction-dependent simulator."}, {"section_index": "11", "section_name": "5 DISCUSSION", "section_text": "Figure 38: Prediction error for different convolutional action-dependent state transitions on (a) Qbert and (b) Riverraid.

In this paper we have introduced an approach to simulating action-conditional dynamics and demonstrated that it is highly adaptable to different environments, ranging from Atari games to 3D car racing environments and mazes. We showed state-of-the-art results on Atari, and demonstrated the feasibility of live human play in all three task families. The system is able to capture complex and long-term interactions, and displays a sense of spatial and temporal coherence that has, to our knowledge, not been demonstrated on high-dimensional time-series data such as these.

We have presented an in-depth analysis of the effect of different training approaches on short- and long-term prediction capabilities, and showed that moving towards schemes in which the simulator relies less on past observations to form future predictions has the effect of focussing model resources on learning the global dynamics of the images, leading to dramatic improvements in the long-term predictions. However, this requires a distribution of resources that impacts short-term performance, which can be harmful to the overall performance of the model for some games. This trade-off also causes the model to be less robust to states of the environment not seen during training. Alleviating this problem would require the design of more sophisticated model architectures than the ones considered here. Whilst it is also expected that more ad-hoc architectures would be less sensitive to different training approaches, we believe that guiding the noise as well as teaching the model to make use of past information through the objective function would still be beneficial for improving long-term prediction.

Complex environments have compositional structure, such as independently moving objects and other phenomena that only rarely interact. In order for our simulators to better capture this compositional structure, we may need to develop specialised functional forms and memory stores that are better suited to dealing with independent representations and their interlinked interactions and relationships. More homogeneous deep network architectures such as the one presented here are clearly not optimal for these domains, as can be seen in Atari environments such as Ms Pacman, where the system has trouble keeping track of multiple independently moving ghosts.
Whilst the LSTM memory and our training scheme have proven able to capture long-term dependencies, alternative memory structures are required, for example, to learn spatial coherence at a more global level than the one displayed by our model in the 3D mazes, as needed for navigation.

In the case of action-conditional dynamics, the policy-induced data distribution does not cover the state space and might in fact be nonstationary over an agent's lifetime. This can cause some regions of the state space to be oversampled, and the regions we might actually care about the most - those just around the agent policy's state distribution - to be underrepresented. In addition, this induces biases in the data that will ultimately not enable the model to learn the environment dynamics correctly. As verified from the experiments in this paper, both on live human play and model-based exploration, this problem is not yet as pressing as might be expected in some environments. However, our simulators displayed limitations and faults due to the specificities of the training data, such as, for example, predicting the agent's death based on the recognition of a particular sequence of actions always co-occurring with death in the training data, rather than on the recognition of the real causes.

Finally, a limitation of our approach is that, however capable it might be, it is a deterministic model designed for deterministic environments. Clearly most real-world environments involve noisy state transitions, and future work will have to address the extension of the techniques developed in this paper to more generative temporal models."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank David Barber for helping with the graphical model interpretation,
Alex Pritzel for preparing the DQN data, Yori Zwols and Frederic Besse for helping with the implementation of the model, and Oriol Vinyals, Yee Whye Teh, Junhyuk Oh, and the anonymous reviewers for useful discussions and feedback on the manuscript.

Figure 40: Prediction error (average over 10,000 sequences) for different ways of incorporating the action on (a) Bowling and (b) Breakout. Parameter updates are in millions.

P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235-256, 2002.
C. Beattie, J. Z. Leibo, D. Teplyashin, T. Ward, M. Wainwright, H. Kuttler, A. Lefrancq, S. Green, V. Valdes, A. Sadik, J. Schrittwieser, K. Anderson, S. York, M. Cant, A. Cain, A. Bolton, S. Gaffney, H. King, D. Hassabis, S. Legg, and S. Petersen. DeepMind Lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 2013.
S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems 28 (NIPS), pp. 1171-1179, 2015.
A. Graves. Generating sequences with recurrent neural networks. 2013. URL http://arxiv.org/abs/1308.0850.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
M. Lengyel and P. Dayan. Hippocampal contributions to control: The third way. In Advances in Neural Information Processing Systems 20 (NIPS), pp. 889-896, 2008.
M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. In Advances in Neural Information Processing Systems 14 (NIPS), pp. 1555-1561, 2002.
M. McCloskey. Intuitive physics. Scientific American, 248(4):122-130, 1983.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. URL http://dx.doi.org/10.1038/nature14236.
V. Mnih, A. Puigdomenech Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
Y. Niv. Reinforcement learning in the brain. Journal of Mathematical Psychology, 53(3):139-154, 2009.
J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. P. Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems 28 (NIPS), pp. 2863-2871, 2015. URL http://arxiv.org/abs/1507.08750.
J. K. O'Regan and A. Noe. A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(05):939-973, 2001.
P.-Y. Oudeyer, F. Kaplan, and V. V. Hafner. Intrinsic motivation systems for autonomous mental development. IEEE Transactions on Evolutionary Computation, 11(2):265-286, 2007.
V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. CoRR, abs/1511.06309, 2015. URL http://arxiv.org/abs/1511.06309.
J. Pearl. Causality. Cambridge University Press, 2009.
N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 843-852, 2015.
W. Sun, A. Venkatraman, B. Boots, and J. A. Bagnell. Learning to filter with predictive state inference machines. CoRR, abs/1512.08836, 2015. URL http://arxiv.org/abs/1512.08836.
R. S. Sutton and A. G. Barto.
Reinforcement learning: An introduction. MIT Press, 1998.
E. Talvitie. Model regularization for stable sample rollouts. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI-14), pp. 780-789, 2014.
N. Wahlstrom, T. B. Schon, and M. P. Deisenroth. From pixels to torques: Policy learning with deep dynamical models. CoRR, abs/1502.02251, 2015. URL http://arxiv.org/abs/1502.02251
M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems 28 (NIPS), pp. 2728-2736, 2015.
R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures, and Applications, pp. 433-486, 1995.
B. Wymann, E. Espie, C. Guionneau, C. Dimitrakakis, R. Coulom, and A. Sumner. TORCS: The open racing car simulator, v1.3.5. 2013. URL http://www.torcs.org
B. Xu, N. Wang, T. Chen, and M. Li. Empirical evaluation of rectified activations in convolutional network. 2015.
W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014. URL http://arxiv.org/abs/1409.2329
"}, {"section_index": "13", "section_name": "A DATA PREPROCESSING AND TRAINING ALGORITHM", "section_text": "When generating the data, each selected action was repeated for 4 time-steps and only the 4th frame was recorded for the analysis. The RGB images were preprocessed by subtracting mean pixel values (calculated separately for each color channel and over an initial set of 2048 frames only) and by dividing each pixel value by 255.
As the stochastic gradient algorithm, we used centered RMSProp (Graves, 2013) with learning rate 1e-5 (see footnote 4), epsilon 0.01, momentum 0.9, decay 0.95, and mini-batch size 16. The model was implemented in Torch, using the default initialization of the parameters. The state s1 was initialized to zero.
4 We found that using a higher learning rate value of 2e-5 would generally increase convergence speed but cause major instability issues, suggesting that gradient clipping would need to be used.
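To make the preprocessing and optimizer configuration concrete, the following is a minimal sketch written with PyTorch as a stand-in for the original Torch implementation; the function names and the placeholder model are our own, and mapping the stated decay of 0.95 to the alpha parameter is our assumption.

import torch

def estimate_channel_means(initial_frames):
    # Mean pixel value per colour channel, over an initial set of 2048 frames.
    return initial_frames.float().mean(dim=(0, 2, 3))

def preprocess(frames, channel_means):
    # frames: uint8 tensor (N, 3, H, W). Subtract the per-channel means and
    # divide by 255, as described above.
    return (frames.float() - channel_means.view(1, 3, 1, 1)) / 255.0

model = torch.nn.Linear(10, 10)  # placeholder for the simulator network
# Centered RMSProp (Graves, 2013) with the stated hyper-parameters.
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-5, alpha=0.95,
                                eps=0.01, momentum=0.9, centered=True)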
"}, {"section_index": "14", "section_name": "B PREDICTION-DEPENDENT SIMULATORS", "section_text": "As baseline for the single-step simulators we used the following state transition:
Encoding: $z_{t-1} = C(x_{t-1})$ up to $t-1 = \tau-1$, and $z_{t-1} = C(\hat{x}_{t-1})$ from $t-1 = \tau$,
Action fusion: $v_t = W^h h_{t-1} \odot W^a a_{t-1}$,
Gate update: $i_t = \sigma(W^{iv} v_t + W^{iz} z_{t-1})$, $f_t = \sigma(W^{fv} v_t + W^{fz} z_{t-1})$, $o_t = \sigma(W^{ov} v_t + W^{oz} z_{t-1})$,
Cell update: $c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W^{cv} v_t + W^{cz} z_{t-1})$,
State update: $h_t = o_t \odot \tanh(c_t)$,
with vectors $h_t$ and $v_t$ of dimension 1024 and 2048 respectively.
"}, {"section_index": "15", "section_name": "B.1 ATARI", "section_text": "We used a trained DQN agent (the scores achieved on each game are given in the table below) to generate training and test datasets consisting of 5,000,000 and 1,000,000 (210 x 160) RGB images respectively, with actions chosen according to an ε = 0.2-greedy policy.

Game Name        DQN Score
Bowling              51.84
Breakout            396.25
Fishing Derby        19.30
Freeway              33.38
Ms Pacman          2963.31
Pong                 20.88
Qbert            14,865.43
Riverraid        13,593.49
Seaquest         17,250.31
Space Invaders     2952.09

Such a large number of training frames was necessary to prevent our simulators from strongly overfitting to the training data. This would be the case with, for example, one million training frames, as shown in Fig. 13 (the corresponding video can be seen at MSPacman). The ghosts are in frightened mode at time-step 1 (first image) and have returned to chase mode at time-step 63 (second image). The simulator is able to predict the exact time of the return to chase mode without sufficient history, which suggests that the sequence was memorized.
Figure 13: Prediction that demonstrates overfitting of the model when trained on one million frames.
The encoding consisted of 4 convolutional layers with 64, 32, 32 and 32 filters, of size 8 x 8, 6 x 6, 6 x 6, and 4 x 4, stride 2, and padding 0, 1, 1, 0 and 1, 1, 1, 0 for the height and width respectively. Every layer was followed by a randomized rectified linear function (RReLU) (Xu et al., 2015) with parameters l = 1/8, u = 1/3. The output tensor of the convolutional layers of dimension 32 x 11 x 8 was then flattened into the vector z_t of dimension 2816. The decoding consisted of one fully-connected layer with 2816 hidden units followed by 4 full convolutional layers with the inverse symmetric structure of the encoding transformation: 32, 32, 32 and 64 filters, of size 4 x 4, 6 x 6, 6 x 6, and 8 x 8, stride 2, and padding 0, 1, 1, 0 and 0, 1, 1, 1. Each full convolutional layer (except the last one) was followed by a RReLU.
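The encoder just described can be written down as follows; this is a sketch in PyTorch rather than the original Torch code, with names of our own, and the layer shapes can be checked against the stated 32 x 11 x 8 output.

import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=8, stride=2, padding=(0, 1)),
    nn.RReLU(1 / 8, 1 / 3),
    nn.Conv2d(64, 32, kernel_size=6, stride=2, padding=(1, 1)),
    nn.RReLU(1 / 8, 1 / 3),
    nn.Conv2d(32, 32, kernel_size=6, stride=2, padding=(1, 1)),
    nn.RReLU(1 / 8, 1 / 3),
    nn.Conv2d(32, 32, kernel_size=4, stride=2, padding=(0, 0)),
    nn.RReLU(1 / 8, 1 / 3),
    nn.Flatten(),
)

# A 3 x 210 x 160 Atari frame yields the stated 32 x 11 x 8 tensor, i.e. a
# 2816-dimensional z_t after flattening.
z = encoder(torch.zeros(1, 3, 210, 160))
assert z.shape == (1, 2816)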
"}, {"section_index": "16", "section_name": "B.1.1 SHORT-TERM VERSUS LONG-TERM ACCURACY", "section_text": "In Fig. 14, we show one example of successful prediction at time-steps 100 and 200 for each game.
Figure 14: One example of 200 time-step ahead prediction for each of the 10 Atari games. Displayed are predicted (left) and real (right) frames at time-steps 100 and 200.
Bowling. Bowling is one of the easiest games to model. A simulator trained using only observation-dependent transitions gives quite accurate predictions. However, using only prediction-dependent transitions reduces the error in updating the score and predicting the ball direction.
Breakout. Breakout is a difficult game to model. A simulator trained with only prediction-dependent transitions predicts the paddle movement very accurately but almost always fails to represent the ball. A simulator trained with only observation-dependent transitions struggles much less to represent the ball but does not predict the paddle and ball positions as accurately, and the ball also often disappears after hitting the paddle. Interestingly, the long-term prediction error (bottom right of Fig. 15(b)) for the 100%PDT training scheme is the lowest, as when not representing the ball the predicted frames look closer to the real frames than when representing the ball incorrectly. A big improvement in the model's ability to represent the ball could be obtained by pre-processing the frames with max-pooling as done for DQN, as this increases the ball size. We believe that a more sophisticated convolutional structure would be even more effective, but we did not succeed in discovering such a structure.
Fishing Derby. In Fishing Derby, long-term accuracy is disastrous with the 0%PDT training scheme and good with the 100%PDT training scheme. Short-term accuracy is better with schemes using more observation-dependent transitions than in the 100% or 0%-100%PDT training schemes, especially at low numbers of parameter updates.
Freeway. Together with Bowling, Freeway is one of the easiest games to model, but more parameter updates are required for convergence than for Bowling. The 0%PDT training scheme gives good accuracy, although sometimes the chicken disappears or its position is incorrectly predicted - this happens extremely rarely with the 100%PDT training scheme. In both schemes, the score is often wrongly updated in the warning phase.
Ms Pacman. Ms Pacman is a very difficult game to model and accurate prediction can only be obtained for a few time-steps into the future. The movement of the ghosts, especially when in frightened mode, is regulated by the position of Ms Pacman according to complex rules. Furthermore, the DQN ε = 0.2-greedy policy does not enable the agent to explore certain regions of the state space. As a result, the simulator can predict well the movement of Ms Pacman, but fails to predict long-term the movement of the ghosts when in frightened mode or when in chase mode later in the episodes.
Pong. With the 0%PDT training scheme, the model often incorrectly predicts the direction of the ball when hit by the agent or by the opponent. Quite rarely, the ball disappears when hit by the agent. With the 100%PDT training scheme, the direction of the ball is much more accurately predicted, but the ball more often disappears when hit by the agent, and the ball and paddles are generally less sharp.
Figure 41: Prediction error for different ways of incorporating the action on (a) Fishing Derby and (b) Freeway.
Figure 42: Prediction error for different ways of incorporating the action on (a) Ms Pacman and (b) Pong.
Qbert. Qbert is a game for which the 0%PDT training scheme is unable to predict accurately beyond very short-term, as after a few frames only the background is predicted. The more prediction-dependent transitions are used, the less sharply the agent and the moving objects are represented.
Riverraid. In Riverraid, prediction with the 0%PDT training scheme is very poor, as this scheme causes no generation of new objects or background. With all schemes, the model fails to predict the frames that follow an agent death, which is why the prediction error increases sharply after around time-step 13 in Fig. 18(b). The long-term prediction error is lower with the 100%PDT training scheme, as with this scheme the simulator is more accurate before a death and is sometimes able to predict the subsequent frames after a death. The problem of incorrect after-death prediction disappears when using BPTT(15, 2) with prediction-dependent transitions.
Seaquest. In Seaquest, with the 0%PDT training scheme, the existing fish disappears after a few time-steps and no new fish ever appears from the sides of the frame. The more predicted frames are used in the sequence, the less sharply the fish is represented, but its dynamics and appearance from the sides of the frame are more precisely predicted.
Space Invaders. Space Invaders is a very difficult game to model and accurate prediction can only be obtained for a few time-steps into the future. The 0%PDT training scheme is unable to predict accurately beyond very short-term. The 100%PDT training scheme struggles to represent the bullets.
In Figs. 25-29 we show the effect of using different prediction lengths T > 20 through truncated backpropagation with the training schemes 0%PDT, 33%PDT, and 100%PDT for all games.
Figure 43: Prediction error for different ways of incorporating the action on (a) Qbert and (b) Riverraid.
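The training schemes analysed above differ only in which transitions consume a real frame and which consume the model's own prediction. The following is a minimal sketch, under our own simplifying assumptions, of how a given fraction of prediction-dependent transitions could be realised: the `simulator` object with `initial_state`, `encode`, `step` and `decode` is a hypothetical interface, placing the prediction-dependent steps at the end of the subsequence is one possible arrangement, and the squared error is a stand-in for the actual training loss.

import torch

def rollout_loss(simulator, frames, actions, pdt_fraction):
    # frames: (T+1, ...) real frames; actions: (T, ...) one-hot actions.
    T = actions.shape[0]
    n_pdt = int(round(pdt_fraction * T))  # e.g. 0.67 * 15 -> 10 PDT steps
    h, c = simulator.initial_state()
    prediction = frames[0]
    loss = 0.0
    for t in range(T):
        # Observation-dependent transitions first, prediction-dependent after.
        use_prediction = t >= T - n_pdt
        z = simulator.encode(prediction if use_prediction else frames[t])
        h, c = simulator.step(h, c, z, actions[t])
        prediction = simulator.decode(h)
        loss = loss + torch.mean((prediction - frames[t + 1]) ** 2)
    return loss / T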
Figure 44: Prediction error for different ways of incorporating the action on (a) Seaquest and (b) Space Invaders.
Figure 15: Prediction error (average over 10,000 sequences) for different training schemes on (a) Bowling and (b) Breakout. Number of frames are in millions.
Figure 45: Prediction error (average over 10,000 sequences) with (continuous lines) action-dependent and (dashed lines) action-independent state transition. Parameter updates are in millions.
Figure 16: Prediction error for different training schemes on (a) Fishing Derby and (b) Freeway.
Figure 46: Salient frames extracted from 2000 frames of Freeway generated using our simulator with actions chosen by a human player.
"}, {"section_index": "17", "section_name": "B.1.4 HUMAN PLAY", "section_text": "In Fig. 46, we show the results of a human playing Freeway for 2000 time-steps (the corresponding video is available at Freeway-HPlay). The model is able to update the score correctly up to (14, 0). At that point the score starts flashing and changing color as a warning that the game is about to reset.
The model is not able to predict the score correctly in this warning phase, due to the bias in the data (DQN always achieves a score above 20 at this point in the game), but the flashing starts at the right time, as does the resetting of the game.
Figs. 47 and 48 are larger views of the same frames shown in Fig.
"}, {"section_index": "18", "section_name": "B.2 3D CAR RACING", "section_text": "We generated 10 million and one million (180 x 180) RGB images for training and testing respectively, with an agent trained with the asynchronous advantage actor-critic algorithm (Fig. 2 in Mnih et al. (2016)). The agent could choose among the three actions accelerate straight, accelerate left, and accelerate right, according to an ε-greedy policy, with ε selected at random between 0 and 0.5, independently for each episode. We added a 4th 'do nothing' action when generating actions at random. Smaller ε lead to longer episodes (~1500 frames), while larger ε lead to shorter episodes (~200 frames).
We could use the same number of convolutional layers, filters and kernel sizes as in Atari, with no padding.
Fig. 49 shows side by side predicted and real frames for up to 200 actions. We found that this quality of predictions was very common.
Figure 49: Salient frames, predicted (left) and real (right), for TORCS from a 200 time-steps video.
When using our model as an interactive simulator, we observed that the car would slightly slow down when selecting no action, but fail to stop. Since the model had never seen occurrences of the agent completely releasing the accelerator for more than a few consecutive actions, it makes sense that it would fail to deal with this case appropriately.
Figure 17: Prediction error for different training schemes on (a) Ms Pacman and (b) Pong.
Figure 47: Salient frames extracted from 500 frames of Pong generated using our simulator with actions chosen by a human player.
Figure 48: Salient frames extracted from 350 frames of Breakout generated using our simulator with actions taken by a human player.
"}, {"section_index": "19", "section_name": "B.3 3D MAZES", "section_text": "Unlike Atari and TORCS, we could rely on agents with random policies to generate interesting sequences. The agent could choose one of five actions: forward, backward, rotate left, rotate right, or do nothing. During an episode, the agent alternated between a random walk for 15 steps and spinning on itself for 15 steps (roughly, a complete 360 spin). This encourages coherent learning of the predicted frames after a spin. The random walk was with dithering of 0.7, meaning that new actions were chosen with a probability of 0.7 at every time-step. The training and test datasets were made of 7,600 and 1,100 episodes, respectively. All episodes were of length 900 frames, resulting in 6,840,000 and 990,000 (48 x 48) RGB images for training and testing respectively.
We adapted the encoding by having only 3 convolutions with 64 filters of size 6 x 6, stride 2, and padding 0, 1, and 2.
The decoding transformation was adapted accordingly.
"}, {"section_index": "20", "section_name": "B.4 MODEL-BASED EXPLORATION", "section_text": "We observed that increasing the number of Monte-Carlo simulations beyond 100 made little to no difference, probably because with n_a possible actions the number n_a^d of possible Monte-Carlo simulations is so large that we quickly get diminishing returns with every new simulation.
Increasing the sequence length of actions significantly beyond d = 6 led to a large decrease in performance. To explain this, we observed that after 6 steps our average prediction error was less than half the average prediction error after 30 steps (0.16 and 0.37 respectively). Since the average minimum and maximum distances did not vary significantly (from 0.23 to 0.36, and from 0.24 to 0.4 respectively), for deep simulations we ended up with more noise than signal in our predictions, and our decisions were no better than random.
Fig. 50 shows some examples of trajectories chosen by our explorer. Note that all these trajectories are much smoother than for our baseline agent.
Figure 50: Examples of paths followed by the random baseline (left) and by explorers based on our simulator (right).
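A sketch of the kind of simulation-based exploration discussed in this section is given below. It is our own reconstruction, not the original code: `simulator.rollout` is a hypothetical interface, and scoring a candidate action sequence by the distance of its predicted final frame to the closest previously seen frame is one plausible reading of the distance-based selection described above.

import random
import torch

def choose_action_sequence(simulator, state, past_frames,
                           n_actions, d=6, n_simulations=100):
    # Sample up to n_simulations random action sequences of length d, roll
    # each through the simulator, and keep the sequence whose predicted final
    # frame is farthest from the frames visited so far.
    best_score, best_sequence = -float("inf"), None
    for _ in range(n_simulations):
        sequence = [random.randrange(n_actions) for _ in range(d)]
        frame = simulator.rollout(state, sequence)  # predicted frame after d steps
        score = min(torch.norm(frame - past) for past in past_frames)
        if score > best_score:
            best_score, best_sequence = score, sequence
    return best_sequence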
Figure 18: Prediction error for different training schemes on (a) Qbert and (b) Riverraid.
Figure 19: Prediction error for different training schemes on (a) Seaquest and (b) Space Invaders.
In Fig. 51, we compare (with T = 15) the state transition
Encoding: $z_{t-1} = C(x_{t-1})$ up to $t-1 = \tau-1$, and $z_{t-1} = h_{t-1}$ from $t-1 = \tau$,
Action fusion: $v_t = W^h h_{t-1} \odot W^a a_{t-1}$,
Gate update: $i_t = \sigma(W^{iv} v_t + W^{iz} z_{t-1})$, $f_t = \sigma(W^{fv} v_t + W^{fz} z_{t-1})$, $o_t = \sigma(W^{ov} v_t + W^{oz} z_{t-1})$,
Cell update: $c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W^{cv} v_t + W^{cz} z_{t-1})$,
State update: $h_t = o_t \odot \tanh(c_t)$,
where the vectors $h_{t-1}$ and $v_t$ have dimension 1024 and 2048 respectively, and with different matrices W for the warm-up and the prediction phases (the resulting model has around 40M parameters; we refer to this structure as 'Base-$z_{t-1} = h_{t-1}$' in the figures), with the following alternatives:
Base-$z_{t-1} = 0$: Remove the action-independent transformation of $h_{t-1}$, i.e. set $z_{t-1} = 0$ from $t-1 = \tau$ in the equations above.
$h_{t-1}$-in-2816-$z_{t-1} = 0$: Use $h_{t-1}$ directly in the gate and cell updates, i.e.
Encoding: $z_{t-1} = C(x_{t-1})$ up to $t-1 = \tau-1$, and $z_{t-1} = 0$ from $t-1 = \tau$,
Action fusion: $v_t = W^h h_{t-1} \odot W^a a_{t-1}$,
Gate update: $i_t = \sigma(W^{iv} v_t + W^{ih} h_{t-1})$, $f_t = \sigma(W^{fv} v_t + W^{fh} h_{t-1})$, $o_t = \sigma(W^{ov} v_t + W^{oh} h_{t-1})$,
Cell update: $c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W^{cv} v_t + W^{ch} h_{t-1}) + i^z_t \odot \tanh(z_{t-1})$,
with shared W matrices for the warm-up and the prediction phases, without RReLU after the last convolution of the encoding, and with vectors $h_{t-1}$ and $v_t$ of dimensionality 2816. This model has around 95M parameters.
As we can see, the 'Base-$z_{t-1} = 0$' state transition performs quite poorly for long-term prediction compared to the other transitions. With this transition, the prediction-independent simulator performs much worse than the prediction-dependent simulator with the baseline state transition (Appendix B.1.1). The best performance is obtained with the '$h_{t-1}$-in-2816-$z_{t-1} = 0$' structure, which however has a large number of parameters.
In Figs. 52 and 53 we show the effect of using different prediction lengths T on the structure 'Base-$z_{t-1} = h_{t-1}$'. As we can see, using longer prediction lengths dramatically improves long-term accuracy. Overall, the best performance is obtained using two subsequences of length T = 15.
Figure 20: Prediction error (average over 10,000 sequences) for different prediction lengths T < 20 on (a) Bowling and (b) Breakout. Number of frames are in millions and exclude warm-up frames.
Figure 21: Prediction error for different prediction lengths T < 20 on (a) Fishing Derby and (b) Freeway.
Figure 51: Prediction error (average over 10,000 sequences) vs number of frames seen by the model (excluding warm-up frames) for the prediction-independent simulator with different action-dependent state transitions for (a) Bowling, Freeway, Pong, and (b) Breakout, Fishing Derby, Ms Pacman, Qbert, Seaquest, Space Invaders.
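For concreteness, the action-conditioned LSTM transition written out above can be sketched as follows. This is a PyTorch sketch under our own naming, not the original implementation; biases and the extra $i^z_t$ path of the second alternative are omitted for brevity.

import torch
import torch.nn as nn

class ActionConditionedLSTM(nn.Module):
    # z is the action-independent input: the encoded previous frame for the
    # prediction-dependent baseline, or h_{t-1} / zeros for the
    # prediction-independent variants discussed above.
    def __init__(self, h_dim=1024, v_dim=2048, z_dim=1024, a_dim=18):
        super().__init__()
        self.W_h = nn.Linear(h_dim, v_dim, bias=False)      # action fusion
        self.W_a = nn.Linear(a_dim, v_dim, bias=False)
        self.W_v = nn.Linear(v_dim, 4 * h_dim, bias=False)  # i, f, o, cell from v_t
        self.W_z = nn.Linear(z_dim, 4 * h_dim, bias=False)  # i, f, o, cell from z_{t-1}

    def forward(self, h, c, z, a):
        v = self.W_h(h) * self.W_a(a)   # v_t = W^h h_{t-1} (elementwise) W^a a_{t-1}
        i, f, o, g = (self.W_v(v) + self.W_z(z)).chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c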
Figure 22: Prediction error for different prediction lengths T < 20 on (a) Ms Pacman and (b) Pong.
Figure 52: Prediction error vs number of frames seen by the model for the prediction-independent simulator with different prediction lengths T < 25 for (a) Bowling, Freeway, Pong, and (b) Breakout, Fishing Derby, Ms Pacman, Qbert, Seaquest, Space Invaders.
Figure 23: Prediction error for different prediction lengths T < 20 on (a) Qbert and (b) Riverraid.
Figure 53: Prediction error vs number of frames seen by the model for the prediction-independent simulator with BPTT(15, 1) and BPTT(15, 2) for (a) Bowling, Freeway, Pong, and (b) Breakout, Fishing Derby, Ms Pacman, Qbert, Seaquest, Space Invaders.
Figure 24: Prediction error for different prediction lengths T < 20 on (a) Seaquest and (b) Space Invaders.
Figure 25: Prediction error (average over 10,000 sequences) for different prediction lengths T > 20 on (a) Bowling and (b) Breakout. Number of frames are in millions and exclude warm-up frames.
Figure 26: Prediction error for different prediction lengths through truncated BPTT on (a) Fishing Derby and (b) Freeway.
Figure 27: Prediction error for different prediction lengths through truncated BPTT on (a) Ms Pacman and (b) Pong.
"}]
rJRhzzKxl
[{"section_index": "0", "section_name": "KNOWLEDGE ADAPTATION: TEACHING TO ADAPT", "section_text": "Sebastian Ruder
Insight Centre for Data Analytics, National University of Ireland, Galway
john.breslin@insight-centre.org
Domain adaptation is crucial in many real-world applications where the distribution of the training data differs from the distribution of the test data. Previous Deep Learning-based approaches to domain adaptation need to be trained jointly on source and target domain data and are therefore unappealing in scenarios where models need to be adapted to a large number of domains or where a domain is evolving, e.g. spam detection where attackers continuously change their tactics. To fill this gap, we propose Knowledge Adaptation, an extension of Knowledge Distillation (Bucilua et al., 2006; Hinton et al., 2015) to the domain adaptation scenario. We show how a student model achieves state-of-the-art results on unsupervised domain adaptation from multiple sources on a standard sentiment analysis benchmark by taking into account the domain-specific expertise of multiple teachers and the similarities between their domains.
When learning from a single teacher, using domain similarity to gauge trustworthiness is inadequate. To this end, we propose a simple metric that correlates well with the teacher's accuracy in the target domain. We demonstrate that incorporating high-confidence examples selected by this metric enables the student model to achieve state-of-the-art performance in the single-source scenario."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In many real-world applications such as sentiment classification (Pang & Lee, 2008), a model trained on one domain may not work well when directly applied to another domain due to the difference in the data distribution between the domains. At the same time, labeled data in new domains is scarce or non-existent, and manual labeling of large amounts of target domain data is expensive. Domain adaptation allows models to reduce the domain discrepancy and adapt to new domains. While fine-tuning is a commonly used method for supervised domain adaptation, there is no cheap equivalent in the unsupervised case, as existing Deep Learning-based approaches need to be trained jointly on source and target domain data. This is prohibitive in scenarios with a large number of domains, such as sentiment classification on the plethora of real-world review categories, blog types, or communities (Hamilton et al., 2016). Additionally, re-training a model on source data is unfeasible for evolving domains, such as spam detection where attackers continuously adapt their strategy, scene classification where the scene changes over time (Hoffman et al., 2014), or a conversational agent for a user with a rapidly evolving style, such as a child or second language learner.
Rather than re-training, we would like to be able to leverage our trained model in the source domain to inform the predictions of a new model trained on the target domain. This objective aligns organically with the idea of Knowledge Distillation (Bucilua et al., 2006; Hinton et al., 2015), which we extend as Knowledge Adaptation to the domain adaptation scenario. While Knowledge Distillation concentrates on training a student model on the predictions of a (possibly larger) teacher model, Knowledge Adaptation focuses on determining what part of the teacher's expertise can be trusted
and applied to the target domain.
Parsa Ghaffari
Aylien Ltd., Dublin, Ireland"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this context, determining when to trust the teacher is key. This circumstance is paralleled in real-world teacher-student and adviser-advisee relationships: Children learn early on to trust familiar advisers but to moderate that trust depending on the adviser's recent history of accuracy or inaccuracy (Corriveau & Harris, 2009), while adults may surround themselves with advisers, e.g. to make a financial investment, and gradually learn whose expertise to trust (Johnson & Grayson, 2005).
We demonstrate how domain similarity metrics can be used as a measure of relative trust in a teacher for unsupervised domain adaptation with multiple source domains, and show state-of-the-art results for a student model that learns from multiple domain-specific teachers.
When learning from a single teacher in the single-source scenario, using a general measure of domain similarity is inadequate, as the student has no other, more relevant teacher to turn to for advice in case its teacher is untrustworthy. To this end, we propose a simple measure which correlates well with the teacher's accuracy in the target domain and allows the student to gauge the teacher's confidence in its predictions. We demonstrate that by incorporating high-confidence examples selected by this metric in the training process, the student model is able to outperform the state-of-the-art in single-source unsupervised domain adaptation.
Crucially, our models are the first Deep Learning-based models for domain adaptation that perform adaptation without expensive re-training on the source domain data. They are thus able to make use of readily available trained source domain models and are particularly apt for scenarios where models need to be adapted to a large number of domains or where a domain is evolving.
Distilling knowledge. Bucilua et al. (2006) first proposed a method to compress the knowledge of a source model, which was later improved by Hinton et al. (2015). Romero et al. (2015) showed how this method can be adapted to train deep and thin models, while Kim & Rush (2016) apply the technique to sequence-level models. In addition, Hu et al. (2016) use it to constrain a student model with logic rules. Our goal differs from the previous methods due to the difference in data distributions between source and target data, which necessitates learning from the teacher's knowledge only insofar as it is useful for the target domain. Similar in spirit to Knowledge Distillation is the KL-divergence based objective by Yu et al. (2013) and Li et al. (2014) for adapting an acoustic model, and the Adaptive Mixture of Experts model (Nowlan & Hinton, 1990), which also learns which expert to trust for a given example. Both, however, require labeled examples, which are scarce for domain adaptation, while our model is entirely unsupervised.
Domain adaptation. Domain adaptation has a long history of research: Blitzer et al. (2006) proposed a structural correspondence learning algorithm. Daume III (2007) introduced a kernel function that maps source and target domain data to a space that encourages in-domain similarity, while Pan et al. (2010) proposed a spectral feature alignment algorithm to align domain-specific words into meaningful clusters, and Long & Wang (2015) use multi-task learning to avoid negative transfer.
Deep learning-based domain adaptation.
Deep learning-based approaches to domain adaptation are more recent and have focused mainly on learning domain-invariant representations: Glorot et al. (2011) first employed stacked Denoising Auto-encoders (SDA) to extract meaningful representations. Chen et al. (2012) in turn extended SDA to marginalized SDA by addressing SDA's high computational cost and lack of scalability to high-dimensional features, while Zhuang et al. (2015) proposed to use deep auto-encoders for transfer learning. Ajakan et al. (2016) added a Gradient Reversal Layer that hinders the model's ability to discriminate between domains. Finally, Zhou et al. (2016) transferred the source examples to the target domain and vice versa using Bi-Transferring Deep Neural Networks, while Bousmalis et al. (2016) propose Domain Separation Networks that employ domain-specific and general-domain encoders. All of these approaches, however, require jointly training the model on source and target data for every new target domain.
Domain adaptation from multiple sources. For domain adaptation from multiple sources, Mansour (2009) proposed a distribution weighted hypothesis with theoretical guarantees. Duan et al. (2009) proposed a method to learn a least-squares SVM classifier by leveraging source classifiers, while Chattopadhyay et al. (2012) assign pseudo-labels to the target data. Finally, Wu & Huang (2016) exploit general sentiment knowledge and word-level sentiment polarity relations for multi-source domain adaptation.
"}, {"section_index": "3", "section_name": "3.1 PROBLEM DEFINITION", "section_text": "In the following, we describe domain adaptation within the knowledge adaptation framework: We are provided with one or multiple source domains $D_{S_i}$ and a target domain $D_T$. For each of the source domains, we are provided with a teacher model $T_i$ that was trained on examples $X_{S_i} = \{x_1, \ldots, x_n\}$ and their labels $\{y_1, \ldots, y_n\}$ from $D_{S_i}$. In the target domain $D_T$, we only have access to the examples $\{x_1, \ldots, x_m\}$ without knowledge of their labels. Note that we omit source and target domain indexes in the following for simplicity in cases where examples are unambiguous. Our task is now to train a student model $S$ that performs well on unseen examples from the target domain $D_T$.
Our teacher and student models are simple multilayer perceptrons (MLP). The basic MLP consists of an input layer, one or multiple intermediate layers, and an output layer. Each intermediate layer $l$ learns to embed the output of the previous layer $x$ into a latent representation $h_l = f_l(W_l x + b_l)$, where $W_l$ and $b_l$ are the weights and bias of the $l$th layer, while $f_l$ is the activation, typically ReLU; a softmax activation is used in the output layer.
In the single source setting, the teacher $T$ has an output softmax $P^T = \mathrm{softmax}(z^T)$, where $z^T$ are the logits of the teacher's output layer. $T$ is trained to minimize the loss $\mathcal{L}_T = H(y_i, P^T)$, where $H$ refers to the cross-entropy and $y_i$ is the label of the $i$th training example in the source domain $D_S$.
The student $S$ similarly models an output probability $P^S = \mathrm{softmax}(z^S)$, where $z^S$ are the logits of the student's output layer. In the context of knowledge distillation (Hinton et al., 2015), the student $S$ is trained so that its output $P^S$ is similar to the teacher's output $P^T$ and to the true labels.
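A minimal sketch of the teacher and student networks as described (one intermediate layer; the hidden size of 1,000 and the vocabulary size of 10,000 are taken from Section 4.2). PyTorch is used purely as an illustration; names are our own.

import torch.nn as nn

def make_mlp(vocab_size=10000, hidden_dim=1000, n_classes=2):
    # One intermediate ReLU layer over tf-idf bag-of-word features; the final
    # layer produces the logits z, to which a softmax is applied in the loss.
    return nn.Sequential(
        nn.Linear(vocab_size, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, n_classes),
    )

teacher = make_mlp()  # trained on labeled source domain examples
student = make_mlp()  # trained on the teacher's softened predictions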
In practice, the output probability of the teacher is smoothed with a temperature $\tau$ to soften the signal and provide more information during training. The same temperature $\tau$ is applied to the output of the student network for the comparison:
$P_\tau^T = \mathrm{softmax}(z^T / \tau), \qquad P_\tau^S = \mathrm{softmax}(z^S / \tau).$   (1)
For unsupervised domain adaptation, true labels in the target domain $D_T$ are not available. Thus the student $S$ is trained solely to mimic the teacher's softened output with the following loss, which is similar to treating source input modalities as privileged information (Lopez-Paz et al., 2016):
$\mathcal{L}_S = H(P_\tau^T, P_\tau^S).$   (2)
Figure 1: Training procedures for a) the teacher model, b) the student model, and c) the student model with multiple teachers. The teacher is trained on examples $x^S$ and their true labels $y^S$ in the source domain $D_S$, while the student is trained on the softened predictions of one or multiple teachers of examples $x^T$ in the target domain $D_T$.
The teacher-student paradigm lends itself naturally to the scenario with multiple source domains. Intuitively, the trust that a student should place in a teacher should be proportional to the degree of similarity between the teacher's domain and the student's domain.
To this end, we consider three measures of domain similarity, which have been successfully used in domain adaptation research: Jensen-Shannon divergence (Remus, 2012) and Renyi divergence (Van Asch & Daelemans, 2010), which are both based on Kullback-Leibler divergence and are computed with regard to the domains' term distributions; and Maximum Mean Discrepancy (Tzeng et al., 2014), which we compute with respect to the teacher's latent representation. These measures are computed between the target domain $D_T$ and every source domain $D_S$ (additional information with regard to our choice and use of domain similarity measures can be found in Appendix A.1).
The student model with multiple teachers is then trained to imitate the sum of the teachers' individual predictions weighted with the normalized similarity $\mathrm{sim}(D_{S_i}, D_T)$ of their respective source domain $D_{S_i}$ to the target domain $D_T$:
$\mathcal{L}_{MUL} = H\big(\textstyle\sum_{i=1}^{N} \mathrm{sim}(D_{S_i}, D_T) \cdot P_\tau^{T_i},\; P_\tau^S\big).$   (3)
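The losses in equations 1-3 can be sketched as follows. This is our own PyTorch rendering; in particular, how the similarity weights are derived from Jensen-Shannon divergence and normalised is our assumption, as the exact mapping is deferred to Appendix A.1.

import torch
import torch.nn.functional as F

def soften(logits, tau=5.0):
    # Eq. (1): P_tau = softmax(z / tau).
    return F.softmax(logits / tau, dim=-1)

def distill_loss(teacher_logits, student_logits, tau=5.0):
    # Eq. (2): L_S = H(P_tau^T, P_tau^S), cross-entropy under soft targets.
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    return -(soften(teacher_logits, tau) * log_p_s).sum(dim=-1).mean()

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two term distributions p and q.
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * ((a + eps) / (b + eps)).log()).sum()
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def multi_teacher_loss(teacher_logits_list, sims, student_logits, tau=5.0):
    # Eq. (3): imitate the similarity-weighted sum of the teachers' softened
    # predictions. sims are the similarities of each source domain to the
    # target (e.g. decreasing in JS divergence), normalised to sum to one.
    weights = torch.tensor(sims) / sum(sims)
    target = sum(w * soften(l, tau) for w, l in zip(weights, teacher_logits_list))
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    return -(target * log_p_s).sum(dim=-1).mean()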
General measures of domain similarity are useful in the multi-source setting, where we can rely on multiple teachers and choose to trust one more than the others. In the scenario with a single teacher, it is not helpful to know whether we can trust the teacher in general. We rather want a measure that allows us to determine if we can trust the teacher for a specific example.
To arrive at such a measure, we revisit the representations the teacher learns from the input data. In order to make accurate predictions, the teacher model learns to separate the representation of different output classes in its hidden representation (we use a one-layer MLP in our experiments as detailed in Section 4.2; in deeper networks, this would be an intermediate layer). Even though the teacher model is trained on the source domain, this separation still holds - albeit with decreased accuracy - in the target domain. This can be seen in Figure 2, where examples in the target domain that were predicted as positive and negative by the teacher form distinct clusters (refer to Section 4.1 for details with regard to the data and task). Importantly, many of these predictions are incorrect.
Figure 2: PCA visualization of a teacher's latent representations of target domain examples for the K->D domain pair (see Section 4.1 for details). A darker color reflects a higher MCD value. Best viewed in close-up.
As evidenced in Figure 2, incorrect predictions are frequent along the decision boundary and infrequent along the cluster edges, where examples are less ambiguous. More precisely, the accuracy of the teacher's predictions on the target domain is proportional to the absolute difference in similarity of the teacher's representation $h$ with the cluster centroids, which we refer to as Maximum Cluster Difference (MCD) and define as follows:
$\mathrm{MCD}_h = |\cos(c_p, h) - \cos(c_n, h)|,$   (4)
where $c_p$ and $c_n$ are the centroids of the positive and negative cluster respectively as predicted by the teacher, i.e. the mean representation of all examples assigned to the cluster by the teacher. Note that while we are focusing on binary classification involving two clusters, the measure is equally applicable to the multi-class setting, as demonstrated in Appendix A.2.
Evidence of the efficacy of this measure for obtaining the trustworthiness of a teacher for an example can be found in the PCA visualization (see footnote 1) in Figure 2, where incorrect predictions are far less common for (more darkly colored) examples with higher MCD values. Additionally, the MCD score of a target domain example and the accuracy of the teacher's prediction correlate with an average Pearson's r of 0.33 and p < 0.05 across all domain pairs of the data described in Section 4.1; we furthermore plot the teacher's accuracy for the top n target domain examples with the highest MCD values in Figure 3; while the measure becomes less accurate as n increases, it is very accurate for low n.
Figure 3: Accuracy of the teacher's predictions on the top n target domain examples with the highest MCD value for the K->D domain pair.
For this reason, rather than weighing all examples with MCD, we propose to add the n unlabeled training examples with the highest MCD, with their teacher-assigned labels, as pseudo-supervised examples on which we train the student with the following objective:
$\mathcal{L}_S = H\big((1-\lambda) \cdot y_{teacher} + \lambda P_\tau^T,\; P_\tau^S\big),$   (5)
where $y_{teacher}$ is the indicator array containing 1 at the index $\mathrm{argmax}(P^T)$ and 0 at all other indexes, while $\lambda$ determines the contribution of the soft targets. This can be seen as a representation-based variant of instance adaptation (Jiang & Zhai, 2007), which uses MCD as a measure of confidence, as it correlates better with teacher accuracy than the teacher's prediction probability. In practice, we alternate unsupervised training with the objective in equation 2 and pseudo-supervised training with the objective in equation 5, although other curricula are imaginable.
1 A visualization using t-SNE revealed the same clusters. However, PCA showed a clearer decision boundary.
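Maximum Cluster Difference and the pseudo-supervised objective in equation 5 can be sketched as follows (again our own PyTorch rendering; variable names are ours).

import torch
import torch.nn.functional as F

def mcd_scores(hidden, teacher_predictions):
    # Eq. (4). hidden: (m, d) teacher representations of target examples;
    # teacher_predictions: (m,) classes predicted by the teacher (1 = positive).
    c_p = hidden[teacher_predictions == 1].mean(dim=0)  # positive centroid
    c_n = hidden[teacher_predictions == 0].mean(dim=0)  # negative centroid
    return (F.cosine_similarity(hidden, c_p.unsqueeze(0))
            - F.cosine_similarity(hidden, c_n.unsqueeze(0))).abs()

def pseudo_label_loss(teacher_probs_tau, student_logits, lam=0.2, tau=5.0):
    # Eq. (5): interpolate the teacher-assigned one-hot label with the
    # softened teacher distribution; lam weighs the soft targets.
    y_teacher = F.one_hot(teacher_probs_tau.argmax(dim=-1),
                          teacher_probs_tau.shape[-1]).float()
    target = (1 - lam) * y_teacher + lam * teacher_probs_tau
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    return -(target * log_p_s).sum(dim=-1).mean()

# The n = 500 highest-MCD target examples would then be selected with, e.g.:
# top_idx = mcd_scores(hidden, preds).topk(500).indices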
"}, {"section_index": "4", "section_name": "4.1 DATA SET", "section_text": "We use the Amazon product reviews sentiment analysis dataset of Blitzer et al. (2006), a common benchmark for domain adaptation. The dataset consists of 4 different domains: Book (B), DVDs (D), Electronics (E) and Kitchen (K). We follow the conventions of past work and evaluate on the binary classification task where reviews with more than 3 stars are considered positive and reviews with 3 stars or fewer are considered negative. Each domain contains 1,000 positive, 1,000 negative, and approximately 4,000 unlabeled reviews. For fairness of comparison, we use the raw bag-of-words unigram/bigram features pre-processed with tf-idf as input (Blitzer et al., 2006).
For single-source adaptation, we replicate the set-up of previous methods and train our teacher models on all 2,000 labeled examples, of which we reserve 200 as dev set. For domain adaptation from multiple sources, we follow the conventions of Bollegala et al. (2011) and limit the total number of training examples for all teachers to 1,600, i.e. given three source domains, each teacher is only trained on about 533 labeled samples. We also train a general teacher on the same 1,600 examples of the three domains. In both scenarios, the student is evaluated on all 2,000 labeled samples of the target domain. As we have not found a universally applicable way to optimize hyperparameters or perform early stopping for unsupervised domain adaptation, we choose to use a small number of unlabeled examples as a labeled validation set, similar to Bousmalis et al. (2016).
"}, {"section_index": "5", "section_name": "4.2 HYPERPARAMETERS", "section_text": "Both student and teacher models are one-layer MLPs with 1,000 hidden dimensions. We use a vocabulary size of 10,000, a temperature of 5, a batch size of 10, and Adam (Kingma & Ba, 2015) as optimizer with a learning rate of 0.001. For every experiment, we report the average of 10 runs.
As it is easier for the student to assign trust when learning from multiple teachers, we first conduct experiments on the sentiment analysis benchmark for domain adaptation from multiple sources. For each experiment, one of the four domains is used as the target domain, while the remaining ones are treated as source domains.
Domain similarity. We first evaluate the performance of our student depending on different measures of domain similarity, with which we interpolate the predictions of the teachers. As evidenced in Table 2 provided in the appendix, Jensen-Shannon divergence generally performs best. We thus use this measure for the remainder of the experiments.
Our models. For multi-source domain adaptation, we first consider a teacher-only baseline (Teacher-only), where teacher sentiment probabilities are combined, weighted with Jensen-Shannon divergence, and the most likely sentiment is chosen. We further train our student on a) the source domain-specific teachers as detailed in Section 3.3, b) the general teacher trained on all source domains as described in Section 4.1, and on c) the combination of source domain and general teachers.
Comparison models. We compare our models against the following methods: domain adaptation with structural correspondence learning (SCL) (Blitzer et al., 2006); domain adaptation based on spectral feature alignment (SFA) (Pan et al., 2010); adaptations of SCL and SFA via majority voting to the multi-source scenario (SCL-com and SFA-com); cross-domain sentiment classification by constructing a sentiment-sensitive thesaurus (SST) (Bollegala et al., 2011); multiple-domain sentiment analysis by identifying domain dependent/independent word polarity (IDDIWP) (Yoshida et al., 2011); three general-purpose multiple source domain adaptation methods (DWHC, Mansour (2009); DAM, Duan et al. (2009); CP-MDA, Chattopadhyay et al. (2012)); and cross-domain sentiment classification by transferring sentiment along a sentiment graph with hinge loss and logistic loss respectively (SDAMS-SVM and SDAMS-Log) (Wu & Huang, 2016). Numbers are used as reported by Wu & Huang (2016).
Results. All results are depicted in Table 1. Evaluating the combination of the source teacher models directly on the target domain (Teacher-only) produces the worst results, which underscores the need for methods that allow adaptation to the target domain.
Training the student model on the soft targets of the teachers allows us to improve upon the teacher-only baseline significantly, thereby demonstrating the appropriateness of the teacher-student paradigm to the domain adaptation scenario. The student model outperforms comparison methods that rely on source model predictions by combining (Mansour, 2009) or predicting (Duan et al., 2009) them. This showcases the usefulness of learning from soft targets in the domain adaptation scenario. Training on a general teacher model as well as on a combination of the general teacher and the source domain teachers allows us to improve results even further. Both models improve over existing approaches to domain adaptation from multiple sources and outperform approaches that rely on sentiment analysis-specific information (Wu & Huang, 2016) in all but the electronics domain.

                                       Book    DVD     Electronics  Kitchen
SCL (Blitzer et al., 2006)             0.7457  0.7630  0.7893       0.8207
SFA (Pan et al., 2010)                 0.7598  0.7848  0.7808       0.8210
SCL-com                                0.7523  0.7675  0.7918       0.8247
SFA-com                                0.7629  0.7869  0.7864       0.8258
SST (Bollegala et al., 2011)           0.7632  0.7877  0.8363       0.8518
IDDIWP (Yoshida et al., 2011)          0.7524  0.7732  0.8167       0.8383
DWHC (Mansour, 2009)                   0.7611  0.7821  0.8312       0.8478
DAM (Duan et al., 2009)                0.7563  0.7756  0.8284       0.8419
CP-MDA (Chattopadhyay et al., 2012)    0.7597  0.7792  0.8331       0.8465
SDAMS-SVM (Wu & Huang, 2016)           0.7786  0.7902  0.8418       0.8578
SDAMS-Log (Wu & Huang, 2016)           0.7829  0.7913  0.8406       0.8629
Teacher-only                           0.7565  0.7765  0.7960       0.8210
Student (source teachers)              0.7918  0.7968  0.8203       0.8523
Student (general teacher)              0.8014  0.8062  0.8365       0.8675
Student (source teachers + general)    0.8010  0.8088  0.8311       0.8647

Table 1: Average results for domain adaptation from multiple sources for the comparison models and ours on the sentiment analysis benchmark. For the results in each column, the domain in the column header is used as target domain and the remaining three domains are used as source domains.
"}, {"section_index": "6", "section_name": "4.4 SINGLE-SOURCE DOMAIN ADAPTATION", "section_text": "We additionally evaluate the ability of the student to only learn from a single teacher. This scenario is more challenging, as the student cannot consider other teachers that might provide more relevant predictions. For each target domain, each of the three other domains is used as source domain, yielding 12 domain pairs.
Our models. On these domain pairs, we first evaluate our teacher-student (TS) model. For training a model that incorporates high-confidence predictions of the teacher (TS-MCD), we cross-validate the interpolation parameter $\lambda$ in equation 5 and the number n of examples with the highest MCD scores. We find that a low $\lambda$ (around 0.2) generally yields the best results in the domain adaptation setting, as the high-confidence predictions are helpful to guide the student's learning during training. Additionally, using the top 500 unlabeled target domain examples with the highest MCD scores for pseudo-supervised training of the student produces the best results.
Comparison models. For the single-source case, we similarly compare against SCL (Blitzer et al., 2006) and SFA (Pan et al., 2010), as well as against multi-label consensus training (MCT), which
Comparison models. For the single-source case, we similarly compare against SCL (Blitzer et al., 2006) and SFA (Pan et al., 2010), as well as against multi-label consensus training (MCT), which combines base classifiers trained with SCL (Li & Zong, 2008), and against an approach that links heterogeneous input features with pivots via non-negative matrix factorization (PJNMF) (Zhou et al., 2015). We additionally compare against the following deep learning-based approaches: stacked denoising auto-encoders (SDA) (Glorot et al., 2011); marginalized SDA (mSDA) (Chen et al., 2012); transfer learning with deep auto-encoders (TLDA) (Zhuang et al., 2015); and bi-transferring deep neural networks (BTDNN) (Zhou et al., 2016). Numbers are used as reported by Zhou et al. (2016).
Results. The results can be seen in Figure 4. The student trained on the source domain teacher (TS) achieves convincing results and outperforms the state-of-the-art on three domain pairs - twice with the Book domain as source domain - suggesting that knowledge acquired from the Book domain may be more easily transferable to a student model. For many domain pairs, the student still falls significantly short of the state-of-the-art, which highlights that relying solely on a single teacher's predictions is insufficient to bridge the discrepancy between the domains. Instead, additional methods are necessary to provide evidence for the student of when to trust the teacher's predictions. Leveraging the teacher's knowledge by incorporating high-confidence examples selected by MCD into the training (TS-MCD) improves the performance of the student significantly in almost all cases. This allows the student to outperform the state-of-the-art on 8 out of 12 domain pairs, without expensive joint training on source and target data, and depending solely on a single model trained on the source domain, which is typically readily available.
[Figure 4: bar charts of accuracy for SCL, MCT, SFA, PJNMF, SDA, mSDA, TLDA, BTDNN, TS, and TS-MCD, one panel per domain pair (e.g. B->D, E->D, K->D, D->B, E->B, K->B, B->K, D->K, E->K).]
Figure 4: Average results for single-source domain adaptation for the comparison models and our models on the sentiment analysis benchmark. B: Book. D: DVD. E: Electronics. K: Kitchen."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "In this work, we have proposed Knowledge Adaptation, an extension of the Knowledge Distillation idea to the domain adaptation scenario. This method - in contrast to prevalent domain adaptation methods - is able to perform adaptation without re-training. We first demonstrated the benefit of this paradigm by showing that a student model that takes into account the predictions of multiple teachers and their domain similarities is able to outperform the state-of-the-art for multi-source unsupervised domain adaptation on a standard sentiment analysis benchmark. We additionally introduced a simple measure to gauge the trustworthiness of a single teacher and showed how this measure can be used to achieve state-of-the-art results on 8 out of 12 domain pairs for single-source unsupervised domain adaptation."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank John Glover and Chris Hokamp for fruitful discussions. This publication has emanated from research conducted with the financial support of the Irish Research Council (IRC) under Grant Number EBPPG/2014/30 and with Aylien Ltd.
as Enterprise Partner, as well as from research supported by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pp. 440-447, 2007.
Danushka Bollegala, David Weir, and John Carroll. Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pp. 132-141, 2011.
Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. In NIPS, 2016.
Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '06), pp. 535-541, 2006.
Rita Chattopadhyay, Qian Sun, Jieping Ye, Sethuraman Panchanathan, Wei Fan, and Ian Davidson. Multi-source domain adaptation and its application to early detection of fatigue. ACM Transactions on Knowledge Discovery from Data (TKDD), 6(4), 2012.
Minmin Chen, Zhixiang Xu, Kilian Q. Weinberger, and Fei Sha. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pp. 767-774, 2012.
Kathleen Corriveau and Paul L. Harris. Choosing your informant: weighing familiarity and recent accuracy. Developmental Science, 12(3):426-437, 2009.
Lixin Duan, Ivor W. Tsang, Dong Xu, and Tat-Seng Chua. Domain adaptation from multiple sources via auxiliary classifiers. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. Harnessing deep neural networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016.
Jing Jiang and ChengXiang Zhai. Instance weighting for domain adaptation in NLP. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 264-271, 2007.
Devon Johnson and Kent Grayson. Cognitive and affective trust in service relationships. Journal of Business Research, 58(4):500-507, 2005.
Yoon Kim and Alexander M. Rush. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-16), 2016.
Jinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. Learning small-size DNN with output-distribution-based criteria. In INTERSPEECH, pp. 1910-1914, 2014.
Shoushan Li and Chengqing Zong.
Multi-domain adaptation for sentiment classification: Using multiple classifier combining methods. In International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE'08). IEEE, 2008.
Mingsheng Long and Jianmin Wang. Learning multiple tasks with deep relationship networks. arXiv preprint arXiv:1506.02117, 2015.
David Lopez-Paz, Leon Bottou, Bernhard Scholkopf, and Vladimir Vapnik. Unifying distillation and privileged information. In ICLR, 2016.
Yishay Mansour. Domain adaptation with multiple sources. In NIPS, 2009.
Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135, 2008.
Robert Remus. Domain adaptation using domain similarity- and domain complexity-based instance selection for cross-domain sentiment analysis. In IEEE ICDM SENTIRE-2012, 2012.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. In ICLR, 2015.
Fangzhao Wu and Yongfeng Huang. Sentiment domain adaptation with multiple sources. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pp. 301-310, 2016.
Dong Yu, Kaisheng Yao, Hang Su, Gang Li, and Frank Seide. KL-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition. In ICASSP, pp. 7893-7897, 2013.
Guangyou Zhou, Tingting He, Wensheng Wu, and Xiaohua Tony Hu. Linking heterogeneous input features with pivots for domain adaptation. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), pp. 1419-1425, 2015.
Fuzhen Zhuang, Xiaohu Cheng, Ping Luo, Sinno Jialin Pan, and Qing He. Supervised representation learning: Transfer learning with deep autoencoders. In IJCAI, pp. 4119-4125, 2015."}, {"section_index": "10", "section_name": "A.1 DOMAIN SIMILARITY MEASURES", "section_text": "Jensen-Shannon divergence is a smoothed, symmetric variant of KL divergence. The Jensen-Shannon divergence between two probability distributions P and Q can be written as
D_{JS}(P || Q) = \frac{1}{2} D_{KL}(P || M) + \frac{1}{2} D_{KL}(Q || M)
where M = \frac{1}{2}(P + Q), i.e. the average distribution of P and Q, and D_{KL} is the KL divergence
D_{KL}(P || Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}
Renyi divergence similarly generalizes KL divergence by assigning different weights to the probability distributions of the source and target domain and is defined as follows:
D_{\alpha}(P || Q) = \frac{1}{\alpha - 1} \log \sum_{i=1}^{n} \frac{p_i^{\alpha}}{q_i^{\alpha - 1}}
These domain similarity measures are typically based on the term distributions of the source and target domains, i.e. the probability distribution P of a domain is the term distribution t \in R^{|V|}, where t_i is the relative probability of word w_i appearing in the domain and |V| is the size of the vocabulary of the domain. The intuition behind using term distributions is that similar domains usually have more terms in common than dissimilar domains.
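As a small illustration of how these quantities could be computed over term distributions (a sketch under the definitions above, not the authors' code; deriving weights as 1 - JS is one simple choice we assume here):

import numpy as np

def kl(p, q, eps=1e-12):
    return np.sum(p * np.log((p + eps) / (q + eps)))

def js(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def teacher_weights(p_target, p_sources):
    # p_target, p_sources: term distributions (non-negative, summing to 1).
    # Teachers from source domains more similar to the target get larger weights.
    sims = np.array([1.0 - js(p_target, p_s) for p_s in p_sources])
    return sims / sims.sum()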
While term distributions are efficient to compute and have proven effective in previous work (Van Asch & Daelemans, 2010; Wu & Huang, 2016), they only capture shallow occurrence statistics.
Table 2: Comparison of the impact of different domain similarity measures on the student's performance when used for interpolating the predictions of the source domain teacher models. For the results in each column, the domain in the column header is used as target domain and the remaining three domains are used as source domains.
Another family of similarity metrics, such as MMD, is based on representations. MMD measures the distance between a source and a target distribution with respect to a particular representation \phi. The MMD between the source data X_S and the target data X_T is defined as follows:
MMD(X_S, X_T) = \left\| \frac{1}{|X_S|} \sum_{x_s \in X_S} \phi(x_s) - \frac{1}{|X_T|} \sum_{x_T \in X_T} \phi(x_T) \right\|
The representation \phi is usually obtained by embedding the source data and target data in a Reproducing Kernel Hilbert Space via a specifically chosen kernel; e.g. Bousmalis et al. (2016) use a linear combination of RBF kernels. Similarly to Tzeng et al. (2014), we use the hidden representation of a neural network as the basis for \phi, as we are interested in how well the teacher's representation captures difference in domain.
In our experiments, MMD does not outperform the more traditional term distribution-based similarity measures, which we attribute to two reasons: 1) Due to the limited amount of data, our teacher model is not deep enough to capture the difference in domain in its single hidden layer; Tzeng et al. (2014), in contrast, identify the fully-connected layer fc7 in the AlexNet architecture as the layer minimizing MMD. 2) The teacher is only trained on the source domain data. Its representation is thus not sensitive enough to detect the domain shift to the target domain. Training a separate model to minimize MMD alleviates this, but incurs additional computational costs and requires retraining on the source data during adaptation, which we set out to avoid in order to enable efficient adaptation.
Another commonly used measure of domain similarity is A-distance. Ben-David et al. (2007) show that computing the A-distance between two domains reduces to minimizing the empirical risk of a classifier that tries to discriminate between the examples in those domains. Previous work (Blitzer et al., 2007) uses the Huber loss and a linear classifier for computing the A-distance. In our experiments, A-distance did not outperform Jensen-Shannon divergence, while its reliance on training a classifier is a downside in our scenario with multiple or changing target domains, where we would prefer more efficient measures of domain similarity."}, {"section_index": "11", "section_name": "A.2 MULTI-CLASS MCD", "section_text": "Maximum cluster difference can be easily extended to the multi-class setting. For n classes, we compute n cluster centroids for the clusters whose members have been assigned the same class by the model. We then create a set C containing all n(n - 1)/2 unique pairs of cluster centroids. Finally, we compute the sum of pair-wise differences of the model's representation h with regard to the cluster centroid pairs:
MCD_{multi}(h) = \sum_{(c_1, c_2) \in C} | \cos(c_1, h) - \cos(c_2, h) |"}]
HJcLcw9xg
[{"section_index": "0", "section_name": "THE PREIMAGE OF RECTIFIER NETWORK ACTIVITIES", "section_text": "Stefan Carlsson, Hossein Azizpour and Ali Razavian\nStockholm. Sweden\nThe preimage of the activities of all the nodes at a certain level of a deep net work is the set of inputs that result in the same node activity. For fully connectec multi layer rectifier networks we demonstrate how to compute the preimages oj activities at arbitrary levels from knowledge of the parameters in a deep rectifying network by disregarding the effects of max-pooling. If the preimage set of a cer tain activity in the network contains elements from more than one class it mean. that these classes are irreversibly mixed. This implies that preimage sets which are piecewise linear manifolds are building blocks for describing the input man ifolds specific classes, i.e. all preimages should ideally be from the same class We believe that the knowledge of how to compute preimages will be valuable ir understanding the efficiency displayed by deep learning networks and could po tentially be used in designing more efficient training algorithms"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The activity of the nodes at each level of a deep network contains all the information that will be used for classification. Ideally, if the activities are generated by two inputs from the same class. they should be similar and if the classes are distinct the activities should be distinct. The map from. the input to any layer of a deep network can however easily be shown to be many to one. This. means that classes can potentially get mixed at any level of the network by mapping to the same. activity. This mixing cannot be undone at later stages and must therefore be avoided. Given a certain. activity it is therefore essential to know the set of inputs to the network that result in this activity This set should obviously not contain exemplars from more than one class. This means that they. are potential building blocks for designing efficient classifiers. In this paper we will demonstrate. that the set of inputs resulting in a specific activity at any level of a deep rectifier network can be. completely characterised and we will give a procedure for computing them. For a specific activity. at any level they are known as the preimage of the function mapping the input to the activity. In. this procedure we disregard the effects of pooling the outputs of node activities. This can be seen. as complementary to the work in|Mahendran & Vedaldi (2015f 2016) where specific preimages are computed numerically by a regularised optimisation procedure that tries to map the image to the. natural image manifold.\nFor multi layer rectifier networks where each layer consists of linear mappings followed by a rectify. ing linear unit (ReLU), the set of possible functions that map inputs to node activities can be shown to be piecewise linearGlorot et al.(2011);Montufar et al.(2014). We will demonstrate that for a. specific activity at any level, the equivalence class of inputs that can generate this activity consists. of piecewise linear manifolds in the input space. For efficient classification by the network, these. manifolds must only contain a single class. 
They therefore constitute building blocks for efficient approximation of the distribution of classes in the input space."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Multi-layer networks with rectifier linear units (ReLU) as non-linear elements start with the n-dimensional input vector x and produce successive outputs of the form:
y = \left[ \sum_{i=1}^{n} a_i x_i + b \right]_+
where [x]_+ denotes the ReLU function max(0, x). By augmenting the input vector with a one, x^T = (x_1, ..., x_n, 1), we can absorb the bias b into the vector as a weight a_{n+1} and write
y = \left[ \sum_{i=1}^{n+1} a_i x_i \right]_+ = [w^T x]_+
where we have collected the weights a_i into the vector w. We will consider networks that are fully connected at each level l, i.e. the chain of mappings between layers x -> x^{(1)} -> x^{(2)} -> ... -> x^{(l)}. This is a somewhat generalised model compared to the more standard convolutional networks with multiple kernels at each level. It can however be easily specialised to the convolutional case, which we will do later.
For each point x^{(l+1)} in the activity output space of level l + 1 we can define the preimage set P(x^{(l+1)}) of activities x^{(l)} at level l that map to this activity. We can illustrate this as in Figure 1 with examples of preimages of the mapping between successive layers for a simple 2-node network:
x_1^{(l+1)} = [w_{1,1}^{(l+1)} x_1^{(l)} + w_{1,2}^{(l+1)} x_2^{(l)} + w_{1,3}^{(l+1)}]_+
x_2^{(l+1)} = [w_{2,1}^{(l+1)} x_1^{(l)} + w_{2,2}^{(l+1)} x_2^{(l)} + w_{2,3}^{(l+1)}]_+
If there was no non-linear rectifying element, the mapping would be just the linear-plus-bias transformation from layer l to layer l + 1, and it could be read out from the figure by noting the respective coordinates in the orthogonal l-system and the skewed (l+1)-system:
\begin{pmatrix} x_1^{(l+1)} \\ x_2^{(l+1)} \\ 1 \end{pmatrix} = \begin{pmatrix} w_{1,1}^{(l+1)} & w_{1,2}^{(l+1)} & w_{1,3}^{(l+1)} \\ w_{2,1}^{(l+1)} & w_{2,2}^{(l+1)} & w_{2,3}^{(l+1)} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1^{(l)} \\ x_2^{(l)} \\ 1 \end{pmatrix}
By concatenating these transformations from the first layer we can express the activity at level (l + 1) directly as an affine mapping from the input space (x_1, x_2):
\begin{pmatrix} x_1^{(l+1)} \\ x_2^{(l+1)} \\ 1 \end{pmatrix} = \begin{pmatrix} w_{1,1}^{(l+1)} & w_{1,2}^{(l+1)} & w_{1,3}^{(l+1)} \\ w_{2,1}^{(l+1)} & w_{2,2}^{(l+1)} & w_{2,3}^{(l+1)} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} w_{1,1}^{(l)} & w_{1,2}^{(l)} & w_{1,3}^{(l)} \\ w_{2,1}^{(l)} & w_{2,2}^{(l)} & w_{2,3}^{(l)} \\ 0 & 0 & 1 \end{pmatrix} \cdots \begin{pmatrix} w_{1,1}^{(1)} & w_{1,2}^{(1)} & w_{1,3}^{(1)} \\ w_{2,1}^{(1)} & w_{2,2}^{(1)} & w_{2,3}^{(1)} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ 1 \end{pmatrix}
In Figure 2 the coordinate systems depict these mappings from the input space (x_1, x_2) to the node activities at levels 1, 2 and 3, and show how the preimages at the output level can be traced as elements in input space forming piecewise linear manifolds. Note that disjoint piecewise linear regions in input space (x_1, x_2) get mapped to distinct points on the two output axes (x_1^{(3)}, x_2^{(3)}), while all the points in the grey shaded area get mapped to the same point. This illustrates the network's ability to map non-linear input regions into linear output regions, which is essential for successful classification.
[Figure 1: preimage sets P(x^{(l+1)}) for a 2-node layer, shown per quadrant of (w_1^T x, w_2^T x).]
Figure 1: Mapping from activity x at level l to level l + 1 and the associated preimage sets P(x^{(l+1)}). In the all-positive quadrant the mapping is just linear, while in the other quadrants the input gets mapped to 0 output for either x_1^{(l+1)} or x_2^{(l+1)} or both. This generates preimage sets depending on the quadrant.
We collect the linear mappings at level l in the matrix W. The number of rows of this matrix is the dimensionality of the output, which can vary, but we will assume that W always has full rank in order to focus on problems induced by the non-linear ReLU element.
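To make the bias-absorbing convention concrete, here is a small numpy sketch (ours, not the authors') of the augmented per-layer map and why its ReLU-free composition collapses to a single affine map:

import numpy as np

def layer(W_aug, x_aug):
    # W_aug: (n_out + 1, n_in + 1) augmented matrix whose last row is (0, ..., 0, 1)
    # x_aug: input vector with a trailing 1, so the bias acts as a weight.
    y = W_aug @ x_aug
    y[:-1] = np.maximum(y[:-1], 0.0)  # ReLU on all but the homogeneous coordinate
    return y

# Without the ReLU, composing layers is just the matrix product
# W_aug_L @ ... @ W_aug_1; the ReLU is what makes the map piecewise linear.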
If we denote by [x]_+ the output vector with component-wise application of the ReLU function on the vector x, we can then write
x^{(l+1)} = [W x^{(l)}]_+
for the mapping of activities from layer l to layer l + 1, with per-node outputs x_i^{(l+1)} = [w_i^T x^{(l)}]_+. For each element x^{(l+1)} the preimage set of this mapping will be the set
P(x^{(l+1)}) = \{ x : x^{(l+1)} = [W x]_+ \}
[Figure 2: preimages traced through three 2-node layers of a rectifier network with input (x_1, x_2).]
Figure 2: Preimages at various levels of a rectifier network with input (x_1, x_2); inputs mapped to the same output activity are irreversibly mixed.
Denote by p the number of nodes with zero output. For the case p = 0 we have a trivial linear mapping from the previous layer to only positive values of the output. This means that the preimage is just the point x^{(l)}. In the general case where p > 0 the preimage will contain elements x such that w_i^T x < 0 for i = i_1, i_2, ..., i_p. In order to identify these we will define the null spaces of the linear mappings w_j:
\Pi_j = \{ x : w_j^T x = 0 \}, \quad j = 1, ..., n
These null spaces are hyperplanes in input space. Obviously, any input element x that is mapped to the negative side of the hyperplane generated by the mapping w_j^T will get mapped to this hyperplane by the ReLU function. In order to identify this mapping we will define a set of basis vectors for elements of the input space from the one-dimensional linear subspaces generated by the intersections
\lambda_i = \Pi_1 \cap \Pi_2 \cap ... \cap \Pi_{i-1} \cap \Pi_{i+1} \cap ... \cap \Pi_n
Each one-dimensional subspace \lambda_i is generated by intersecting the hyperplanes associated with the nullspaces of the remaining linear mapping kernels. The fact that these intersections generate one-dimensional subspaces can be seen most easily using e.g. Grassmann-Cayley algebra (Carlsson, 1993), or by just noting that each intersection with a hyperplane gives rise to a linear manifold of dimension one lower. For each subspace \lambda_i we can now define a basis unit vector e_i such that each element of \lambda_i can be expressed as x = \alpha_i e_i. We can also define the direction and length of e_i by requiring that w_i^T e_i = 1. The assumed full rank of the mapping W guarantees that the system e_1, e_2, ..., e_n is complete in the input space. We can therefore express any vector as
x = \sum_{i=1}^{n} \alpha_i e_i
Since e_i is in the nullspace of every remaining kernel except i, we have
w_j^T e_i = 0, \quad i \neq j
and consequently
w_i^T x = \alpha_i
The subspace coordinates \alpha_i are therefore a convenient tool for identifying the preimage of the mapping between successive layers in a rectifier network: for j = i_1, i_2, ..., i_p (the nodes mapped to zero) we will have \alpha_j < 0, and for the remaining indices j = j_1, j_2, ..., j_q we will have \alpha_j > 0. Any element in the input space can be expressed as the linear combination
x = \alpha_{i_1} e_{i_1} + \alpha_{i_2} e_{i_2} + ... + \alpha_{i_p} e_{i_p} + \alpha_{j_1} e_{j_1} + \alpha_{j_2} e_{j_2} + ... + \alpha_{j_q} e_{j_q}
The preimage of its activity then consists of all points on the linear manifold obtained by keeping the coordinates of the positively-mapped directions fixed and letting the coordinates of the zeroed directions vary over non-positive values.
Figure 3 illustrates the associated hyperplanes \Pi_1, \Pi_2, \Pi_3 in the case of three nodes, and the respective unit vectors e_1, e_2, e_3 with positive directions indicated by arrows. For the all-positive octant, i.e. all w_i^T x > 0, the linear mapping is full rank and the preimage is just the associated input (x_1, x_2, x_3). For three other octants the preimages for three selected points are illustrated in the figure caption below.
For a multi-level network, preimages for elements that are mappings between successive levels will therefore consist of pieces of linear manifolds in the input space at that level, of dimensions determined by the number of nodes with positive output for that element. By mapping back to the original input space, preimages for specific elements at a certain level will be piecewise linear manifolds, the elements of which all map to that specific element.
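The basis vectors e_i can be computed directly from W. The following numpy sketch (our illustration of the procedure above, with the scaling convention w_i^T e_i = 1) recovers them as nullspace intersections and returns the preimage directions for a given input:

import numpy as np
from scipy.linalg import null_space

def preimage_basis(W):
    # W: (n, n) full-rank matrix whose rows are the kernels w_i^T.
    n = W.shape[0]
    E = np.zeros((n, n))
    for i in range(n):
        # e_i spans the intersection of the nullspaces of all w_j, j != i,
        # i.e. the 1-D nullspace of W with row i deleted.
        e = null_space(np.delete(W, i, axis=0))[:, 0]
        E[:, i] = e / (W[i] @ e)  # scale so that w_i^T e_i = 1
    return E  # columns e_1, ..., e_n; note W @ E = I (a dual basis)

def preimage_directions(W, x):
    # The preimage of [W x]_+ through x is spanned by exactly those e_j
    # whose coordinates alpha_j = w_j^T x are negative (zeroed by the ReLU).
    alpha = W @ x
    return preimage_basis(W)[:, alpha < 0]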
This is exactly what is illustrated in Figure 2 for the case of 2-dimensional inputs and a network with three levels of two nodes at each level. These piecewise linear manifolds can therefore be considered as fundamental building blocks for mapping input distributions to node outputs at any level of the network.
We can therefore finally formulate the procedure for identifying the preimage of a mapping between successive layers in a rectifying network as: compute the one-dimensional subspaces \lambda_i = \Pi_1 \cap ... \cap \Pi_{i-1} \cap \Pi_{i+1} \cap ... \cap \Pi_n and their basis vectors e_i, express the input element in this basis, and span the preimage manifold by the basis vectors whose coordinates are negative.
1. For w_1^T x > 0, w_2^T x > 0, w_3^T x < 0, the preimage of a point on the plane \Pi_3 consists of all points on the indicated arrow.
2. For w_1^T x > 0, w_2^T x < 0, w_3^T x > 0, the preimage of a point on the plane \Pi_2 consists of all points on the indicated arrow.
3. For w_1^T x > 0, w_2^T x < 0, w_3^T x < 0, the preimage of a point on the intersection of \Pi_2 and \Pi_3 consists of all points on the indicated grey shaded area.
[Figure 3: hyperplanes and unit vectors for a 3-node layer, with arrows and a shaded preimage region.]
Figure 3: Hyperplanes \Pi_1, \Pi_2, \Pi_3 of nullspaces for the transformation kernels and the associated unit vectors e_1, e_2, e_3 from the pairwise intersections (\Pi_2, \Pi_3), (\Pi_1, \Pi_3) and (\Pi_1, \Pi_2) respectively. The preimages of various points in the output are indicated as arrows or the shaded area.
Convolutional networks, where the mappings W consist of convolutional matrices
W = \begin{pmatrix} w^T & 0 & \cdots & 0 \\ 0 & w^T & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & w^T \end{pmatrix}
(each row containing the kernel w^T shifted one step relative to the row above), are the standard realisations of multi-layer networks. Using a heuristic argument, it can be seen that the preimages of these networks are in general "well-behaved" in the sense that, under very general assumptions that are typically valid when training these networks, the preimages generated for a specific activity associated with an image will be semantically equivalent to the given image. For any layer, the accumulated kernel w^{(l)} mapping from the input space will be the concatenation of the specific kernel w'^{(l)} of that layer with all specific kernels from the previous layers, i.e. they are generated by repeated convolutions
w^{(l)} = w'^{(l)} * w'^{(l-1)} * ... * w'^{(1)}
of the specific kernels w'^{(i)} from the previous layers. In general, the kernels associated with lower levels of the network will be associated with features such as edges in various orientations. They are therefore in general not responsive to slowly varying signal intensity inputs. Typically they will therefore have the constant vector (1, 1, ..., 1) as nullvector. Any nullvector of a kernel at a certain level will also be a nullvector of the kernels associated with higher levels. If we study the nullvector (x_1, ..., x_n) associated with the complementary set of kernels when deleting a specific kernel (i), so that no overlap occurs between kernels (i - 1) and (i + 1), we get
(x_1, x_2, ..., x_n) = (a, ..., a, b, ..., b)
i.e. a "step edge" where the step occurs at the location (i) of the deleted kernel. Sharp step edges in an image lead to a high likelihood of the convolution being negative. The preimages associated with an image with step edges therefore consist of the original image with step edges overlaid on already existing edges. This will in general not change the semantic content of the image. At higher levels, negative convolutions and nullspaces will be associated with more complex image structures, and we can expect more complex variations in the preimage set, such as the blurrings associated with the results in Mahendran & Vedaldi (2015; 2016). This suggests that the preimages associated with images in standard convolutional networks are not adversarial but rather fall in the same class as the image in question. It will be a focus of further work to outline in more detail the structure of the preimage class of a certain input image, in order to find out if it represents e.g. variations due to external factors and/or if it contains truly adversarial input exemplars.
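As a small check of the nullvector argument (our sketch; the banded matrix construction and the zero-sum kernel are illustrative), one can verify that constant signals, and step edges placed where a row is deleted, lie in the relevant nullspaces:

import numpy as np

def conv_matrix(w, n):
    # Banded matrix whose rows are the kernel w^T, shifted one step per row.
    k, rows = len(w), n - len(w) + 1
    W = np.zeros((rows, n))
    for i in range(rows):
        W[i, i:i + k] = w
    return W

w = np.array([-1.0, 1.0])                 # a simple zero-sum, edge-like kernel
W = conv_matrix(w, 8)
print(np.allclose(W @ np.ones(8), 0))     # the constant vector is a nullvector

# Deleting row i removes the only row straddling position i; a step edge
# there is then a nullvector of the remaining rows.
i = 3
W_del = np.delete(W, i, axis=0)
step = np.array([2.0] * (i + 1) + [5.0] * (8 - i - 1))  # (a, ..., a, b, ..., b)
print(np.allclose(W_del @ step, 0))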
An interesting possibility would be that the set of preimages can be considered as a model set for any class of images, and that the ultimate goal of training a network is to have the set of preimages of the output nodes of a certain class coincide with the image manifold associated with that class."}, {"section_index": "3", "section_name": "IMPLICATIONS FOR MODELLING IMAGE MANIFOLDS", "section_text": "It is generally believed that a valid model for the distribution of images of various identifiable classes is that they lie on relatively low-dimensional manifolds in the high-dimensional image input space. One can also think about the "general image manifold" as consisting of all possible images that contain identifiable visual structures. The manifolds associated with specific object classes would then be contained in this general manifold. Due to external factors associated with the imaging situation, like viewpoint, illumination, shading etc., the elements of a specific class will be distributed in input image space on the specific object manifold."}, {"section_index": "4", "section_name": "4.2 CONSTRAINTS ON KERNELS FOR AVOIDING THE MIXING OF PREIMAGES", "section_text": "Given a specific assumption about the distribution of classes on the image manifold, we can postulate various properties of the kernels of rectifier networks at various levels that are necessary in order that the preimages associated with elements of different classes do not overlap. If we make the hypothesis that external factors influence different classes in a similar way, we can expect the variation of the class manifolds due to external factors to be similar. Figure 4 depicts a simplified hypothetical case of the input image manifold where the variation due to external factors has been reduced to just one dimension.
[Figure 4: the general image manifold with embedded class manifolds; the labeled axis of variation is "variation due to external factors".]
Figure 4: The "general image manifold" and manifolds of individual object classes due to external-factor and intra-class variation, assuming a high degree of covariance between classes due to external factors.
Figure 5 illustrates the placement of the hyperplanes associated with kernels at various levels in a network that would be necessary in order for the preimages of the classes not to mix with each other. Due to the assumed similarity of the variation of the individual class manifolds, the hyperplanes have to be orthogonal to the external-factor variation. This highly simplified sketch just illustrates the basic idea that the covariation of class manifolds would induce severe constraints on the kernels at various levels in a network.
Figure 5: Hyperplanes associated with kernels at various levels in a rectifier network that will avoid mixing of preimages from different classes.
The covariation of different object classes would have the important implication that the training of the kernels at these levels of the network would benefit from exemplars of multiple classes.
This would give an explanation for the relative success of deep learning methods compared to previous approaches that in general relied on training of individual classes."}, {"section_index": "5", "section_name": "4.3 SEPARATING CLASSES", "section_text": "At the output level of a network, the representations of the various classes have to be disentangled in order for the final fc layer to perform a linear separation of the classes. We have empirically found that classes at the final layer of a network are highly concentrated on a small set of nodes, sometimes individual nodes. The whole network can be seen as a dimensionality-reducing device that starts out in the high-dimensional image input space and ends up in very low-dimensional output layers. This dimensionality reduction is achieved by the ReLU units at various layers, as illustrated in Figure 3, where the preimage, being a linear manifold, is mapped from a higher to a lower dimension. At various levels of the network, different classes can be located on the same manifold, or the same class can be distributed on different manifolds. This means that we need procedures for ReLU networks that split different classes on the same manifold, or merge a class that is represented on several manifolds into one and the same. This can be achieved by ReLU networks, and Figure 6 illustrates for the simple 2-node network how the preimages of classes can be split or merged by proper selection of kernels at successive levels.
[Figure 6: two panels labeled A and B showing class regions before and after the split/merge.]
Figure 6: Illustration of how different classes A and B on the same manifold can be split to different manifolds, and how the same class B on different manifolds can be merged to the same manifold.
In summary, we have discussed how the properties of a network regarding its ability to model input manifolds and achieve efficient classification can be described with the concept of the preimage of the activities of the rectifier network at a certain level. For a specific class, the set of preimages expressed in input image space resulting from the totality of activities at the output of the network constitutes the network's model of the manifold of that class. The efficient training of a network can be seen as obtaining the correct model of every class to be discriminated. The preimage concept allows us to describe how the network is constrained by properties of these manifolds and the requirement of obtaining efficient classification at the end."}, {"section_index": "6", "section_name": "CONCLUSIONS AND FURTHER WORK", "section_text": "We have described a procedure to compute the preimage of the activity at a certain level in a deep network, i.e. the set of inputs to the level that result in the same output activity. By concatenating this procedure we can compute the preimage at input level, i.e. the set of input exemplars that will eventually result in the same activity. Since inputs in the same preimage of any level's activity are irreversibly mixed, they should ideally correspond to exemplars within classes to be discriminated. They therefore constitute building blocks for capturing the manifolds of classes in input space. The fact that deep networks can be seen as tools for efficient low-dimensional piecewise linear approximation has been pointed out in other recent work (Basri & Jacobs, 2016) and is an important component in understanding how deep networks achieve their unprecedented efficiency (Brahma et al., 2016; An et al., 2015).
It will also be interesting to investigate how knowledge of preimages in deep networks can be used
to enhance the efficiency of the training of the network. In order to do this we will have to consider the specialisation to convolutional layers and what it implies. It will also be relevant to investigate the possible nature of adversarial exemplars (Szegedy et al., 2013; Goodfellow et al., 2014; Nguyen et al., 2015) in classification, and whether they are related to the concept of the preimage of activities associated with specific classes.
It will also be the objective of further work to investigate empirically whether the assumed models of image manifolds and their relation to preimages are valid. This will involve the actual computation of preimage manifolds, which essentially involves the computation of nullspaces of weight matrices at various levels in order to define basis vectors for the manifolds. A deeper analysis of the preimage problem will also have to deal with the pooling that takes place in a deep learning network.
We would like to thank Andrew Zisserman for pointing out the work of Mahendran & Vedaldi (2015; 2016). Stefan Carlsson, Hossein Azizpour and Ali Razavian are supported by the School of Computer Science and Communication, KTH. In Razavian's case via a grant held by Atsuto Maki.
Senjian An, Farid Boussaid, and Mohammed Bennamoun. How can deep rectifier networks achieve linear separability and preserve distances? In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pp. 514-523, 2015.
Ronen Basri and David W. Jacobs. Efficient representation of low-dimensional manifolds using deep networks. CoRR, abs/1602.04723, 2016.
Pratik Prabhanjan Brahma, Dapeng Wu, and Yiyuan She. Why deep learning works: A manifold disentanglement perspective. IEEE Transactions on Neural Networks and Learning Systems, 27(10):1997-2008, 2016.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, volume 15, pp. 275, 2011.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014.
Aravindh Mahendran and Andrea Vedaldi. Visualizing deep convolutional neural networks using natural pre-images. International Journal of Computer Vision (IJCV), 2016.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013."}]
B1GOWV5eg
[{"section_index": "0", "section_name": "LEARNING TO REPEAT: FINE GRAINED ACTION REPETITION FOR DEEP REINFORCEMENT LEARNING", "section_text": "Sahil Sharma, Aravind S. Lakshminar\nIndian Institute of Technology, Madra. Chennai. 600036. India\nReinforcement Learning algorithms can learn complex behavioral patterns for se. quential decision making tasks wherein an agent interacts with an environment. and acquires feedback in the form of rewards sampled from it. Traditionally, such algorithms make decisions, i.e., select actions to execute, at every single time step. of the agent-environment interactions. In this paper, we propose a novel frame-. work, Fine Grained Action Repetition (FiGAR), which enables the agent to decide. the action as well as the time scale of repeating it. FiGAR can be used for im-. proving any Deep Reinforcement Learning algorithm which maintains an explicit. policy estimate by enabling temporal abstractions in the action space. We em. pirically demonstrate the efficacy of our framework by showing performance im. provements on top of three policy search algorithms in different domains: Asyn. chronous Advantage Actor Critic in the Atari 2600 domain, Trust Region Policy. Optimization in Mujoco domain and Deep Deterministic Policy Gradients in the. TORCS car racing domain."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) is used to solve goal-directed sequential decision making problem. wherein explicit supervision in the form of correct decisions is not provided to the agent, but only. evaluative feedback in the form of the rewards sampled from the environment. RL algorithms mode. goal-directed sequential decision making problems as Markov Decision Processes (MDP) [Sutton & Barto (1998)]. However, for problems with an exponential or continuous state space, tabular RL al. gorithms that maintain value or policy estimates for every state become infeasible. Therefore, there. is a need to be able to generalize decision making to unseen states. Recent advances in representa. tion learning through deep neural networks provide an efficient mechanism for such generalizatior. [LeCun et al. (2015)]. Such a combination of representation learning through deep neural networks. with reinforcement learning objectives has shown promising results in many sequential decisior. making domains such as the Atari 2600 domain Bellemare et al. (2013); Mnih et al. (2015); Schau et al. (2015); Mnih et al. (2016)], Mujoco simulated physics tasks domain [Todorov et al. (2012) Lillicrap et al. (2015)], the Robosoccer domain [Hausknecht et al. (2016)] and the TORCS domair. [Wymann et al. (2000); Mnih et al. (2016)]. Often, MDP settings consist of an agent interacting. with the environment at discrete time steps. A common feature shared by all the Deep Reinforce. ment Learning (DRL) algorithms above is that they repeatedly execute a chosen action for a fixed. number of time steps k. If at represents the action taken at time step t, then for the said algorithms. a1 = a2 =:::= ak, k+1 = ak+2 = = a2k and in general aik+1 = aik+2 = .:. = a(i+1)k i > 0. Action repetition allows these algorithms to compute the action once every k time steps anc. hence operate at higher speeds, thus achieving real-time performance. This also offers other advan. tages such as smooth action policies. More importantly, as shown in Lakshminarayanan et al. (2017. and Durugkar et al. (2016), macro-actions constituting the same action repeated k times could be. 
interpreted as introducing temporal abstractions in the induced policies, thereby enabling transitions between temporally distant advantageous states."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "[Figure 1: two game-play strips with action arrows annotated with repetition counts: (a) Freeway, (b) Sea Quest.]
Figure 1: FiGAR induces temporal abstractions in learnt policies. The arrows indicate the action executed between the frames, and the numbers depict the number of time steps for which the action was repeated. The thunderbolt corresponds to the firing action; an arrow alongside a thunderbolt corresponds to the action (arrow + fire). In panel (a), the agent learns to execute the down operation (which is equivalent to a no-op in this particular state, in this game) until a traveling car passes by, and then executes temporally elongated actions to complete the task, skillfully avoiding the red car in the 7th frame. In panel (b), the agent catches a glimpse of a pink opponent towards the bottom right in the 2nd frame and executes temporally elongated actions to intercept and kill it (in the 6th frame).
The time scale for action repetition has largely been static in DRL algorithms until now (Mnih et al., 2015; 2016; Schaul et al., 2015). Lakshminarayanan et al. (2017) are the first to explore dynamic time scales for action repetition in the DRL setting, and show that it leads to significant improvement in performance on a few Atari 2600 games. However, they choose only two time scales, and their experiments are limited to a few representative games. Moreover, the method is limited to tasks with a discrete action space.
We propose FiGAR, a framework that enables any DRL algorithm, regardless of whether its action space is continuous or discrete, to learn temporal abstractions in the form of temporally extended macro-actions. FiGAR uses a structured and factored representation of the policy whereby the policy for choosing the action is decoupled from that for the action repetition selection. Note that deciding actions and the action repetitions independently enables us to find temporal abstractions without blowing up the action space, unlike Vezhnevets et al. (2016) and Lakshminarayanan et al. (2017). The contribution of this work is twofold. First, we propose a generic extension to DRL algorithms by coming up with a factored policy representation for temporal abstractions (see Figure 1 for sequences of macro-actions learnt in 2 Atari 2600 games). Second, we empirically demonstrate FiGAR's efficiency in improving policy gradient DRL algorithms, with improvements in performance over several domains: 31 Atari 2600 games with Asynchronous Advantage Actor Critic (Mnih et al., 2016), 5 tasks in the MuJoCo simulated physics tasks domain with Trust Region Policy Optimization (Schulman et al., 2015), and the TORCS domain with Deep Deterministic Policy Gradients (Lillicrap et al., 2015)."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Our framework is centered on a very general idea of only deciding when necessary. There have been similar ideas outside the RL domain. For instance, Gu et al.
(2016) and Satija & Pineau (2016) explore real-time neural machine translation, where the action at every time step is to decide whether to output a new token in the target language or not, based on the current context.
The Transition Point Dynamic Programming (TPDP) algorithm (Buckland & Lawrence, 1994) is a modification to the tabular dynamic programming paradigm that can reduce the learning time and memory required for control of continuous stochastic dynamic systems. This is done by determining a set of transition points in the underlying MDP; the policy changes only at these transition-point states. The algorithm learns an optimal set of transition-point states by using a variant of Q-Learning to evaluate whether or not to add/delete a particular state from the set of transition points. FiGAR learns the transition points in the underlying MDP on the fly, with generalization across the state space, unlike TPDP, which is tabular and infeasible for large problems.
The Dynamic Frameskip Deep Q-Network (Lakshminarayanan et al., 2017) proposes to use multiple time scales of action repetition by augmenting the Deep Q-Network (DQN) (Mnih et al., 2015) with separate streams of the same primitive actions corresponding to each time scale. This way, the time scale of action repetition is dynamically learned. Although this framework leads to a significant improvement in performance on a few Atari 2600 games, it cannot support many time scales due to a potential explosion of the action space, and it is restricted to discrete action spaces. Durugkar et al. (2016) also explore learning macro-actions composed of the same action repeated for different time scales. However, their framework is limited to discrete action spaces, and the performance improvements are not significant.
Learning temporally extended actions and abstractions has been of interest in RL for a long time. Vezhnevets et al. (2016) propose the Strategic Attentive Writer (STRAW) for learning macro-actions and building dynamic action plans directly from reinforcement learning signals. Instead of outputting a single action after each observation, STRAW maintains a multi-step action plan. The agent periodically updates the plan based on observations and commits to the plan between the re-planning steps. Although the STRAW framework represents a more general temporal abstraction than FiGAR, FiGAR should be seen as a framework that can complement STRAW, whereby the decision to repeat could be made hierarchically at the plan and base-action levels.
FiGAR is a framework with a structured policy representation in which the time scale of execution can be thought of as parameterizing the chosen action. The only other work that explores parameterized policies in DRL is Hausknecht & Stone (2016), where discrete actions are parameterized by continuous values. In our case, discrete/continuous actions are parameterized by discrete values. The state spaces in Atari are also more sophisticated than the kind explored in Hausknecht et al. (2016).
FiGAR is also very naturally connected to the Semi-MDP (SMDP) framework. SMDPs are MDPs with durative actions. The assumption in SMDPs is that actions take some holding time to complete (Duff, 1995; Mahadevan et al., 1997; Dietterich, 2000).
Typically, they are modeled with two distributions: one corresponding to the next-state transition, and the other corresponding to the holding time, which denotes the number of time steps between the current action from the policy and the next action from the policy. The reward over the entire holding time of an action is the credit assigned for picking the action. In our framework, we naturally have durative actions due to the policy structure, where the decision consists of both the choice of the action and the time scale of its execution. Therefore, we convert the original MDP to an SMDP trivially. In fact, we give more structure to the SMDP, because we are clear that we repeat the chosen action during the holding time, while what happens during the holding time is not specified in the SMDP framework. One can think of the part of the policy that outputs the probability distribution over the time scales as a holding-time distribution. Therefore, our framework naturally fits into the SMDP definition, with the action repetition rate characterizing the holding time. We also sum up the rewards over the holding time with an appropriate discounting factor, as in the SMDP framework.
Actor-critic algorithms execute policy gradient updates by maintaining parametric estimates for the policy \pi_{\theta_a}(a|s) and the value function V_{\theta_v}(s) (Sutton & Barto, 1998). The value function estimates are used to reduce the variance in the policy gradient updates.
Asynchronous Advantage Actor Critic (A3C) (Mnih et al., 2016) learns policies based on asynchronous n-step returns. The k learner threads execute k copies of the policy asynchronously, and the parameter updates are sent to a central parameter server at regular intervals. This ensures that temporal correlations are broken between subsequent updates, since the different threads possibly explore different parts of the state space in parallel. The objective function for policy improvement in A3C is
L(\theta_a) = \log \pi_{\theta_a}(a_t | s_t) (G_t - V(s_t))
where G_t is an estimate for the return at time step t.
The A3C algorithm uses n-step returns for estimating G_t, which is a biased estimate of Q(s_t, a_t). Hence one can think of G_t - V(s_t) as an estimate of A(s_t, a_t), the advantage of taking action a_t in state s_t. The n-step return estimate is
G_t = \sum_{j=t}^{t+n-1} \gamma^{j-t} r_j + \gamma^{n} V(s_{t+n})
The policy and value functions are parameterized by deep neural networks.
TRPO (Schulman et al., 2015) is a policy optimization algorithm. Constrained optimization of a surrogate loss function is proposed, with theoretical guarantees for monotonic policy improvement. The TRPO surrogate loss function L for potential next policies \pi_\theta is
L_{\theta_{old}}(\theta) = \eta(\pi_{\theta_{old}}) + \sum_{s} \rho^{\pi_{\theta_{old}}}(s) \sum_{a} \pi_{\theta}(a|s) A_{\pi_{\theta_{old}}}(s, a)
where \theta_{old} are the parameters of the policy \pi_{\theta_{old}} and \theta are the parameters of \pi_\theta. This surrogate loss function is optimized subject to the constraint
\bar{D}_{KL}(\pi_{\theta_{old}}, \pi_{\theta}) \leq \delta
which ensures that the policy improvement can be done in non-trivial step sizes and, at the same time, that the new policy does not deviate much from the current policy, due to the KL-divergence constraint."}, {"section_index": "5", "section_name": "3.3 DEEP DETERMINISTIC POLICY GRADIENTS", "section_text": "According to the Deterministic Policy Gradient (DPG) Theorem (Lever, 2014), the gradient of the performance objective J of the deterministic policy \mu in continuous action spaces with respect to the policy parameters \theta is given by
\nabla_{\theta} J = \int_{S} \rho^{\mu}(s) \nabla_{\theta} \mu_{\theta}(s) \nabla_{a} Q^{\mu}(s, a)|_{a=\mu_{\theta}(s)} \, ds = E_{s \sim \rho^{\mu}} [\nabla_{\theta} \mu_{\theta}(s) \nabla_{a} Q^{\mu}(s, a)|_{a=\mu_{\theta}(s)}]
for an appropriately defined performance objective J. The DPG model built according to this theorem consists of an actor, which outputs an action vector in the continuous action space, and a critic Q(s, a), which evaluates the action chosen at a state. The DDPG algorithm (Lillicrap et al., 2015) extends the DPG algorithm by introducing non-linear neural network based function approximators for the actor and critic."}, {"section_index": "6", "section_name": "4 FIGAR: FINE GRAINED ACTION REPETITION", "section_text": "FiGAR provides a DRL algorithm with the ability to model temporal abstractions by augmenting it with the ability to predict the number of time steps for which an action chosen for execution is to be repeated. This prediction is conditioned on the current state in the environment.
The FiGAR framework can be used to extend any DRL algorithm (say Z) which maintains an explicit policy. Let Z' denote the extension of Z under FiGAR. Z' has two independent, decoupled policy components: the policy \pi_{\theta_a} for choosing actions and the policy \pi_{\theta_x} for choosing action repetitions. Algorithm 1 describes the generic framework for deriving DRL algorithm Z' from algorithm Z. Let W stand for the set of all action repetitions that Z' is able to perform. In traditional DRL algorithms, W = {c}, where c is a constant; the action repetition is static and fixed. In FiGAR, the set of action repetitions from which Z' can choose is W = {w_1, w_2, ..., w_{|W|}}. The central idea behind FiGAR is that the objective function used to update the parameters \theta_a of \pi_{\theta_a} maintained by Z is also used to update the parameters \theta_x of the action repetition policy \pi_{\theta_x} of Z' (illustrated by the sharing of L in Algorithm 1).
Algorithm 1:
1: function MAKEFIGAR(DRL algorithm Z, action repetition set W)
2:   s_t <- state at time t
3:   a_t <- action taken in s_t at time t
4:   pi_a <- action policy of Z
5:   f_{theta_a}(s_t) <- action network for realizing action policy pi_a
6:   L(pi_a, s_t, a_t) <- Z's objective function for improving pi_a
7:   pi_x <- construct action repetition policy for FiGAR-Z
8:   f_{theta_x}(s_t) <- repetition network with output of size |W| for action repetition policy pi_x
9:   L(pi_x, s_t, a_t) <- L evaluated at pi_x
10:  T(s_t, a_t) <- L(pi_x, s_t, a_t) * L(pi_a, s_t, a_t)   // total loss
11:  return T, f_{theta_a}, f_{theta_x}
In the first sub-section, we describe how Z' operates. In the next two sub-sections, we describe the instantiations of FiGAR extensions for 3 policy gradient DRL algorithms: A3C, TRPO and DDPG."}, {"section_index": "7", "section_name": "4.1 HOW FIGAR OPERATES", "section_text": "The following procedure describes how a FiGAR variant Z' navigates the MDP that it is solving:
1. In the very first state s_0 seen by Z', it predicts a tuple (a_0, x_0) of the action to execute and the number of time steps for which to execute it. a_0 is decided based on \pi_{\theta_a}(s_0), whereas x_0 is decided based on \pi_{\theta_x}(s_0). Each such tuple is known as an action decision.
2. We denote by s_j the state of the agent after j such action decisions have been made. Similarly, x_j and a_j denote the action repetition and the action chosen after j such action decisions. Note that x_j ∈ {w_1, w_2, ..., w_{|W|}}, the set of all allowed action repetitions.
3. From time step 0 until x_0, Z' executes a_0.
4. At time step x_0, Z' again decides, based on the current state s_1 and policy components (\pi_{\theta_a}(s_1), \pi_{\theta_x}(s_1)), the tuple of the action to execute and the number of times for which to execute it, (a_1, x_1).
5. It can be seen that, in general, if Z' executes action a_k for x_k successive time steps, the next action is decided at time step t = \sum_{m=0}^{k} x_m on the basis of (\pi_{\theta_a}(s_{k+1}), \pi_{\theta_x}(s_{k+1})), where s_{k+1} is the state seen at time step t.
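The control flow above amounts to a simple loop. A minimal sketch (ours; the env and policy interfaces are assumed, not from the paper) could look like this:

import numpy as np

def figar_rollout(env, pi_a, pi_x, W, gamma=0.99, max_steps=1000):
    # pi_a(s) -> probabilities over actions; pi_x(s) -> probabilities over |W| repetitions.
    s, t, ret = env.reset(), 0, 0.0
    while t < max_steps:
        a = np.random.choice(len(pi_a(s)), p=pi_a(s))   # action decision
        x = W[np.random.choice(len(W), p=pi_x(s))]      # repetition decision
        for _ in range(x):                              # repeat a for x time steps
            s, r, done = env.step(a)
            ret += (gamma ** t) * r
            t += 1
            if done or t >= max_steps:
                return ret
    return ret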
"}, {"section_index": "8", "section_name": "4.2 FIGAR-A3C", "section_text": "A3C uses f_{\theta_a}(s_j) and f_{\theta_v}(s_j), which represent the policy \pi(a|s_j) and the value function V(s_j) respectively. \pi(a|s_j) is a vector of size equal to the action space of the underlying MDP, while V(s_j) is a scalar. FiGAR extends the A3C algorithm as follows:
1. With s_j defined as in the previous sub-section, in addition to f_{\theta_a}(s_j) and f_{\theta_v}(s_j), FiGAR-A3C defines a neural network f_{\theta_x}(s_j). This neural network outputs a |W|-dimensional vector representing the probability distribution over the elements of the set W. The time scale sampled from this multinomial distribution decides how long the action decided with f_{\theta_a}(s_j) is repeated. The actor is now composed of both f_{\theta_a}(s_j) (action network) and f_{\theta_x}(s_j) (repetition network).
2. The objective function for the actor becomes
L(\theta_a, \theta_x) = (\log f_{\theta_a}(a|s_j) + \log f_{\theta_x}(x|s_j)) A(s_j, a, x)
where A(s_j, a, x) represents the advantage of executing action a for x time steps at state s_j. This implies that for FiGAR-A3C the combination operator * defined in Algorithm 1 is in fact scalar addition. The objective function for the critic is the same, except that the estimated value function used in the target for the critic is changed to the n-decision-step return
\sum_{k=j}^{n-1} \gamma^{y_k - y_j} r_k + \gamma^{y_n - y_j} V(s_n)
where we define y_0 = 0, y_k = y_{k-1} + x_k for k \geq 1, and action a_k was repeated x_k times when state s_k was encountered. Note that the return used in the target is based on n decision steps - steps at which a potential change in the executed action takes place - not on n time steps.
Note that point 2 above implies that the action space has been extended by W and has a dimension of |A| + |W|. It is only because of this factored representation of the FiGAR policy that the number of parameters does not blow up. If one were to extend the action space in a naive way, by coupling the actions and the action repetitions, one would suffer the kind of action-space blow-up seen in Lakshminarayanan et al. (2017) and Vezhnevets et al. (2016), wherein to control with respect to |W| different action repetition levels (or |W|-length policy plans in the case of STRAW), one would need to model |A| x |W| actions or action-values, which would blow up the final layer size |W| times.
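A minimal PyTorch-style sketch of this factored actor loss (our illustration; the network outputs and the advantage estimate are assumed to be computed elsewhere):

import torch
import torch.nn.functional as F

def figar_a3c_actor_loss(action_logits, repeat_logits, a, x_idx, advantage):
    # Factored policy: one softmax over |A| actions, one over |W| repetitions.
    log_pi_a = F.log_softmax(action_logits, dim=-1)[a]
    log_pi_x = F.log_softmax(repeat_logits, dim=-1)[x_idx]
    # The shared advantage ties the two policy components together; the
    # combination operator "*" of Algorithm 1 is scalar addition here.
    return -(log_pi_a + log_pi_x) * advantage.detach()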
4.4 FIGAR-DDPG

In this subsection, we present an extension of DDPG under the FiGAR framework. DDPG consists of f_θa(s_j), which denotes a deterministic policy μ(s_j) and is a vector of size equal to the action space of the underlying MDP, and f_θc(s_j, a_j), which denotes the critic network, whose output is a single number: the estimated state-action value function Q(s_j, a_j). The FiGAR framework extends the DDPG algorithm as follows:

1. f_θx is introduced, similar to FiGAR-A3C. This implies that the complete policy for FiGAR-DDPG, (π_θa, π_θx), is computed by the tuple of neural networks (f_θa, f_θx). Similar to DDPG [Lillicrap et al. (2015)], FiGAR-DDPG has no loss function for the actor. The actor receives gradients from the critic. This is because the actor's proposed policy is directly fed to the critic, and the critic provides the actor with gradients which the proposed policy follows for improvement. In FiGAR-DDPG the total policy is a concatenation of the vectors a and x. Hence the gradients for the total policy are also simply the concatenation of the gradients for the policies a and x.
2. To ensure sufficient exploration, the exploration policy for action repetition is an ε-greedy version of the behavioral action repetition policy. The action part of the policy, f_θa(s_j), continues to use temporally correlated noise for exploration, generated by an Ornstein-Uhlenbeck process (see Lillicrap et al. (2015) for details).
3. The critic is modeled by the equation:

Q(s_j, a_j, x_j) = f_θc(s_j, f_θa(s_j), f_θx(s_j))
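A minimal sketch of the exploration scheme in point 2 follows: Ornstein-Uhlenbeck noise on the continuous action head and an ε-greedy choice over the repetition head. The noise parameters are illustrative defaults, and this is one simple reading of the scheme (Appendix D additionally samples from f_θx with probability 1 − ε).

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_step(noise, theta=0.15, sigma=0.2):
    """One Euler step of an Ornstein-Uhlenbeck process (temporally correlated action noise)."""
    return noise - theta * noise + sigma * rng.normal(size=noise.shape)

def ddpg_behaviour(action, rep_probs, reps, noise, eps):
    """Noisy continuous action plus an epsilon-greedy choice over repetitions."""
    noisy_action = action + noise
    if rng.random() < eps:
        x = reps[rng.integers(len(reps))]    # explore the repetition space
    else:
        x = reps[int(np.argmax(rep_probs))]  # exploit the repetition head
    return noisy_action, x
```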
The experiments are designed to understand the answers to the following questions:
• Does FiGAR discover temporal abstractions, i.e., what do the learned action repetition distributions look like?
• Does FiGAR improve the performance of the underlying policy gradient algorithms?
• Is FiGAR able to learn control on several different kinds of Action Repetition sets W?

[Figure 2 here: a bar chart of the per-game percentage improvement of FiGAR-A3C over A3C; only scattered bar labels survive extraction.]
Figure 2: Percentage Improvement of FiGAR-A3C over A3C for Atari 2600

In the next three sub-sections, we experiment with the simplest possible action repetition set, W = {1, 2, ..., |W|}. In the fourth sub-section, we study the effects that changing the action repetition set W has on the policies learnt.

5.1 FIGAR-A3C ON ATARI 2600

This set of experiments was performed with FiGAR-A3C on the Atari 2600 domain. The hyper-parameters were tuned on a subset of games (Beamrider, Breakout, Pong, Seaquest and Space Invaders) and kept constant across all games.

|W| is perhaps the most important hyper-parameter and depicts our confidence in the ability of a DRL agent to predict the future. Such a choice has to depend on the domain in which the DRL agent is operating. We only wanted to demonstrate the ability of FiGAR to learn temporal abstractions, and hence, instead of tuning for an optimal |W|, it was chosen to be 30, arbitrarily. The specific set of time scales we choose is {1, 2, 3, ..., 30}. FiGAR-A3C as well as A3C were trained for 100 million decision steps. They were evaluated in terms of the final policy learnt. Treating the score obtained by the A3C algorithm as the baseline (b), we calculated the percentage improvement (i) offered by FiGAR-A3C (f) as i = (f − b) / |b|. Figure 2 plots this metric versus the game names. The improvement for Enduro and Atlantis is staggering: more than 900-fold and 35-fold respectively. Figure 2's y-axis has been clipped at 1000% to make it more presentable. Appendix A contains the experimental details and the raw scores obtained by both methods. Appendix B contains experiments on validating our setup.

To answer the first question we posed, experiments were conducted to record the percentage of times that a particular action repetition was chosen. Figure 3 presents the action repetition distribution across a selection of games, chosen arbitrarily. The values have been rounded to 2 decimal places and hence do not sum to 1 in each game. Each game was played for 10 episodes, using the same policy used to calculate average scores in Figure 2.

[Figure 3 here: stacked bars of the distribution of action repetitions (bins 1–3 through 28–30) for Atlantis, Crazy Climber, Demon Attack, Pong, Freeway, Tutankham and Wizard of Wor; the underlying numbers appear in Appendix B (Table 7).]
Figure 3: Evaluation of Action Repetition Control for Atari 2600. See Appendix B (Table 7) for an expanded version of the figure.

The two tables together show that FiGAR-A3C generally prefers lower action repetition, but does come up with temporal abstractions in policy space (especially in games like Pong and Crazy Climber). Some such abstractions have been demonstrated in Figure 1. Such temporal abstractions do not always help general gameplay (Demon Attack). However, as can be seen from Figure 2, FiGAR-A3C outperforms A3C in 26 out of 33 games.

One could potentially think of FiGAR as a deep exploration framework, by using the learnt policy π_θa for predicting actions at every time step and completely discarding the action-repetition policy π_θx at evaluation time. Appendix F contains an empirical argument against such a usage of FiGAR and demonstrates that the temporal abstractions encoded by π_θx are indeed important for gameplay performance.
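As a concrete illustration of how the Figure 3 / Table 7 statistics can be gathered from evaluation rollouts (the data-collection interface is assumed, and the binning mirrors the tables):

```python
import numpy as np

def repetition_histogram(decisions, bins=((1, 3), (4, 6), (7, 9), (10, 12), (13, 15),
                                          (16, 18), (19, 21), (22, 24), (25, 27), (28, 30))):
    """Fraction of action decisions whose repetition length x falls in each bin.

    decisions: list of (action, x) tuples collected over evaluation episodes.
    Returns fractions in the 1-3 ... 28-30 binning used by Figure 3 / Table 7.
    """
    xs = np.array([x for _, x in decisions])
    counts = np.array([np.sum((xs >= lo) & (xs <= hi)) for lo, hi in bins], dtype=float)
    return counts / counts.sum()
```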
5.2 FIGAR-TRPO ON MUJOCO TASKS

In this sub-section we demonstrate that FiGAR-TRPO can learn to solve the MuJoCo simulated physics tasks reasonably successfully. Similar to FiGAR-A3C, |W| is chosen to be 30, arbitrarily.

The full policy (f_θa, f_θx) is trained jointly. The policies learnt after each TRPO optimization step (details in Appendix C) are compared to the current best known policy to arrive at the overall best policy. This best policy was then evaluated on 100 episodes to arrive at the average scores contained in Table 1; the results in this sub-section are for this best policy. Table 1 compares the performance of TRPO and FiGAR-TRPO. The number in brackets is the average action repetition chosen. As can be seen from the table, FiGAR either learns policies which are much faster to execute, albeit at the cost of a slight loss in optimality, or it learns policies similar to the non-repetition case, with performance competitive with the baseline algorithm. TRPO is a difficult baseline on the MuJoCo tasks domain. On the whole, FiGAR outperforms TRPO in 3 out of 5 domains, although the gains are marginal in most tasks. Appendix C contains the experimental details. A video showing FiGAR-TRPO's learned behavior policies can be found at http://youtu.be/JiaO2tBtH-k.

Table 1: Evaluation of FiGAR on MuJoCo
Domain                      FiGAR-TRPO        TRPO
Ant                         947.06 (28.35)    -161.93 (1.00)
Hopper                      3038.63 (1.00)    3397.58 (1.00)
Inverted Pendulum           1000.00 (1.00)    971.66 (1.00)
Inverted Double Pendulum    8712.46 (1.01)    8327.75 (1.00)
Swimmer                     337.48 (10.51)    364.55 (1.00)

5.3 FIGAR-DDPG ON TORCS

FiGAR-DDPG was trained and tested on the TORCS domain. |W| was chosen to be 15, arbitrarily. FiGAR-DDPG manages to complete the race task flawlessly and finishes 20 laps of the circuit, after which the simulator stops. The total reward obtained by FiGAR-DDPG was 557929.68, as against 59519.70 obtained by DDPG. We also observed that FiGAR-DDPG learnt policies which were smoother than those learnt by DDPG. A video showing the learned driving behavior of the FiGAR-DDPG agent can be found at https://youtu.be/dx8J-sF-wX4. See Appendix D for experimental and architectural details.

5.4 EFFECT OF ACTION REPETITION SET ON FIGAR

This sub-section answers the third question raised at the beginning of this section in the affirmative. We demonstrate that there is nothing sacrosanct about the set of action repetitions W = {1, 2, ..., 30} on which FiGAR-A3C performed well, and that the good performance carries over to other action repetition sets.

To demonstrate the generality of FiGAR with respect to W, we chose a wide variety of action repetition sets W, and trained and evaluated FiGAR-A3C variants which learn to repeat with respect to their respective action repetition sets. Table 3 describes the various FiGAR variants considered for these experiments in terms of their action repetition set W.

Note that the hyper-parameters of the various variants of FiGAR-A3C were not tuned; rather, the same ones obtained by tuning for FiGAR-30 were used. Table 2 contains a comparison of the raw scores obtained by the various FiGAR-A3C variants against the A3C baseline.
It is clear that FiGAR is able to learn over any action repetition set W, and the performance does not fall by a lot even when hyper-parameters tuned for FiGAR-30 are used for other variants. Appendix E contains additional graphs showing the evolution of average game scores against the number of training steps, as well as a bar graph visualization of Table 2.

Table 2: Comparison of FiGAR-A3C variants to the A3C baseline for 3 games: Sea Quest, Space Invaders and Asterix. See Appendix E (Figure 7) for a bar graph visualization of this table.
Variant        Seaquest    Space Invaders    Asterix
FiGAR-50       22904.50    1929.50           7730.00
FiGAR-30-50    17103.60    1828.90           11090.00
FiGAR-P        20005.40    2047.40           10937.00
FiGAR-30       18076.90    2251.95           11949.00
FiGAR-20-30    14683.00    2310.70           8182.00
FiGAR-20       19148.50    1929.50           7730.00
Baseline       2769.40     1268.75           2364.00

Table 3: Description of FiGAR-A3C variants in terms of action repetition set W.
FiGAR-20       W = {1, 2, ..., 19, 20}
FiGAR-30       W = {1, 2, ..., 29, 30}
FiGAR-50       W = {1, 2, ..., 49, 50}
FiGAR-30-50    W = {30 numbers drawn randomly from {1, 2, ..., 50} w/o replacement}
FiGAR-20-30    W = {20 numbers drawn randomly from {1, 2, ..., 30} w/o replacement}
FiGAR-P        W = {p | p < 50, p ∈ P (the set of all primes)}

We propose a light-weight framework (FiGAR) for improving current Deep Reinforcement Learning algorithms for policy optimization, whereby temporal abstractions are learned in the policy space. The framework is generic and applicable to DRL algorithms concerned with policy gradients for continuous as well as discrete action spaces, such as A3C, TRPO and DDPG. FiGAR maintains a structured policy wherein the action probability distribution is augmented with a probability distribution for choosing the time scale of repeating the chosen action. Our results demonstrate that FiGAR can be used to significantly improve current policy gradient and actor-critic algorithms, thereby learning better control policies across several domains by discovering optimal sequences of temporally elongated macro-actions.

Atari, TORCS and MuJoCo represent environments which are largely deterministic, with a minimal degree of stochasticity in the environment dynamics. In such highly deterministic environments, we would expect FiGAR agents to build a latent model of the environment dynamics and hence be able to execute large action repetitions without dying. This is exactly what we see in a highly deterministic environment like the game Freeway. Figure 1 (a) demonstrates that the chicken is able to judge the speed of the approaching cars appropriately and cross the road in a manner which takes it to the goal without colliding with the cars, while avoiding them narrowly.

Having said that, the ability to stop an action repetition (or a macro-action) would certainly be very important in general, especially in stochastic environments. In our setup, we do not consider the ability to stop executing a macro-action that the agent has committed to. However, this is a necessary skill in the event of unexpected changes in the environment while executing a chosen macro-action. Thus, stop and start actions for stopping and committing to macro-actions can be added to the basic dynamic time scale setup for more robust policies. We believe the modification could work for more general stochastic worlds like Minecraft, and we leave it for future work.
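As a hedged illustration of the stop-action idea floated above (a possible extension, not something implemented in the paper), a committed macro-action could be made interruptible by checking a stop condition at every primitive step; `env` and `should_stop` are hypothetical interfaces.

```python
def run_interruptible_macro(env, state, action, x, should_stop):
    """Execute `action` for up to `x` steps, but allow an early stop.

    should_stop(state) stands in for a learned stop policy that would
    terminate a committed macro-action when the environment changes unexpectedly.
    """
    for step in range(x):
        if step > 0 and should_stop(state):
            break  # abandon the remaining repetitions and re-decide
        state, reward, done = env.step(action)
        if done:
            break
    return state
```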
ACKNOWLEDGMENTS

We used the open source implementation of A3C at https://github.com/miyosuda/async_deep_reinforce. We thank Volodymyr Mnih for giving valuable hyper-parameter information. We thank Aravind Rajeswaran (University of Washington) for very helpful discussions regarding, and feedback on, the MuJoCo domain tasks. The TRPO implementation was a modification of https://github.com/aravindr93/robustRL. The DDPG implementation was a modification of https://github.com/yanpanlau/DDPG-Keras-Torcs. We thank ILDS (http://web.iitm.ac.in/ilds/) for the compute resources we used for running the A3C experiments.

REFERENCES

Ishan P. Durugkar, Clemens Rosenbaum, Stefan Dernbach, and Sridhar Mahadevan. Deep reinforcement learning with macro-actions. arXiv preprint arXiv:1606.04615, 2016.

Matthew Hausknecht and Peter Stone. Deep reinforcement learning in parametrized action space. In 4th International Conference on Learning Representations, 2016.

Matthew Hausknecht, Prannoy Mupparaju, Sandeep Subramanian, Shivaram Kalyanakrishnan, and Peter Stone. Half field offense: An environment for multiagent learning and ad hoc teamwork. In AAMAS Adaptive Learning Agents (ALA) Workshop, May 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, 2014.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, February 2015.

Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, pp. 253-279, June 2013.

Kenneth M. Buckland and Peter D. Lawrence. Transition point dynamic programming. In Advances in Neural Information Processing Systems, 1994.

Aravind S. Lakshminarayanan, Sahil Sharma, and Balaraman Ravindran. Dynamic action repetition for deep reinforcement learning. In AAAI, 2017.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.

Sridhar Mahadevan, Nicholas Marchalleck, Tapas K. Das, and Abhijit Gosavi. Self-improving factory simulation using continuous-time average-reward reinforcement learning. 1997.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.

John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015.
Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.

Bernhard Wymann, Eric Espie, Christophe Guionneau, Christos Dimitrakakis, Remi Coulom, and Andrew Sumner. TORCS, the open racing car simulator. Software available at http://torcs.sourceforge.net, 2000.

Harsh Satija and Joelle Pineau. Simultaneous machine translation using deep reinforcement learning. In ICML 2016 Workshop, 2016.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Alexander Vezhnevets, Volodymyr Mnih, Simon Osindero, Alex Graves, Oriol Vinyals, John Agapiou, et al. Strategic attentive writer for learning macro-actions. In Advances in Neural Information Processing Systems, pp. 3486-3494, 2016.

APPENDIX A: EXPERIMENTAL SETUP FOR FIGAR-A3C

We used the LSTM variant of the A3C algorithm [Mnih et al. (2016)] for the FiGAR-A3C experiments. The async-rmsprop algorithm [Mnih et al. (2016)] was used for updating parameters, with the same hyper-parameters as in Mnih et al. (2016). The initial learning rate used was 10^-3, and it was linearly annealed to 0 over 100 million steps. The n used in n-step returns was 20. Entropy regularization was used to encourage exploration, similar to Mnih et al. (2016). The β for entropy regularization was found to be 0.02 after hyper-parameter tuning, both for the action policy f_θa and the action repetition policy f_θx.

Table 4: Game Playing Experiments on Atari 2600
Name              FiGAR-A3C                               A3C
Alien             3138.50 (2864.91, 3412.08)              2709.20 (2499.41, 2918.9)
Amidar            1465.70 (1406.18, 1525.21)              1028.34 (1003.11, 1053.5)
Assault           1936.37 (1855.85, 2016.88)              1857.61 (1787.19, 1928.0)
Asterix           11949.00 (11095.62, 12802.37)           2364.00 (2188.12, 2539.8)
Atlantis          6330600.00 (6330600.00, 6330600.00)     163660.00 (-46665.38, 37)
Bank Heist        3364.60 (3342.10, 3387.09)              1731.40 (1727.94, 1734.8)
Beam Rider        2348.78 (2152.19, 2545.36)              2189.96 (2062.89, 2317.0)
Bowling           30.09 (29.74, 30.43)                    16.88 (15.23, 18.52)
Breakout          814.50 (789.97, 839.02)                 555.05 (474.89, 635.20)
Centipede         3340.35 (3071.70, 3608.99)              3293.33 (2973.14, 3613.5)
Chopper Command   3147.00 (2851.02, 3442.97)              4969.00 (4513.12, 5424.8)
Crazy Climber     154177.00 (148042.35, 160311.64)        166875.00 (161560.18, 17)
Demon Attack      7499.30 (7127.85, 7870.74)              26742.75 (22665.02, 3082)
Enduro            707.80 (599.16, 816.43)                 0.77 (0.45, 1.09)
Freeway           33.14 (33.01, 33.26)                    17.68 (17.41, 17.94)
Frostbite         309.60 (308.81, 310.38)                 306.80 (304.67, 308.92)
Gopher            12845.40 (11641.88, 14048.91)           9360.60 (8683.72, 10037)
James Bond        478.0 (448.78, 507.21)                  285.5 (268.62, 302.37)
Kangaroo          48.00 (29.51, 66.48)                    26.00 (12.81, 39.18)
Koolaid           1669.00 (1583.58, 1754.42)              1136.0 (1065.36, 1206.64)
Krull             1316.10 (1223.23, 1408.96)              1025.00 (970.77, 1079.22)
Kung Fu Master    40284.00 (38207.21, 42360.78)           35717.00 (34288.21, 3714)
Name this game    1752.60 (1635.77, 1869.42)              12100.80 (11682.64, 125)
Phoenix           5106.10 (5056.43, 5155.76)              5384.10 (5178.12, 5590.0)
Pong              20.32 (20.17, 20.46)                    19.46 (19.32, 19.59)
Q-bert            18922.50 (17302.94, 20542.05)           25840.25 (25528.49, 2615)
Road Runner       22907.00 (22283.32, 23530.67)           59540.00 (58835.01, 602)
Sea quest         18076.90 (16964.16, 19189.63)           2799.60 (2790.22, 2808.9)
Space Invaders    2251.95 (2147.13, 2356.76)              1268.75 (1179.25, 1358.2)
Star Gunner       51269.00 (48629.42, 53908.57)           39835.00 (36365.24, 4330)
Time Pilot        11865.00 (11435.25, 12294.74)           8969.00 (8595.57, 9342.4)
Tutankhamun       276.95 (274.22, 279.67)                 252.82 (241.38, 264.25)
Wizard of Wor     6688.00 (5783.48, 7592.51)              3230.00 (2355.75, 4104.2)
Since the Atari 2600 games tend to be quite complex, jointly learning a factored policy from random weight initializations proved to be less optimal as compared to a more stage-wise approach. The approach we followed for training FiGAR-A3C was to first train the networks using the regular A3C objective function. This stage trains the action part of the policy, f_θa, and the value function, f_θc, for a small number of iterations with a fixed action repetition rate (in this stage, gradients are not back-propagated for f_θx, and all action repetition predictions made are discarded). The next stage was to then train the entire architecture (f_θa, f_θx, f_θc) jointly. This kind of non-stationary training objective ensures that we have a good value function estimator f_θc and a good action policy estimator f_θa before we start training the full policy (f_θa, f_θx) jointly. Every time FiGAR decides to execute action a_t for x_t time steps, we say one step of action selection has been made. Since the number of time steps for which an action is repeated is variable, training time is measured in terms of action selections carried out. The first stage of the training was executed for 20 million (a hyper-parameter we found by doing grid search) action selections (called steps from here onwards), and the next stage was executed for 80 million steps. In comparison, the baseline ran for 100 million steps (action selections).

Since a large entropy regularization was required to explore both components (f_θa and f_θx) of the policy space, this also ends up meaning that the policies learnt are more diffused than one would like them to be. Evaluation was done after every 1 million steps and followed a strategy similar to ε-greedy. With probability ε = 0.1, the action and action repetition were drawn from the output distributions (f_θa and f_θx respectively), and with probability 1 − ε the action (and, independently, the action repetition) with maximum probability was selected. This evaluation was done for 10 episodes or 100000 steps, whichever was smaller, to arrive at an average score.

Table 4 contains the raw scores obtained by the final FiGAR-A3C and A3C policies on 33 Atari 2600 games. The numbers inside the brackets depict the confidence interval at a confidence threshold of 0.95, calculated by averaging scores over 100 episodes. Table 5 contains scores for a competing method, STRAW [Vezhnevets et al. (2016)], which learns temporal abstractions by maintaining action plans, for the subset of games on which both FiGAR and STRAW were trained and tested. Note that the scores obtained by STRAW agents are averages over top performing replicas. We can infer from Tables 4 and 5 that FiGAR and STRAW are competitive with each other, with FiGAR clearly outperforming STRAW in Breakout and STRAW clearly outperforming FiGAR in Frostbite.
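A small sketch of the 0.1-greedy evaluation scheme described above; `probs_a` and `probs_x` stand for the outputs of f_θa and f_θx, and flipping an independent coin per head is one reading of the scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def eval_decision(probs_a, probs_x, reps, eps=0.1):
    """With probability eps, sample from each head; otherwise take its argmax (independently)."""
    a = rng.choice(len(probs_a), p=probs_a) if rng.random() < eps else int(np.argmax(probs_a))
    x = rng.choice(reps, p=probs_x) if rng.random() < eps else reps[int(np.argmax(probs_x))]
    return a, x
```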
Table 5: Game Playing Experiments on Atari 2600 by STRAW [Vezhnevets et al. (2016)]
Name            STRAW     STRAW-e
Alien           2626      3230
Amidar          2223      2022
Breakout        344       386
Crazy Climber   143803    153327
Frostbite       4394      8108
Q-bert          20933     23892

Figure 4 demonstrates the evolution of the performance of FiGAR-A3C versus training progress. It also contains the corresponding metrics for A3C to facilitate comparisons. In the 100-episode-long evaluation phase, we also keep track of the best episodic score, and we plot the best episode's score versus time to get an idea of how far the learnt policy is from the best it could have been.

ARCHITECTURE DETAILS

We used the same low-level architecture as Mnih et al. (2016), which in turn uses the same low-level architecture as Mnih et al. (2015), except that the pre-LSTM hidden layer had size 256 instead of 512 as in Mnih et al. (2016). Similar to Mnih et al. (2016), the actor and critic share all but one layer. Hence all but the final layers of f_θa, f_θx and f_θc are the same. Each of the 3 networks has a different final layer, with f_θa and f_θx having a softmax output non-linearity, to model the multinomial distributions, and the output of f_θc (the critic) being linear.

[Figure 4 here: per-game curves of average and best evaluation scores versus training steps for FiGAR-A3C and A3C; only axis ticks and the legend survive extraction.]
Figure 4: Training progress plotted versus time for Atari 2600

APPENDIX B: ADDITIONAL EXPERIMENTS FOR ATARI 2600

These additional experiments are geared at understanding the repercussions of the evaluation strategy chosen by us.

Note that in Appendix A, we state that for evaluating the policy learnt by the agent, we simply chose to sample from the output probability distributions with probability 0.1 and chose the optimal action/action repetition with probability 0.9. This choice of 0.1 might seem rather arbitrary. Hence, we conducted experiments to understand how well the agent performs as we shift more and more from choosing the maximal action (the 0.1-greedy policy) towards sampling from the output distributions (the stochastic policy).

Figure 5 demonstrates that the performance of FiGAR-A3C does not deteriorate significantly, in comparison to A3C, even if we always sample from the policy distributions, for most of the games. In the cases where there is a significant deterioration, we believe it is due to the diffused nature of the policy distributions (action and action repetition) learnt. Hence, although our choice of evaluation scheme might seem arbitrary, it is in fact reasonable.
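The sweep behind Figure 5 can be sketched as follows; `evaluate` is an assumed interface that runs evaluation episodes at a given sampling probability.

```python
import numpy as np

def sweep_sampling_probability(evaluate, sample_probs=np.linspace(0.0, 1.0, 11)):
    """Average score as we interpolate from a greedy policy (p=0) to a stochastic one (p=1).

    evaluate(p) should run evaluation episodes in which, with probability p, the action
    and the action repetition are sampled from the policy heads, and taken greedily otherwise.
    """
    return {float(p): evaluate(p) for p in sample_probs}
```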
[Figure 5 here: per-game curves of average score as the sampling probability is varied from greedy (left) towards fully stochastic (right), for FiGAR-A3C and A3C; only axis ticks and the legend survive extraction.]

Figure 5: Average performance plotted against the probability with which we sample from the final policy distribution, for Atari 2600. Points toward the left side of a sub-graph depict the average performance of a greedy version of a policy, and those towards the right side depict the performance of the stochastic version of the policy.

PERFORMANCE VERSUS SPEED TRADEOFF

The previous discussion leads to a novel way to trade off game-play performance against speed. Figure 3 demonstrated that although FiGAR-A3C learns to use temporally elongated macro-actions, it does favor shorter actions for many games. Since the action repetition distribution π_θx is diffused (as will be shown by Table 6), sampling from the distribution should help FiGAR choose larger action repetition rates, probably at the cost of some optimality of game play.
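The speed side of this trade-off can be summarized by the average action repetition per decision, the statistic reported later in Table 8; a minimal sketch:

```python
import numpy as np

def average_action_repetition(decisions):
    """Mean repetition length over an episode's (action, x) decisions.

    Higher values mean fewer action decisions per environment step, i.e. faster play.
    """
    return float(np.mean([x for _, x in decisions]))
```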
is diffusec (as will be shown by Table 6), sampling from the distribution should help FiGAR choose large action repetition rates probably at the cost of optimality of game play.\nAsterix Frostbite Bowling Chopper Command Phoenix 14000 325 35 10000 12000 6000 320 30 9000 10000 5000V 315 8000 8000 25 4000 7000 6000 310 6000 305 20 4000 3000 5000 2000 300 15 2000 4000 Wizard Of Wor Assault Demon Attack Alien Kung Fu Master 8000 2400 30000 3400 7000 2200 25000 45000 320 6000 3000 2000 20000 40000 5000 4000 1800 15000 35000 3000 2000 10000 1600 2000 30000 1000 5000 Kangaroo 1800 Qbert Road Runner Koolaid Amidar 20000 60000 1800 1600 15000 50000 1600 1400 10000 40000 1400 1200 5000 30000 1200 1000 0 20000 1000 800 James Bond Seaquest 1e7 Atlantis Name This Game Centipede 450 20000 1.2 14000 4500 400 1.0 12000 15000 4000 350 0.8 10000 300 8000 3500 10000 0.6 250 6000 0.4 3000 200 4000 150 5000 0.2 2000 2500 100 0.0 0 2000 Freeway Krull Tutankham Star Gunner Beam Rider 1315215 3500 290 55000 3000 3500 2500 280 50000 2000 270 45000 3000 1500 10 260 40000 2500 1000 50 500 250 35000 2000 0 240 30000 Breakout Enduro Space Invaders Bank Heist Time Pilot 900 800 2400 3500 13000 800 2200 12000 2688 700 600 3000 1800 11000 600 1688 2500 10000 400 9000 500 1400 200 1200 2000 8000 400 1000 7000 300 0 800 1500 Gopher Pong 6000 Crazy Climber 15000 180000 21 14000 13000 170000 20 12000 160000 19 11000 150000 10000 140000 18 9000 A3C-Average 130000 17 8000 120000 16 FiGAR-A3C-Average 7000\nTable 6 demonstrates that this is exactly what FiGAR does. It was generated by playing 10 episodes. or 1ooo00 steps, whichever is lesser and recording the fraction of times each action repetition was chosen. The policy used in populating table 6 was the stochastic policy (described in previous sub- section). Contrast Table 6 to Table 7 which is an expanded version of Figure 3..\nTable 6: Distribution of Action Repetitions chosen when the policy (both e. and e.) is completely stochastic\nBoth Figure 3 and Table 7 were created using the 0.1-greedy policy described in previous sub section. The reason that we compare the stochastic policy with the 0.1-greedy version instead of the fully-greedy version (wherein the optimal action and action repetition is always chosen) is that such a policy would end up being deterministic would not be good for evaluations..\n11 dson that we connpale the stocnastc poncy wrtll te O.I- VCISIC 11I3CC fully-greedy version (wherein the optimal action and action repetition is always chosen) is that sucl a policy would end up being deterministic would not be good for evaluations. It can hence be seen that FiGAR learns to trade-off optimality of game-play for speed by choosing whether to sample from policy probability distributions (e. and e) with probability 1 and thus behave stochastically, or behave 0.1-greedily, and sample from the distributions with only a smal. probability. Table 6 can be compared to Figure 3 to understand how stochasticity in final polic affects action repetition chosen. A clear trend can be seen in all games wherein the stochastic variant of final policy learns to use longer and longer actions, albeit at a small cost of some loss ir the optimality of game-play (as shown by Figure 5).\nAn expanded version of Figure 3 is presented as Table 7 for comparison with Table 6. 
As explained in Appendix A, the policy used for populating Table 7 is such that it picks a greedy action (or action repetition) with probability 0.9 and stochastically samples from output probability distributions with. probability 0.1.\nName 1-3 4-6 7-9 10-12 13-15 16-18 19-21 22-24 25-27 28-30 Alien 0.33 0.15 0.13 0.11 0.13 0.07 0.03 0.02 0.014 0.01 Amidar 0.19 0.14 0.10 0.08 0.08 0.07 0.06 0.08 0.09 0.12 Assault 0.29 0.26 0.21 0.11 0.04 0.03 0.02 0.01 0.01 0.01 Asterix 0.40 0.25 0.15 0.08 0.04 0.04 0.02 0.02 0.01 0.01 Atlantis 0.25 0.16 0.11 0.09 0.08 0.06 0.05 0.07 0.06 0.08 Bank Heist 0.950 0.04 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Beam Rider 0.17 0.16 0.14 0.11 0.09 0.07 0.06 0.05 0.06 0.09 Bowling 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.91 Breakout 0.28 0.20 0.13 0.09 0.06 0.05 0.04 0.05 0.04 0.07 Centipede 0.19 0.27 0.34 0.17 0.03 0.00 0.00 0.00 0.00 0.00 Chpr Cmd 0.12 0.14 0.11 0.08 0.11 0.12 0.10 0.08 0.08 0.06 Crzy Clmbr 0.34 0.06 0.03 0.51 0.02 0.01 0.01 0.01 0.01 0.01 Dmn Attk 0.18 0.21 0.16 0.13 0.10 0.08 0.06 0.04 0.03 0.02 Enduro 0.66 0.34 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Pong 0.16 0.15 0.13 0.10 0.10 0.08 0.08 0.07 0.08 0.07 Freeway 0.14 0.12 0.11 0.10 0.09 0.09 0.08 0.08 0.08 0.12 Frostbite 0.33 0.16 0.08 0.07 0.05 0.03 0.03 0.02 0.07 0.14 Gopher 0.41 0.15 0.23 0.07 0.04 0.03 0.02 0.02 0.01 0.01 James Bond 0.12 0.11 0.10 0.10 0.10 0.09 0.11 0.09 0.09 0.10 Kangaroo 0.10 0.10 0.11 0.10 0.11 0.10 0.10 0.10 0.09 0.09 Koolaid 0.14 0.14 0.11 0.11 0.10 0.08 0.08 0.09 0.08 0.07 Krull 0.92 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.00 Kung Fu 0.32 0.15 0.10 0.10 0.08 0.06 0.05 0.05 0.05 0.04 NTG 0.10 0.10 0.12 0.11 0.10 0.11 0.09 0.10 0.09 0.09 Phoenix 0.32 0.15 0.11 0.07 0.06 0.05 0.06 0.06 0.07 0.05 Pong 0.15 0.15 0.14 0.10 0.09 0.08 0.07 0.07 0.07 0.08 Q-bert 0.40 0.30 0.06 0.03 0.02 0.02 0.01 0.01 0.01 0.14 Road Runner 0.99 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.00 0.00 Sea Quest 0.40 0.26 0.10 0.05 0.04 0.04 0.04 0.03 0.02 0.01 Spc Invdr 0.33 0.16 0.11 0.07 0.06 0.05 0.04 0.04 0.06 0.09 Star Gunner 0.42 0.31 0.14 0.06 0.03 0.01 0.01 0.01 0.01 0.00 Time Pilot 0.14 0.16 0.15 0.12 0.09 0.07 0.07 0.06 0.06 0.08 Tutankham 0.34 0.18 0.08 0.08 0.07 0.06 0.06 0.05 0.05 0.04 Wzd of Wor 0.11 0.11 0.11 0.11 0.12 0.11 0.09 0.09 0.08 0.07\nTable 7: Distribution of Action Repetitions chosen when the policy (both e and e ) is 0.1-greed\nName 1-3 4-6 7-9 10-12 13-15 16-18 19-21 22-24 25-27 28-30 Alien 0.50 0.08 0.11 0.07 0.12 0.07 0.02 0.02 0.01 0.01 Amidar 0.49 0.08 0.06 0.04 0.04 0.04 0.04 0.07 0.03 0.11 Assault 0.45 0.26 0.15 0.06 0.02 0.02 0.02 0.01 0.01 0.01 Asterix 0.50 0.33 0.09 0.04 0.01 0.01 0.01 0.00 0.00 0.00 Atlantis 0.51 0.07 0.18 0.08 0.02 0.02 0.01 0.02 0.03 0.07 Bank Heist 0.96 0.04 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Beam Rider 0.34 0.31 0.13 0.04 0.05 0.03 0.01 0.02 0.02 0.06 Bowling 0.01 0.91 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 Breakout 0.29 0.23 0.12 0.09 0.05 0.04 0.02 0.03 0.03 0.11 Centipede 0.02 0.03 0.94 0.02 0.00 0.00 0.00 0.00 0.00 0.00 Chpr Cmd 0.29 0.23 0.12 0.03 0.06 0.09 0.06 0.04 0.06 0.03 Crzy Clmbr 0.55 0.04 0.01 0.38 0.01 0.01 0.01 0.00 0.00 0.00 Dmn Attk 0.16 0.35 0.14 0.12 0.08 0.05 0.05 0.03 0.01 0.02 Enduro 0.91 0.09 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Freeway 0.15 0.18 0.09 0.09 0.06 0.07 0.05 0.06 0.07 0.18 Frostbite 0.47 0.20 0.13 0.01 0.03 0.01 0.01 0.00 0.03 0.11 Gopher 0.47 0.19 0.21 0.05 0.04 0.01 0.02 0.01 0.00 0.00 James Bond 0.28 0.11 0.22 0.08 0.06 0.06 0.05 0.05 0.03 0.06 Kangaroo 0.20 0.39 0.27 0.02 0.01 0.04 0.01 
0.01 0.01 0.04 Koolaid 0.36 0.15 0.19 0.06 0.06 0.06 0.05 0.02 0.03 0.04 Krull 0.92 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.00 Kung Fu 0.46 0.10 0.05 0.11 0.08 0.06 0.04 0.03 0.04 0.05 NTG 0.01 0.01 0.91 0.01 0.01 0.01 0.01 0.01 0.01 0.01 Phoenix 0.44 0.44 0.04 0.02 0.01 0.01 0.01 0.01 0.02 0.01 Pong 0.19 0.16 0.13 0.13 0.06 0.09 0.04 0.05 0.07 0.10 Q-bert 0.51 0.27 0.05 0.02 0.00 0.00 0.00 0.00 0.01 0.13 Road Runner 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Sea Quest 0.59 0.19 0.06 0.02 0.02 0.03 0.05 0.03 0.01 0.00 Spc Invdrs 0.42 0.18 0.11 0.06 0.04 0.02 0.02 0.02 0.03 0.10 Star Gunner 0.59 0.31 0.06 0.02 0.00 0.01 0.00 0.00 0.00 0.00 Time Pilot 0.580 0.14 0.11 0.05 0.03 0.01 0.01 0.01 0.02 0.04 Tutankham 0.16 0.74 0.02 0.01 0.01 0.01 0.01 0.01 0.01 0.01 Wzd of Wor 0.28 0.12 0.08 0.19 0.11 0.08 0.04 0.04 0.04 0.02\nTable 8 contains the average action repetition chosen in each of the games for the two FiGAR variants. The same episodes used to populate Table 6 and 7 were used to fill Table 8. It can be seen that in most games, the Stochastic variant of policy learns to play at a higher speed, although this might result in some loss in optimality of game play. as demonstrated in Figure 5\nTable 8: Average Action Repetition comparison between stochastic and greedy policies\nName Stochastic 0.1-Gree Alien 8.43 6.87 Amidar 13.77 9.61 Assault 7.14 5.86 Asterix 6.53 4.22 Atlantis 11.68 7.20 Bank Heist 1.65 1.62 Beam Rider 12.47 7.68 Bowling 28.64 5.13 Breakout 10.14 9.93 Centipede 6.84 7.88 Chopper Command 13.76 9.58 Crazy Climber 8.00 5.74 Enduro 2.91 2.69 Demon Attack 10.23 8.59 Freeway 14.62 14.25 Frostbite 11.33 7.69 Gopher 6.68 5.33 James Bond 14.98 10.37 Kangaroo 15.07 7.84 Koolaid 13.66 8.48 Krull 3.83 3.12 Kung Fu Master 10.00 8.53 Name this Game 14.98 9.55 Phoenix 10.31 4.64 Pong 12.99 12.28 Q-bert 2.02 1.76 Road Runner 1.63 1.26 Sea Quest 6.98 5.33 Space Invaders 10.48 8.55 Star Gunner 5.21 3.69 Time Pilots 12.72 5.39 Tutankhamun 9.75 5.73 Wizard of Wor 14.27 9.87\nAPPENDIX C: EXPERIMENTAL SETUP FOR FIGAR-TRPO\nFiGAR-TRPO and the corresponding baseline algorithm operate on low dimensional feature vecto observations. The TRPO (and hence FiGAR-TRPO) algorithm operates in two phases. In the firs phase (P1), K trajectories are sampled according to current behavioral policy to create the sur rogate loss function. In the second phase (P2) a policy improvement step is performed by carrying out an optimization step on the surrogate loss function, subject to the KL-divergence constraint or the new policy. In our experiments, 500 such policy improvement steps were performed. K varies with the learning progress and the schedule on what value K would take in next iteration of P1 is defined linearly in terms of the return in the last iteration of P1. Hence if the return was large ir previous iteration of P1, a small number of episodes are are used to construct the surrogate loss function in current iteration. The best policy was found by keeping track of the average returns seer during the training phase P1. This policy was then evaluated on 100 episodes to obtain the average score of the TRPO policy learnt. The most important hyper-parameters for FiGAR-TRPO are a and kL. By using a grid search on the set {0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28} we founc the optimal hyper-parameters ar = 1.28 and kL = 0.64. These were tuned on all the 5 tasks."}, {"section_index": "18", "section_name": "LOSS FUNCTION AND ARCHITECTURE", "section_text": "The tanh non-linearity is used throughout. 
The mean vector is realized using a 2-hidden-layer neural network (the mean network) with hidden layer sizes (128, 64). The standard deviation is realized using a parameter layer (the std-dev layer), which parameterizes the standard deviation but does not depend on the input. Hence the concatenation of the output of the mean network and the std-dev layer forms the action policy f_θa, as described in Section 4. The action repetition function f_θx is realized using a 2-hidden-layer neural network (the act-rep network), similar to the mean network, with hidden layer sizes (128, 64). However, its output non-linearity is a softmax layer of size 30, as dictated by the value of |W|. The action repetition network was kept small to ensure that FiGAR-TRPO does not have significantly more parameters than TRPO. The mean network, std-dev layer and act-rep network do not share any parameters or layers (see Appendix G for experiments on FiGAR-TRPO with shared layers).

The surrogate loss function in TRPO, when the Single Path method of construction is followed, reduces to [Schulman et al. (2015)]:

L_{θ_old}(θ) = E_{s∼ρ_{θ_old}, a∼π_{θ_old}} [ (π_θ(a|s) / π_{θ_old}(a|s)) Q_{θ_old}(s, a) ]

where q, the sampling distribution, is just the old behavioral policy π_{θ_old} (the defining characteristic of the Single-Path method), and ρ is the improper discounted state visitation distribution.

The surrogate loss function for a factored policy such as that of FiGAR-TRPO is:

L_{θ_a,old, θ_x,old}(θ_a, θ_x) = E_{s,a,x} [ (π_θa(a|s) π_θx(x|s)) / (π_θa,old(a|s) π_θx,old(x|s)) Q_{θ_a,old, θ_x,old}(s, a, x) ]

where s ∼ ρ_{θ_a,old, θ_x,old}, a ∼ π_θa,old, x ∼ π_θx,old, and π_θa = f_θa, π_θa,old = f_θa,old, π_θx = f_θx, π_θx,old = f_θx,old.

This kind of splitting of probability distributions happens because the action policy f_θa and the action-repetition policy f_θx are independent probability distributions. The theoretically sound way to realize FiGAR-TRPO is to minimize the loss L_{θ_a,old, θ_x,old}(θ_a, θ_x). However, we found that in practice, optimizing a relaxed version of the objective function, that is,

L_{θ_a,old, θ_x,old}(θ_a, θ_x) ≈ L_{θ_a,old}(θ_a) × (L_{θ_x,old}(θ_x))^{α_r}

works better. This leads to the FiGAR-TRPO objective defined in Section 4.3.

APPENDIX D: EXPERIMENTAL SETUP FOR FIGAR-DDPG

The DDPG algorithm also operates on low-dimensional (29-dimensional) feature-vector observations. The domain consists of 3 continuous actions: acceleration, brake and steering. The |W| hyper-parameter used in the main experiments was chosen to be 15, arbitrarily. Unlike Lillicrap et al. (2015), we did not find it useful to use batch normalization, and hence it was not used. However, a replay memory of size 10000 was used. Target networks were also used, with soft updates applied with τ = 0.001. Since DDPG is an off-policy actor-critic method, we need to ensure that sufficient exploration takes place. The use of an Ornstein-Uhlenbeck process (refer to Lillicrap et al. (2015) for details) ensured that exploration was carried out in the action-policy space. To ensure exploration in the action-repetition policy space, we adopted two strategies. First, an ε-greedy version of the policy was used during training time, with ε annealed from 0.2 to 0 over 50000 training steps. The algorithm was run for 40000 training steps for the baselines as well as for FiGAR-DDPG. Second, with probability 1 − ε, instead of picking the greedy action repetition, we sampled from the output distribution f_θx(s).

ARCHITECTURAL DETAILS

Throughout the architecture, a common hidden-layer non-linearity was used, and all layers were initialized using the He initialization [He et al. (2015)]. The actor network consisted of a 2-hidden-layer neural network with hidden sizes (300, 600) (call the second hidden layer representation h2).
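A toy numpy sketch of the relaxed factored surrogate above: each factor is a mean of importance-weighted value estimates, and alpha_r is the relative-learning-rate knob of Section 4.3 (1.28 was the tuned value reported in this appendix). The inputs and the guard for the real-valued power are illustrative assumptions.

```python
import numpy as np

def relaxed_figar_trpo_objective(logp_a_new, logp_a_old, logp_x_new, logp_x_old,
                                 q_old, alpha_r=1.28):
    """Toy estimate of L_a(theta_a) * L_x(theta_x)^alpha_r from sampled (s, a, x)."""
    l_a = np.mean(np.exp(logp_a_new - logp_a_old) * q_old)  # action-head factor
    l_x = np.mean(np.exp(logp_x_new - logp_x_old) * q_old)  # repetition-head factor
    # Guard the real-valued power against a negative factor in this toy version.
    return l_a * np.sign(l_x) * np.abs(l_x) ** alpha_r
```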
We learn two different output layers on top of this common hidden representation. f_θa was realized by transforming h2 with an output layer of size 3. The output neuron corresponding to the steering action used a tanh non-linearity, whereas those corresponding to acceleration and brake used the sigmoid non-linearity. The f_θx network was realized by transforming h2 using a softmax output layer of size |W|. The output of the actor network is thus a 3 + |W| = 18 dimensional vector.

The critic network takes as input the state vector (29-dimensional) and the action vector (18-dimensional). The critic is a 3-hidden-layer network of sizes (300, 600, 600). Similar to Lillicrap et al. (2015), actions were not included until the 2nd hidden layer of f_θc. The final output is linear and is trained using the TD-error objective function, similar to Lillicrap et al. (2015).

APPENDIX E: DETAILS FOR FIGAR-VARIANTS

[Figure 6 here: training curves of the FiGAR-A3C variants and the A3C baseline; only axis ticks and the legend survive extraction.]
Figure 6: Comparison of FiGAR-A3C variants to the A3C baseline for 2 games: Sea Quest and Asterix

[Figure 7 here: bar graph of final scores per variant; only the bar labels survive extraction.]
Figure 7: Comparison of FiGAR-A3C variants to the A3C baseline for 3 games: Sea Quest, Space Invaders and Asterix. Game scores have been scaled down by 1000 and rounded to 1 decimal place.

Table 2 contains the final evaluation scores attained by the various FiGAR variants. Figure 7 contains a bar graph visualization of the same table, to demonstrate the advantage of all FiGAR variants relative to the baseline.

It is clear from Figure 6 that even though FiGAR-A3C needs to explore in 2 separate action spaces (those of primitive actions and of action repetitions), the training progress is not slowed down as a result of this exploration, for any FiGAR variant.
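The variant repetition sets of Table 3 are simple to construct; the following sketch builds each of them (the random draws are seeded here only for reproducibility of the illustration):

```python
import numpy as np

def repetition_set(variant, rng=np.random.default_rng(0)):
    """Action repetition sets W for the FiGAR variants described in Table 3."""
    if variant == "FiGAR-20":
        return list(range(1, 21))
    if variant == "FiGAR-30":
        return list(range(1, 31))
    if variant == "FiGAR-50":
        return list(range(1, 51))
    if variant == "FiGAR-30-50":  # 30 numbers from {1..50}, without replacement
        return sorted(rng.choice(np.arange(1, 51), size=30, replace=False))
    if variant == "FiGAR-20-30":  # 20 numbers from {1..30}, without replacement
        return sorted(rng.choice(np.arange(1, 31), size=20, replace=False))
    if variant == "FiGAR-P":      # all primes below 50
        return [p for p in range(2, 50) if all(p % d for d in range(2, p))]
    raise ValueError(variant)
```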
APPENDIX F: IMPORTANCE OF THE REPETITION POLICY

One could potentially use FiGAR at evaluation stage (after training has been completed) at an action repetition rate of 1, by picking every action according to π_θa and completely discarding the learnt repetition policy π_θx. Such a FiGAR variant is denoted FiGAR-wo-π_θx. We demonstrate that FiGAR-wo-π_θx is worse than FiGAR on most games, and hence that the temporal abstractions learnt by and encoded in π_θx are indeed non-trivial and important for gameplay performance. Table 9 contains the comparison between the standard FiGAR agent and FiGAR-wo-π_θx. The evaluation scheme is the same as in Appendix A.

Table 9: Gameplay performance of FiGAR compared with FiGAR-wo-π_θx
Name              FiGAR         FiGAR-wo-π_θx
Alien             3138.50       582.17
Amidar            1465.70       497.90
Assault           1936.37       1551.40
Asterix           11949.00      780.00
Atlantis          6330600.00    680890.00
Bank Heist        3364.60       223.00
Beam Rider        2348.78       3732.00
Bowling           30.09         0.90
Breakout          814.50        321.90
Centipede         3340.35       3934.90
Chopper Command   3147.00       2730.00
Crazy Climber     154177.00     210.00
Enduro            707.80        941.10
Demon Attack      7499.30       6661.00
Freeway           33.14         30.60
Frostbite         309.60        308.00
Gopher            12845.40      10738.00
James Bond        478.0         320.00
Kangaroo          48.00         40.00
Koolaid           1669.00       2110.00
Krull             1316.10       2076.00
Kung Fu Master    40284.00      29770.00
Name this Game    1752.60       1692.00
Phoenix           5106.10       5266.00
Pong              20.32         -21.00
Road Runner       22907.00      23560.00
Sea Quest         18076.90      18324.00
Space Invaders    2251.95       1721.00
Star Gunner       51269.00      55150.00
Time Pilot        11865.00      11810.00
Tutankhamun       276.95        182.20
Wizard of Wor     6688.00       6160.00

We observe that in 24 out of 33 games, π_θx helps the agent learn temporal abstractions which result in a significant boost in performance compared to the FiGAR-wo-π_θx agents.

APPENDIX G: SHARED REPRESENTATION EXPERIMENTS FOR FIGAR-TRPO

Section 5.2 contains the results of the experiments on FiGAR-TRPO, and Appendix C contains the experimental setup for the same. Throughout those experiments, the policy components f_θa and f_θx do not share any representations. This appendix contains experimental results in the setting wherein f_θa and f_θx share all layers except the final one. This agent/network is denoted by the name FiGAR-shared-TRPO. All the hyper-parameters are the same as those in Appendix C, except α_r and β_KL, which were obtained through a grid search similar to that of Appendix C, tuned on all 5 tasks. The values we found to be optimal are α_r = 1.28 and β_KL = 0.16. The same training and evaluation regime as in Appendix C was used. The performance of the best policy learnt is tabulated in Table 10.

Table 10: Evaluation of FiGAR with shared representations for f_θa and f_θx on MuJoCo
Domain                      FiGAR-TRPO        FiGAR-shared-TRPO    TRPO
Ant                         947.06 (28.35)    1779.72 (7.99)       -161.93 (1.00)
Hopper                      3038.63 (1.00)    2649.09 (2.07)       3397.58 (1.00)
Inverted Pendulum           1000.00 (1.00)    986.35 (1.00)        971.66 (1.00)
Inverted Double Pendulum    8712.46 (1.01)    9138.85 (1.00)       8327.75 (1.00)
Swimmer                     337.48 (10.51)    340.74 (8.02)        364.55 (1.00)

FiGAR-shared-TRPO on the whole does not perform much better than FiGAR-TRPO. In these TRPO experiments, the neural networks we used were rather shallow, at only two hidden layers deep. We believe that sharing of layers therefore leads to only small gains in terms of the optimality of the policy learnt.
SyEiHNKxx
[{"section_index": "0", "section_name": "A DIFFERENTIABLE PHYSICS ENGINE FOR DEEP LEARNING IN ROBOTICS", "section_text": "[Jonas.Degrave, Joni.Dambre, Francis.wyffels}@uGent.be\nOne of the most important fields in robotics is the optimization of controllers. Cur-. rently, robots are often treated as a black box in this optimization process, which is the reason why derivative-free optimization methods such as evolutionary algo- rithms or reinforcement learning are omnipresent. When gradient-based methods. are used, models are kept small or rely on finite difference approximations for. the Jacobian. This method quickly grows expensive with increasing numbers of. parameters, such as found in deep learning. We propose an implementation of a. modern physics engine, which can differentiate control parameters. This engine. is implemented for both CPU and GPU. Firstly, this paper shows how such an. engine speeds up the optimization process, even for small problems. Furthermore, it explains why this is an alternative approach to deep Q-learning, for using deep learning in robotics. Finally, we argue that this is a big step for deep learning in robotics, as it opens up new possibilities to optimize robots, both in hardware and software."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A recently popular alternative approach is to use deep Q-learning, a reinforcement learning algo. rithm. This method requires a lot of evaluations in order to train the many parameters (Levin. et al.]2016). However, deep learning experience has taught us that optimizing with a gradient i. often faster and more efficient. This fact is especially true when there are a lot of parameters, a. is common in deep learning. However, in the optimization processes for control systems, the robo is almost exclusively treated as a non-differentiable black box. The reason for this is that the robc in hardware is not differentiable, nor are current physics engines able to provide the gradient o. the robot models. The resulting need for derivative-free optimization approaches limits both th. optimization speed and the number of parameters in the controllers..\nRecent physics engines, such as mujoco (Todorov et al.]2012), can derive gradients through the. model of a robot but rely on a finite difference method to approximate the gradient. Evaluating finite difference approximations, however, requires the same number of model evaluations as the number. of states with respect to which is differentiated. Additionally, the gradient is an estimation..\nIn this paper, we suggest an alternative approach, by introducing a differentiable physics engine. with analytical gradients. This idea is not novel. It has been done before with spring-damper models in 2D and 3D (Hermans et al.[2014). This technique is also similar to adjoint optimization, a method widely used in various applications such as thermodynamics (Jarny et al.]1991) and fluid.\nFormer member. currently unaffiliatec"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "To solve tasks efficiently, robots require an optimization of their control system. This optimization process can be done in automated testbeds (Degrave et al.]2015), but typically these controllers are optimized in simulation. Standard methods to optimize these controllers include particle swarms. reinforcement learning, genetic algorithms and evolutionary strategies. These are all derivative-free methods.\ndynamics (Iollo et al.|2001). 
However, modern engines to model robotics are not based on spring-damper systems. The most commonly used ones are 3D rigid body engines, which rely on impulse-based velocity stepping methods (Erez et al. 2015). In this paper, we test whether these engines are also differentiable and whether this gradient is computationally tractable. We will show how this method does speed up the optimization process tremendously, and give some examples where we optimize deep learned neural network controllers with millions of parameters.

2 A 3D RIGID BODY ENGINE

The goal is to implement a modern 3D rigid body engine, in which parameters can be differentiated with respect to the fitness a robot achieves in a simulation, such that these parameters can be optimized with methods based on gradient descent.

The most frequently used simulation tools for model-based robotics, such as PhysX, Bullet, Havok and ODE, go back to MathEngine (Erez et al. 2015). These tools are all 3D rigid body engines, where bodies have 6 degrees of freedom, and the relations between them are defined as constraints. These bodies exert impulses on each other, but their positions are constrained, e.g. to prevent the bodies from penetrating each other. The velocities, positions and constraints of the rigid bodies define a linear complementarity problem (LCP) (Chappuis 2013), which is then solved using a Gauss-Seidel projection (GSP) method (Jourdan et al. 1998). The solution of this problem is the new velocities of the bodies, which are then integrated by semi-implicit Euler integration to get the new positions (Stewart and Trinkle 2000). This system is not always numerically stable. Therefore, the constraints are usually softened (Catto 2009).

The recent growth of automatic differentiation libraries, such as Theano (Al-Rfou et al. 2016), Caffe (Jia et al. 2014) and Tensorflow (Abadi et al. 2015), has allowed for the efficient differentiation of remarkably complex functions before (Degrave et al. 2016). Therefore, we implemented such a physics engine as a mathematical expression in Theano, a software library which does automatic evaluation and differentiation of expressions with a focus on deep learning. The resulting computational graph to evaluate this expression is then compiled for both CPU and GPU. To be able to compile for GPU, however, we had to limit our implementation to a restricted set of elementary operations. The range of implementable functions is therefore severely capped. However, since the analytic gradient is determined automatically, the complexity of correctly implementing the differentiation is removed entirely.

One of the limitations of this restricted set of operations is the limited support for conditionals. Therefore we needed to implement our physics engine without branching, as this is not yet available in Theano for GPU. Some sacrifices consequently had to be made. For instance, our system only allows for contact constraints between different spheres or between spheres and the ground plane. Collision detection algorithms for cubes typically have a lot of branching (Mirtich 1998). However, this sphere-based approach can in principle be extended to any other shape (Hubbard 1996). On the other hand, we did implement a rather accurate model of servo motors, with gain, maximal torque and maximal velocity parameters.

Another design choice was to use rotation matrices rather than the more common quaternions for representing rotations.
Consequently, the states of the bodies are larger, but the operations required are matrix multiplications. This design reduced the complexity of the graph. However, cumulative operations on a rotation matrix might move the rotation matrix away from orthogonality. To correct for this, we renormalize our matrix with the update equation (Premerlani and Bizard 2009):

A' = (3A − A ∘ (A · A)) / 2

where A' is the renormalized version of the rotation matrix A, '∘' denotes the elementwise multiplication, and '·' the matrix multiplication.

These design decisions are the most important points of difference with the frequently used simulation tools. In the following section, we will evaluate our physics simulator on some different problems. We take a look at the speed of computation and the number of evaluations required before the parameters are optimized.

To test our engine, we implemented the model of a giant soccer ball in the physics engine, as shown in Fig. 2a. The ball has a 1 m diameter, a friction coefficient of μ = 1.0 and restitution e = 0.5. The ball starts off at position (0, 0). After 5 s it should be at position (10, 0) with zero velocity v and zero angular velocity ω. We optimized the initial velocity v0 and angular velocity ω0 at time t = 0 s until the errors at t = 5 s were less than 0.01 m and 0.01 m/s respectively.

Since the quantity we optimize is only known at the end of the simulation, but we need to optimize the parameters at the beginning of the simulation, we need to backpropagate our error through time (BPTT) (Sutskever 2013). This approach is similar to the backpropagation through time method used for optimizing recurrent neural networks (RNNs). In our case, every time step in the simulation can be seen as one pass through a neural network, which transforms the inputs from this timestep into inputs for the next time step. For finding the gradient, this RNN is unfolded completely, and the gradient can be obtained by differentiating this unfolded structure. This analytic differentiation is done automatically by the Theano library.

Optimizing the six parameters in v0 and ω0 took only 88 iterations with gradient descent and backpropagation through time. Optimizing this problem with CMA-ES (Hansen 2006), a state-of-the-art derivative-free optimization method, took 2422 iterations. Even when taking the time to compute the gradient into account, the optimization with gradient descent takes 16.3 s, compared to 59.9 s with CMA-ES. This result shows that gradient-based optimization of kinematic systems can in some cases already outperform gradient-free optimization algorithms from as little as six parameters.
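As a toy illustration of optimizing initial conditions by backpropagating through a rollout — a drastically simplified, frictionless 1-D stand-in for the ball task, not the engine itself — the gradient of the final-position error with respect to the initial velocity can be written in closed form and followed by gradient descent:

```python
def optimize_initial_velocity(target=10.0, T=5.0, lr=0.01, steps=100):
    """Gradient descent on v0 so a frictionless point mass reaches `target` at time T.

    The 'simulation' x(T) = v0 * T is differentiable, so
    dL/dv0 = 2 * (v0 * T - target) * T  -- a 1-D analogue of backprop through time.
    """
    v0 = 0.0
    for _ in range(steps):
        x_T = v0 * T                      # unroll the (trivial) dynamics
        grad = 2.0 * (x_T - target) * T   # analytic gradient of (x_T - target)^2
        v0 -= lr * grad
    return v0

print(optimize_initial_velocity())  # converges towards 2.0 m/s
```

In the actual engine, the rollout is a long chain of contact, integration and motor operations, and the automatic differentiation library produces the corresponding gradient instead of the hand-derived one above.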
"}, {"section_index": "4", "section_name": "3 POLICY SEARCH", "section_text": "To evaluate the relevance of our differentiable physics engine, we use a neural network as a general controller for a robot, as shown in Figure 1. We consider a general robot model in a discrete-time dynamical system $x^{t+1} = f_{ph}(x^t, u^t)$ with a task cost function $l(x^t, p)$, where $x^t$ is the state of the system at time $t$ and $u^t$ is the input of the system at time $t$. $p$ provides some freedom in parameterizing the loss. If $X^t$ is the trajectory of the state up to time $t-1$, the goal is to find a policy $u^t = \pi(X^t)$ such that we minimize the loss $L$:

$$L = \sum_{t=0}^{T} l(x^t, p) \quad \text{s.t.} \quad x^{t+1} = f_{ph}(x^t, u^t) \text{ and } x^0 = x_{init} \tag{2}$$

In previous research, finding a gradient for this objective has been described as presenting challenges (Mordatch and Todorov, 2014). An approximation to tackle these issues has been discussed in Levine and Koltun (2013).

We define our controller as a deep neural network $g_{deep}$ with weights $W$. We do not pass all information $X^t$ to this neural network, but only a vector of values $s^t$ observed by the modeled sensors $s(x^t)$. We also provide our network with (some of the) task-specific parameters $p'$. Finally, we add a recurrent connection $h^t$ to the controller from the previous timestep. Therefore, our policy is the following:

$$\pi(X^t) = g_{deep}(s(x^t), h^t, p' \mid W), \quad h^{t+1} = h_{deep}(s(x^t), h^t, p' \mid W) \text{ and } h^0 = 0 \tag{3}$$

Notice the similarity between equations (2) and (3). Indeed, the equations for recurrent neural networks (RNN) in equation (3) are very similar to the ones for the loss of a physical model in equation (2). Therefore, we optimize this entire system as an RNN unfolded over time, as illustrated in Figure 4. The weights $W$ are optimized with stochastic gradient descent. The gradient required for that is the Jacobian $dL/dW$, which is found with automatic differentiation software.

We implement this equation in an automatic differentiation library, ignoring these challenges in finding the analytic gradient altogether. The automatic differentiation library, Theano in our case, analytically derives this equation and compiles code to evaluate both the equation and its gradient.

We have now reduced the problem to a standard deep learning problem. We need to train our network $g_{deep}$ on a sufficient number of samples $x_{init}$ and for a sufficient number of sampled tasks $p$ in order to get adequate generalization. Standard RNN regularization approaches could also improve this generalization. We reckon that generalization of $g_{deep}$ to more models $f_{ph}$, in order to ease the transfer of the controller from the model to the real system, is also possible (Hermans et al., 2014), but it is outside the scope of this paper.
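To make this reduction concrete, the sketch below trains a small $g_{deep}$ through a differentiable dynamics function in PyTorch. The point-mass $f_{ph}$ and the cost are toy stand-ins of our own choosing, not the engine from this paper; the structure, unrolling physics and controller together and taking $dL/dW$ by backpropagation, is the part that carries over:

```python
import torch
import torch.nn as nn

def f_ph(x, u, dt=0.01):
    # Toy differentiable "physics": unit point mass, state x = (position, velocity).
    pos, vel = x[:, 0], x[:, 1]
    vel = vel + dt * u.squeeze(-1)
    pos = pos + dt * vel
    return torch.stack([pos, vel], dim=-1)

g_deep = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(g_deep.parameters(), lr=1e-3)

for step in range(500):
    x = torch.randn(128, 2)                  # batch of sampled initial states x_init
    loss = 0.0
    for t in range(100):                     # unfold controller + physics like an RNN
        u = g_deep(x)                        # sensor values s(x): here the full state
        x = f_ph(x, u)
        loss = loss + (x[:, 0] ** 2).mean()  # l(x^t, p): drive the position to zero
    opt.zero_grad()
    loss.backward()                          # Jacobian dL/dW through the whole rollout
    opt.step()
```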
Figure 1: Illustration of how a closed-loop neural network controller would be used to actuate a robot. The neural network receives sensor signals from the sensors on the robot and uses these to generate motor signals which are sent to the servo motors. The neural network can also generate a signal which it can use at the next timestep to control the robot."}, {"section_index": "5", "section_name": "3.1 QUADRUPEDAL ROBOT - COMPUTING SPEED", "section_text": "To verify the speed of our engine, we also implemented a small quadrupedal robot model, as illustrated in Fig. 2b. This model has a total of 81 sensors, e.g. encoders and an inertial measurement unit (IMU). The servo motors are controlled in a closed loop by a small neural network $g_{deep}$ with a variable number of parameters, as shown previously in Fig. 4. The gradient is the Jacobian of the total traveled distance of the robot in 10 s, differentiated with respect to all the parameters of the controller $W$. This Jacobian is found by using BPTT and propagating all 10 s back. The time it takes to compute this traveled distance and the accompanying Jacobian is shown in Table 1. We include the computation time both with and without the gradient, i.e. both the forward and backward pass together and the forward pass alone. This way, the numbers can be compared to other physics engines, as those only calculate without the gradient. Our implementation and our model can probably be made more efficient, and evaluating the gradient can probably be made faster by a similar factor.

Figure 2: (a) Illustration of the ball model used in the first task. (b) Illustration of the quadruped robot model with 8 actuated degrees of freedom: 1 in each shoulder, 1 in each elbow. The spine of the robot can collide with the ground through 4 spheres on the inside of the cuboid. (c) Illustration of the robot arm model with 4 actuated degrees of freedom.

When only a single controller is optimized, our engine runs more slowly on GPU than on CPU. To tackle this issue, we implemented batch gradient descent, which is commonly used in complex optimization problems. In this case, by batching our robot models, we achieve significant acceleration on GPU. Although backpropagating the gradient through the physics slows down the computations by roughly a factor of 10, this factor barely increases with the number of parameters in our controller.

Combining this with our previous observation that fewer iterations are needed when using gradient descent, our approach can enable the use of gradient descent through physics for highly complex deep neural network controllers with millions of parameters. Also note that by using a batch method, a single GPU can simulate about 864 000 model seconds per day, or 86 400 000 model states. This should be plenty for deep learning. It also means that a single simulation step of a single robot, which includes collision detection, solving the LCP problem, integrating the velocities and backpropagating the gradient through it all, takes about 1 ms on average. Without the backpropagation, this process is only about seven times faster."}, {"section_index": "6", "section_name": "3.2 4 DEGREE OF FREEDOM ROBOT ARM", "section_text": "As a first test of optimizing robot controllers, we implemented a four degree of freedom robotic arm, as depicted in Fig. 2c. The bottom of the robot has a 2 degree of freedom actuated universal joint; the elbow has a 2 degree of freedom actuated joint as well. The arm is 1 m long and has a total mass of 32 kg. The servos have a gain of 30 s^-1, a torque of 30 Nm and a velocity of 45 s^-1.

For this robot arm, we train controllers for tasks with a gradually increasing amount of difficulty. To be able to train our parameters, we have to use a couple of tricks often used in the training of recurrent neural networks:

- We choose an objective which is evaluated at every time step and then averaged, rather than at specific points of the simulation. This approach vastly increases the number of samples over which the gradient is averaged, which in turn makes the gradient direction more reliable (Sjoberg et al., 1995).
- The value of the gradient is decreased by a factor $\alpha < 1$ at every time step. This trick has the effect of a prior: events further in the past are less important for influencing current events, because intermediate events might diminish their influence altogether. It also improves robustness against exploding gradients (Hermans et al., 2014).
- We initialize the controller intelligently. We do not want the controller to shake the actuators violently and explore outside the accurate domain of our simulation model. Therefore our controllers are initialized such that they only output zeros at the start of the simulation; the initial policy is the zero policy.
- We constrain the size of the gradient to an L2-norm of 1. This makes sure that gradients close to discontinuities in the fitness landscape do not push the parameter values too far away, which would cause everything that was learned to be forgotten (Sutskever, 2013).
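Two of these tricks are short enough to spell out in code. The sketch below phrases them in PyTorch terms (the function names are ours; `torch.nn.utils.clip_grad_norm_` provides the gradient constraint out of the box):

```python
import torch

def init_zero_policy(controller):
    # Zero the output layer so that the initial policy outputs only zeros and
    # the optimization starts from standstill, inside the model's valid domain.
    last = controller[-1]            # assumes an nn.Sequential ending in nn.Linear
    torch.nn.init.zeros_(last.weight)
    torch.nn.init.zeros_(last.bias)

def train_step(controller, loss, opt):
    opt.zero_grad()
    loss.backward()
    # Constrain the joint gradient to an L2 norm of 1, so that steps taken near
    # discontinuities in the fitness landscape stay small.
    torch.nn.utils.clip_grad_norm_(controller.parameters(), max_norm=1.0)
    opt.step()
```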
"}, {"section_index": "7", "section_name": "3.2.1 REACHING A FIXED POINT", "section_text": "A first simple task is to have a small neural net controller learn to move the end effector to a certain fixed point in space, at coordinates (0.5 m, 0.5 m, 0.5 m). The objective we minimize for this task is the distance between the end effector and the target point, averaged over the 8 seconds we simulate our model.

As a controller, we use a dense neural network with 1 input, 2 hidden layers of 128 units with a rectifier activation function, and 4 outputs with an identity activation function. This controller has 17 284 parameters in total. We disabled the recurrent connections $h^t$.

We provide the controller with a single sensor input, namely the current distance between the end effector and the target point. Input is not required for this task, as there are solutions for which the motor signals are constant in time. However, this would not necessarily be the optimal approach for minimizing the average distance over time: it only solves the distance at the end of the simulation, and does not minimize the distance along the trajectory towards that final position.

We use gradient descent with a batch size of 1 robot for optimization, as the problem is not stochastic in nature. The parameters are optimized with Adam's rule (Kingma and Ba, 2014) with a learning rate of 0.001. Every update step with this method takes about 5 seconds on CPU. We find that the controller comes within 4 cm of the target in 100 model evaluations, and within 1 cm in 150 model evaluations, which is small compared to the 1 m arm of the robot. Moreover, the controller does find a more optimal trajectory, one which takes the sensor information into account.

Solving problems like these in fewer iteration steps than the number of parameters is unfeasible with derivative-free methods (Sjoberg et al., 1995). Despite that, we did try to optimize the same problem with CMA-ES. After a week of computing and 60 000 model evaluations, CMA-ES did not show any sign of convergence, as it cannot handle the sheer number of parameters."}, {"section_index": "8", "section_name": "3.2.2 REACHING A RANDOM POINT", "section_text": "As a second task, we sample a random target point in the reachable space of the end effector. We give this point as input $v'$ to the controller, and the task is again to minimize the average distance between the end effector and the target point $v'$. Our objective is this distance averaged over all timesteps.

As a controller, we use a dense neural network comparable to the one in the previous section, but this time with 3 inputs. We used 3 hidden layers with 1024 units each, so the controller has 2 107 396 parameters in total. This is not necessary for this task, but we do it to demonstrate the power of this approach. In order to train for this task, we use a batch size of 128 robots, such that every update step takes 58 s on GPU. Each simulation takes 8 s with a simulation step of 0.01 s. Therefore, the gradient on the parameters of the controllers has been averaged over 51 200 timesteps at every update step. We update the parameters with Adam's rule, where we scale the learning rate with the average error achieved in the previous step.
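A goal-conditioned variant of the earlier sketch captures this setup: the sampled target enters the controller as part of its input, in the spirit of the task parameters $p'$ from Section 3 (again with our own toy dynamics standing in for the engine):

```python
import torch
import torch.nn as nn

def f_ph(x, u, dt=0.01):
    # Same toy point-mass dynamics as before.
    pos, vel = x[:, 0], x[:, 1]
    vel = vel + dt * u.squeeze(-1)
    return torch.stack([pos + dt * vel, vel], dim=-1)

g_deep = nn.Sequential(nn.Linear(3, 1024), nn.ReLU(), nn.Linear(1024, 1))
opt = torch.optim.Adam(g_deep.parameters(), lr=1e-3)

for step in range(500):
    x = torch.zeros(128, 2)
    target = 2.0 * torch.rand(128, 1) - 1.0              # one random goal per robot
    loss = 0.0
    for t in range(100):
        u = g_deep(torch.cat([x, target], dim=-1))       # goal is part of the input
        x = f_ph(x, u)
        loss = loss + ((x[:, :1] - target) ** 2).mean()  # distance averaged over time
    opt.zero_grad()
    loss.backward()
    opt.step()
```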
We find that it takes 576 update steps before the millions of parameters are optimized such that the end effector of the robot is on average less than 10 cm from the target, and 2 563 update steps before the error is less than 5 cm.

Optimizing a gait for a quadrupedal robot is a problem of a different order, something the authors have extensive experience with (Sproewitz et al., 2013; Degrave et al., 2013; 2015). The problem is far more challenging and allows for a broad range of possible solutions. In nature, we find a wide variety of gaits, from hopping over trotting, to walking and galloping. With hand tuning on the robot model shown in Figure 2b, we were able to obtain a trotting motion with an average forward speed of 0.7 m/s. We found it tricky to find a gait where the robot did not end up like an upside-down turtle, as 75% of the mass of the robot is located in its torso.

As a controller for our quadrupedal robot, we use a neural network with 2 input signals $s^t$, namely a sine and a cosine signal with a frequency of 1.5 Hz. On top of this, we added 2 hidden layers of 128 units and a rectifier activation function. As output layer, we have a dense layer with 8 units and a linear activation function, which takes as input both the input layer and the top layer of the hidden layers. In total, this controller has 17 952 parameters. Since the problem is not stochastic in nature, we use a batch size of 1 robot. We initialize the output layer with zero weights, so the robot starts the optimization in a standstill position.

We optimize these parameters to maximize the average velocity of the spine over the course of 10 s of simulated time. This way, the gradient used in the update step is effectively an average over the 1 000 time steps after unrolling the recurrent connections. This objective does not take into account energy use, or other metrics typically employed in robotic problems.

In only 500 model evaluations, or about 1 hour of optimizing on CPU, the optimization with BPTT comes up with a solution with a speed of 1.17 m/s. This solution is a hopping gait with a somersault every 3 steps, despite limiting the torque of the servos to 4 Nm on this 28.7 kg robot. For more life-like gaits, energy efficiency could be used as a regularization method. Evaluating these improvements is however outside the scope of this paper.

As a fourth example, we implemented a model of the pendulum-cart system we have in our laboratory. This pendulum-cart system is used for the classic control task of the underactuated inverted pendulum (Vaccaro, 1995). In this example however, a camera which is set up in front of the system is the only information available to the controller. The controller therefore has to observe the system it controls using vision. A frame captured by this camera is shown in Figure 3.

In order to build this model, we implemented a differentiable camera in our physics engine. This camera uses a ray-tracing approach to find where it needs to sample from the textures, and uses bilinear interpolation to sample from these textures, similar to the interpolation used in the spatial transformer layer (Jaderberg et al., 2015). This interpolation is necessary for making the frame captured by the camera differentiable with respect to the state of the robot, with non-zero derivatives.
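The differentiable part of such a camera is easiest to see in the sampling step. Below is our own minimal PyTorch version of bilinear texture sampling (coordinates are assumed to stay in bounds); it is the same interpolation idea as in the spatial transformer layer cited above:

```python
import torch

def bilinear_sample(texture, u, v):
    """Differentiably sample texture[H, W] at fractional coordinates (u, v).

    Bilinear interpolation keeps the result differentiable with respect to the
    sampling coordinates (and hence to the simulated state that produced them),
    which a nearest-neighbour lookup would not."""
    u0, v0 = u.floor().long(), v.floor().long()
    u1, v1 = u0 + 1, v0 + 1
    wu, wv = u - u0.float(), v - v0.float()
    return ((1 - wu) * (1 - wv) * texture[v0, u0]
            + wu * (1 - wv) * texture[v0, u1]
            + (1 - wu) * wv * texture[v1, u0]
            + wu * wv * texture[v1, u1])
```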
Figure 3: A frame captured by the differentiable camera looking at the model of the pendulum-cart system. The resolution used is 288 by 96 pixels. All the textures are made from pictures of the actual system.

We minimize the distance from the end of the pendulum to the desired point and regularize the speed of the pendulum. The memoryless deep controller receives the current image of the camera, in addition to two images from the past, such that it can estimate velocity and acceleration. We observe that a controller with 1,065,888 parameters is able to learn to swing up and keep the pendulum stable after only 2420 episodes of 3 model seconds. The complete optimization process took 15 hours. The resulting controller keeps the pendulum stable for more than one minute. In order to do this, the controller has learned to interpret the frames it receives from the camera and has found a suitable control strategy."}, {"section_index": "9", "section_name": "4 DISCUSSION", "section_text": "Our results show the first prototype of a differentiable physics engine based on similar algorithms as those commonly used in modern robotics simulators. When initially addressing the problem, we did not know whether finding the gradient would be computationally tractable, let alone whether evaluating it would be fast enough to be beneficial for optimization. In this paper, we have demonstrated that evaluating the gradient is tractable enough to speed up optimization on problems with as few as six parameters. The speed of this evaluation mainly depends on the complexity of the physics model and only slightly on the number of parameters to optimize. Therefore, our results suggest that this cost is dominated by the gains achieved from combining batch gradient descent with GPU acceleration.

Optimizing the controller of a robot model with gradient-based optimization is equivalent to optimizing an RNN. After all, the gradient passes through each parameter at every time step, and the parameter space is therefore very noisy. Consequently, training the parameters of this controller is a highly non-trivial problem, as it corresponds to training the parameters of an RNN. On top of that, exploding and vanishing signals and gradients cause far more challenging problems compared to feed-forward networks.

In Section 3.2 we already discussed some of the tricks used for optimizing RNNs. Earlier research shows that these methods can be extended to more complicated tasks than the ones discussed here (Hermans et al., 2014; Sutskever, 2013). Hence, we believe that this approach towards learning controllers for robotics applies to more complex problems than the illustrative examples in this paper.

All of the results in this paper will largely depend on showing how these controllers work on the physical counterparts of our models. Nonetheless, we would like to conjecture that, to a certain extent, the gradient of a model is close to the gradient of the physical system. The gradient of the model is more susceptible to high-frequency noise introduced by modeling the system than the imaginary gradient of the system itself. Nonetheless, it contains information which might be indicative, even if it is not perfect. We would theorize that using this noisy gradient is still better than optimizing in the blind, and that the transferability to real robots can be improved by evaluating the gradients on batches of (slightly) different robots in (slightly) different situations and averaging the results. This technique has already been applied in (Hermans et al., 2014) as a regularization method to avoid bifurcations during online learning.
If the previous proves to be correct, our approach can offer an addition or possibly even an alternative to deep Q-learning for deep neural network controllers in robotics."}, {"section_index": "10", "section_name": "5 CONCLUSION", "section_text": "In this paper, we show that it is possible to build a differentiable physics engine. We implemented a modern engine which can run a 3D rigid body model, using the same algorithm as other engines commonly used to simulate robots, but we can additionally differentiate control parameters with BPTT. Our implementation also runs on GPU, and we show that using GPUs to simulate the physics can speed up the process for large batches of robots. We show that even complex sensors, such as cameras, can be implemented and differentiated through, allowing computer vision to be learned together with a control policy.

We can see the use of this extended approach for a broad range of applications in robotics. Not only do we think there are multiple ways in which recent advances in deep learning could be applied to robotics more efficiently with a differentiable physics engine, we also see various ways in which this engine could improve the existing angles from which robotics is currently approached:

- In this paper, we added memory by introducing recurrent connections in the neural network controller. We reckon that advanced recurrent connections, such as ones with a memory made out of LSTM cells (Hochreiter and Schmidhuber, 1997), can allow for more powerful controllers than the controllers described in this paper.
- Using a differentiable physics engine, we reckon that knowledge of a model can be transferred more efficiently into a forward or backward model in the form of a neural network, similar to methods such as those used in Johnson et al. (2016) and Dumoulin et al. (2016). By differentiating through an exact model and defining a relevant error on this model, it should be possible to transfer knowledge from a forward or backward model in the differentiable physics engine to a forward or backward neural network model. Neural network models trained this way might be more robust than the ones learned from generated trajectories (Christiano et al., 2016). In turn, this neural model could then be used for faster but approximate evaluation of the model.
- Although we did not address this in this paper, there is no reason why only control parameters could be differentiated. Hardware parameters of the robot have been optimized the same way before (Jarny et al., 1991; Iollo et al., 2001; Hermans et al., 2014). The authors reckon that the reverse process is also true: a physics engine can provide a strong prior, which can be used for robots to learn (or adjust) their robot models based on their hardware measurements faster than today. One could optimize the model parameters with gradient descent through physics, to have the model better mimic the actual observations.
- Where adversarial networks are already showing their use in generating image models, we believe adversarial robotics training (ART) will create some inventive ways to design and control robots. Like in generative adversarial nets (GAN) (Goodfellow et al., 2014), where the gradient is pulled through two competing neural networks, the gradient could be pulled through multiple competing robots as well. It would form an interesting approach for swarm robotics, similar to previous results in evolutionary robotics (Sims, 1994; Pfeifer and Bongard, 2006; Cheney et al., 2014), but possibly faster.

We find that these gradients can be computed fast enough for use in applications.
We also show that using gradient descent with BPTT speeds up optimization processes often found in robotics, even for rather small problems, due to the reduced number of model evaluations required. We show that this improvement in speed scales to problems with many parameters. We also show that, using this engine, finding policies for robot models can be done faster and in a more straightforward way. This method should allow for a new approach to applying deep learning techniques in robotics."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "Special thanks to David Pfau for pointing out relevant prior art we were previously unaware of, and to Iryna Korshunova for proofreading the paper. The research leading to these results has received funding from the Agency for Innovation by Science and Technology in Flanders (IWT). The NVIDIA Corporation donated the GTX 1080 used for this research.

Abadi, M., Agarwal, A., Barham, P., et al. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.

Al-Rfou, R., Alain, G., Almahairi, A., et al. (2016). Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688.

Catto, E. (2009). Modeling and solving constraints. In Game Developers Conference.

Chappuis, D. (2013). Constraints derivation for rigid body simulation in 3D.

Degrave, J., Burm, M., Kindermans, P.-J., Dambre, J., and wyffels, F. (2015). Transfer learning of gaits on a quadrupedal robot. Adaptive Behavior, page 1059712314563620.

Dumoulin, V., Shlens, J., and Kudlur, M. (2016). A learned representation for artistic style. CoRR, abs/1610.07629.

Hansen, N. (2006). The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation, pages 75-102. Springer Berlin Heidelberg.

Hermans, M., Schrauwen, B., Bienstman, P., and Dambre, J. (2014). Automated design of complex dynamic systems. PloS ONE, 9(1):e86696.

Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735-1780.

Hubbard, P. M. (1996). Approximating polyhedra with spheres for time-critical collision detection. ACM Transactions on Graphics (TOG), 15(3):179-210.

Iollo, A., Ferlauto, M., and Zannetti, L. (2001). An aerodynamic optimization method based on the inverse problem adjoint equations. Journal of Computational Physics, 173(1):87-115.

Jaderberg, M., Simonyan, K., Zisserman, A., et al. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017-2025.

Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155.

Jourdan, F., Alart, P., and Jean, M. (1998). A Gauss-Seidel like algorithm to solve frictional contact problems. Computer Methods in Applied Mechanics and Engineering, 155(1):31-47.

Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR).
Levine, S. and Koltun, V. (2013). Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems, pages 207-215.

Levine, S., Pastor, P., Krizhevsky, A., and Quillen, D. (2016). Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. arXiv preprint arXiv:1603.02199.

Mirtich, B. (1998). V-Clip: Fast and robust polyhedral collision detection. ACM Transactions on Graphics (TOG), 17(3):177-208.

Mordatch, I. and Todorov, E. (2014). Combining the benefits of function approximation and trajectory optimization. In Robotics: Science and Systems (RSS).

Pfeifer, R. and Bongard, J. (2006). How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press.

Premerlani, W. and Bizard, P. (2009). Direction cosine matrix IMU: Theory. DIY DRONE: USA, pages 13-15.

Sims, K. (1994). Evolving 3D morphology and behavior by competition. Artificial Life, 1(4):353-372.

Sjoberg, J., Zhang, Q., Ljung, L., Benveniste, A., Delyon, B., Glorennec, P.-Y., Hjalmarsson, H., and Juditsky, A. (1995). Nonlinear black-box modeling in system identification: a unified overview. Automatica, 31(12):1691-1724.

Sproewitz, A., Tuleu, A., D'Haene, M., Mockel, R., Degrave, J., Vespignani, M., Gay, S., Ajallooeian, M., Schrauwen, B., and Ijspeert, A. J. (2013). Towards dynamically running quadruped robots: performance, scaling, and comparison. In Adaptive Motion of Animals and Machines, pages 133-135.

Stewart, D. and Trinkle, J. C. (2000). An implicit time-stepping scheme for rigid body dynamics with Coulomb friction. In International Conference on Robotics and Automation (ICRA), volume 1, pages 162-169. IEEE.

Sutskever, I. (2013). Training Recurrent Neural Networks. PhD thesis, University of Toronto.

Todorov, E., Erez, T., and Tassa, Y. (2012). MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033. IEEE.

Vaccaro, R. J. (1995). Digital Control: A State-Space Approach, volume 196. McGraw-Hill, New York.
APPENDIX 1: FIGURES

Figure 4: Illustration of the dynamic system with the robot and controller, after unrolling over time. The neural networks $g_{deep}$ and $h_{deep}$ with weights $W$ receive sensor signals $s^t$ from the sensors on the robot and use these to generate motor signals $u^t$, which are used by the physics engine $f_{ph}$ to find the next state of the robot in the physical system. These neural networks also have a memory, implemented with recurrent connections $h^t$. From the state $x^t$ of these robots, the loss $L$ can be found. In order to find $dL/dW$, every block in this chart needs to be differentiable. The contribution of this paper is to implement a differentiable $f_{ph}$, which allows us to optimize $W$ to minimize $L$ more efficiently than was possible before.

Table 1: Evaluation of the computing speed of our engine on a robot model controlled by a closed-loop controller with a variable number of parameters. We evaluated both on CPU (i7 5930K) and GPU (GTX 1080), both for a single robot optimization and for batches of multiple robots in parallel. The numbers are the time required in seconds for simulating the quadruped robot(s) for 10 s, with and without updating the controller parameters through gradient descent. The gradient calculated here is the Jacobian of the total traveled distance of the robot in 10 s, differentiated with respect to all the parameters of the controller. For comparison, the model has 102 states. It is built from 17 rigid bodies, each having 6 degrees of freedom. These states are constrained by exactly 100 constraints.

Seconds of computing time required to simulate a batch of robots for 10 seconds.

Milliseconds of computing time required to perform one time step of one robot."}]
HkCjNI5ex
[{"section_index": "0", "section_name": "REGULARIZING NEURAL NETWORKS BY PENALIZING CONFIDENT OUTPUT DISTRIBUTIONS", "section_text": "Gabriel Pereyra\nGeorge Tucker\nGoogle Brain\ngjt@google.com\npereyra@google.com\nlukaszkaiser@google.com\nWe systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide ap plicability of these regularizers."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Large neural networks with millions of parameters achieve strong performance on image classifica. tion (Szegedy et al.]|2015a), machine translation (Wu et al.]2016), language modeling (Jozefowicz et al. 2016), and speech recognition (Graves et al.[2013). However, despite using large datasets. neural networks are still prone to overfitting. Numerous techniques have been proposed to prevent overfitting, including early stopping, L1/L2 regularization (weight decay), dropout (Srivastava et al. 2014), and batch normalization (Ioffe & Szegedy 2015). These techniques, along with most other. forms of regularization, act on the hidden activations or weights of a neural network. Alternatively regularizing the output distribution of large, deep neural networks has largely been unexplored..\nJan Chorowski\nGoogle Brain\nchorowski@google.com\nUniversity of Toronto & Google Brain"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "To motivate output regularizers, we can view the knowledge of a model as the conditional distribu- tion it produces over outputs given an input (Hinton et al.]2015) as opposed to the learned values. of its parameters. Given this functional view of knowledge, the probabilities assigned to class labels. that are incorrect (according to the training data) are part of the knowledge of the network. For. example, when shown an image of a BMW, a network that assigns a probability of 10-3 to \"Audi'. and 10-9 to \"carrot' is clearly better than a network that assigns 10-9 to \"Audi\"' and 10-3 to carrot,. all else being equal. One reason it is better is that the probabilities assigned to incorrect classes are. an indication of how the network generalizes. Distillation (Hinton et al.2015) Bucilu et al.2006) exploits this fact by explicitly training a small network to assign the same probabilities to incorrect classes as a large network or ensemble of networks that generalizes well. Further, by operating on. the output distribution that has a natural scale rather than on internal weights, whose significance depends on the values of the other weights, output regularization has the property that it is invariant to the parameterization of the underlying neural network..\nIn this paper, we systematically evaluated two output regularizers: a maximum entropy based confi. 
The maximum entropy principle (Jaynes, 1957) has a long history with deep connections to many areas of machine learning, including unsupervised learning, supervised learning, and reinforcement learning. In supervised learning, we can search for the model with maximum entropy subject to constraints on empirical statistics, which naturally gives rise to maximum likelihood in log-linear models (see Berger et al. (1996) for a review). Deterministic annealing (Rose, 1998) is a general approach for optimization that is widely applicable, avoids local minima, and can minimize discrete objectives, and it can be derived from the maximum entropy principle. Closely related to our work, Miller et al. (1996) apply deterministic annealing to train multilayer perceptrons, where an entropy-based regularizer is introduced and slowly annealed. However, their focus is avoiding poor initialization and local minima, and while they find that deterministic annealing helps, the improvement diminishes quickly as the number of hidden units exceeds eight.

In reinforcement learning, encouraging the policy to have an output distribution with high entropy has been used to improve exploration (Williams & Peng, 1991). This prevents the policy from converging early and leads to improved performance (Mnih et al., 2016). Penalizing low entropy has also been used when combining reinforcement learning and supervised learning to train a neural speech recognition model to learn when to emit tokens (Luo et al., 2016). When learning to emit, the entropy of the emission policy was added to the training objective and was annealed throughout training. Indeed, in recent work on reward augmented maximum likelihood (Norouzi et al., 2016), this entropy-augmented reinforcement learning objective played a direct role in linking maximum likelihood and reinforcement learning objectives.

Penalizing the entropy of a network's output distribution has not been evaluated for large deep neural networks in supervised learning, but a closely related idea, label smoothing regularization, has been shown to improve generalization (Szegedy et al., 2015b). Label smoothing regularization estimates the marginalized effect of label-dropout during training, reducing overfitting by preventing a network from assigning full probability to each training example and maintaining a reasonable ratio between the logits of the incorrect classes. Simply adding label noise has also been shown to be effective at regularizing neural networks (Xie et al., 2016). Instead of smoothing the labels with a uniform distribution, as in label smoothing, we can smooth the labels with a teacher model (Hinton et al., 2015) or with the model's own distribution (Reed et al., 2014). Distillation and self-distillation both regularize a network by incorporating information about the ratios between incorrect classes.

Virtual adversarial training (VAT) (Miyato et al., 2015) is another promising smoothing regularizer. However, we did not compare to VAT because it has multiple hyperparameters, and the approximated gradient of the local distributional smoothness can be computed with no more than three pairs of forward and back propagations, which is significantly more computation in grid-searching and training than the other approaches we compared to.
Confident predictions correspond to output distributions that have low entropy. A network is overconfident when it places all probability on a single class in the training set, which is often a symptom of overfitting (Szegedy et al., 2015b). The confidence penalty constitutes a regularization term that prevents these peaked distributions, leading to better generalization.

A neural network produces a conditional distribution $p_\theta(y|x)$ over classes $y$ given an input $x$ through a softmax function. The entropy of this conditional distribution is given by

$$H(p_\theta(y|x)) = -\sum_i p_\theta(y_i|x) \log(p_\theta(y_i|x))$$

Figure 1: Distribution of the magnitude of softmax probabilities on the MNIST validation set. A fully-connected, 2-layer, 1024-unit neural network was trained with dropout (left), label smoothing (center), and the confidence penalty (right). Dropout leads to a softmax distribution where probabilities are either 0 or 1. By contrast, both label smoothing and the confidence penalty lead to smoother output distributions, which results in better generalization.

To penalize confident output distributions, we add the negative entropy to the negative log-likelihood during training:

$$\mathcal{L}(\theta) = -\sum \log p_\theta(y|x) - \beta H(p_\theta(y|x))$$

where $\beta$ controls the strength of the confidence penalty. Notably, the gradient of the entropy term with respect to the logits is simple to compute. Denoting the $i$th logit by $z_i$, then

$$\frac{\partial H(p_\theta)}{\partial z_i} = p_\theta(y_i|x)\left(-\log p_\theta(y_i|x) - H(p_\theta)\right)$$

which is the weighted deviation from the mean.

In reinforcement learning, penalizing low entropy distributions prevents a policy network from converging early and encourages exploration. However, in supervised learning, we typically want quick convergence, while preventing overfitting near the end of training, suggesting a confidence penalty that is weak at the beginning of training and strong near convergence. A simple way to achieve this is to anneal the confidence penalty.

Another way to strengthen the confidence penalty as training progresses is to only penalize output distributions when they are below a certain entropy threshold. We can achieve this by adding a hinge loss to the confidence penalty, leading to an objective of the form

$$\mathcal{L}(\theta) = -\sum \log p_\theta(y|x) - \beta \max(0,\, \Gamma - H(p_\theta(y|x)))$$

where $\Gamma$ is the entropy threshold below which we begin applying the confidence penalty.

Initial experiments suggest that thresholding the confidence penalty leads to faster convergence at the cost of introducing an extra hyper-parameter. For the majority of our experiments, we were able to achieve comparable performance without using the thresholded version. For the sake of simplicity, we focus on the single hyper-parameter version in our experiments.
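In code, the penalty is a short addition on top of the usual cross-entropy. The sketch below is a minimal PyTorch version of both the plain and the thresholded objective (the naming is ours; the experiments in this paper used TensorFlow):

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, labels, beta=1.0, threshold=None):
    """Negative log-likelihood minus beta times the output entropy."""
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, labels)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)   # H(p_theta(y|x)), per example
    if threshold is None:
        # L = NLL - beta * H: low-entropy (confident) outputs are penalized.
        return nll - beta * entropy.mean()
    # Hinge variant: only penalize examples whose entropy is below Gamma.
    return nll + beta * torch.clamp(threshold - entropy, min=0.0).mean()
```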
Label smoothing estimates the marginalized effect of label noise during training. When the prior label distribution is uniform, label smoothing is equivalent to adding the KL divergence between the uniform distribution $u$ and the network's predicted distribution $p_\theta$ to the negative log-likelihood:

$$\mathcal{L}(\theta) = -\sum \log p_\theta(y|x) - D_{KL}(u \,\|\, p_\theta(y|x))$$

By reversing the direction of the KL divergence, $D_{KL}(p_\theta(y|x) \,\|\, u)$, we recover the confidence penalty. This interpretation suggests further confidence regularizers that use alternative target distributions instead of the uniform distribution. We leave the exploration of these regularizers to future work.
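In the same sketch style (our naming, uniform prior), label smoothing then differs from the confidence penalty essentially by the direction of the KL term:

```python
import torch.nn.functional as F

def label_smoothing_loss(logits, labels, epsilon=0.1):
    """Cross-entropy against targets smoothed towards the uniform distribution.

    Up to an additive constant, this adds D_KL(u || p_theta) to the negative
    log-likelihood, i.e. the reversed KL direction of the confidence penalty."""
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, labels)
    uniform_term = -log_p.mean(dim=-1)          # cross-entropy with u
    return (1.0 - epsilon) * nll + epsilon * uniform_term.mean()
```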
"}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "We evaluated the confidence penalty and label smoothing on MNIST and CIFAR-10 for image classification, Penn Treebank for language modeling, WMT'14 English-to-German for machine translation, and TIMIT and WSJ for speech recognition. All models were implemented using TensorFlow (Abadi et al., 2016) and trained on NVIDIA Tesla K40 or K80 GPUs.

As a preliminary experiment, we evaluated the approaches on the standard MNIST digit recognition task. We used the standard split into 60k training images and 10k testing images. We used the last 10k images of the training set as a held-out validation set for hyper-parameter tuning and then retrained the models on the entire dataset with the best configuration.

We trained fully-connected neural networks with ReLU activations, 1024 units per layer and two hidden layers. Weights were initialized from a normal distribution with standard deviation 0.01. Models were optimized with stochastic gradient descent with a constant learning rate of 0.05 (except for dropout, where we set the learning rate to 0.001).

We also plotted the norm of the gradient as training progressed in Figure 2. We observed that label smoothing and the confidence penalty had smaller gradient norms and converged more quickly than models regularized with dropout. If the output distribution is peaked on a misclassified example, the model receives a large gradient. This may explain why the regularized models have smaller gradient norms.

Model | Layers | Size | Test
Wan et al. (2013) - Unregularized | 2 | 800 | 1.40%
Srivastava et al. (2014) - Dropout | 3 | 1024 | 1.25%
Wan et al. (2013) - DropConnect | 2 | 800 | 1.20%
Srivastava et al. (2014) - MaxNorm + Dropout | 2 | 8192 | 0.95%
Dropout | 2 | 1024 | 1.28 ± 0.06%
Label Smoothing | 2 | 1024 | 1.23 ± 0.06%
Confidence Penalty | 2 | 1024 | 1.17 ± 0.06%

Table 1: Test error (%) for permutation-invariant MNIST.

For label smoothing, we varied the smoothing parameter in the range [0.05, 0.1, 0.2, 0.3, 0.4, 0.5] and found 0.1 to work best for both methods. For the confidence penalty, we varied the weight values over [0.1, 0.3, 0.5, 1.0, 2.0, 4.0, 8.0] and found a confidence penalty weight of 1.0 to work best.

CIFAR-10 is an image classification dataset consisting of 32x32x3 RGB images of 10 classes. The dataset is split into 50k training images and 10k testing images. We used the last 5k images of the training set as a held-out validation set for hyper-parameter tuning, as is common practice.

For our experiments, we used a densely connected convolutional neural network, which represents the current state-of-the-art on CIFAR-10 (Huang et al., 2016a). We used the small configuration from Huang et al. (2016a), which consists of 40 layers with a growth rate of 12. All models were trained for 300 epochs, with a batch size of 50 and a learning rate of 0.1. The learning rate was reduced by a factor of 10 at 150 and 225 epochs. We present results for training without data augmentation. We found that the confidence penalty did not lead to improved performance when training with data augmentation; however, neither did other regularization techniques, including dropout.

For our final test scores, we trained on the entire training set. For label smoothing, we tried smoothing parameter values of [0.05, 0.1, 0.2, 0.3, 0.4, 0.5] and found 0.1 to work best. For the confidence penalty, we performed a grid search over confidence penalty weight values of [0.1, 0.25, 0.5, 1.0, 1.5] and found a confidence penalty weight of 0.1 to work best.

Model | Layers | Parameters | Test
He et al. (2015) - Residual CNN | 110 | 1.7M | 13.63%
Huang et al. (2016b) - Stochastic Depth Residual CNN | 110 | 1.7M | 11.66%
Larsson et al. (2016) - Fractal CNN | 21 | 38.6M | 10.18%
Larsson et al. (2016) - Fractal CNN (Dropout) | 21 | 38.6M | 7.33%
Huang et al. (2016a) - Densely Connected CNN | 40 | 1.0M | 7.00%
Huang et al. (2016a) - Densely Connected CNN | 100 | 7.0M | 5.77%
Densely Connected CNN (Dropout) | 40 | 1.0M | 7.04%
Densely Connected CNN (Dropout + Label Smoothing) | 40 | 1.0M | 6.89%
Densely Connected CNN (Dropout + Confidence Penalty) | 40 | 1.0M | 6.77%

Table 2: Test error (%) on Cifar-10 without data augmentation.

For language modeling, we found that the confidence penalty significantly outperforms label noise and label smoothing. We performed word-level language modeling experiments using the Penn Treebank dataset (PTB) (Marcus et al., 1993). We used the hyper-parameter settings from the large configuration in Zaremba et al. (2014). Briefly, we used a 2-layer, 1500-unit LSTM, with 65% dropout applied on all non-recurrent connections. We trained using stochastic gradient descent for 55 epochs, decaying the learning rate by 1.15 after 14 epochs, and clipped the norm of the gradients when they were larger than 10.

For reference, we also include results of the existing state-of-the-art models for the word-level language modeling task on PTB. Variational dropout (Gal, 2015) applies a fixed dropout mask (stochastic for each sample) at each time-step, instead of resampling at each time-step as in traditional dropout. Note that we do not include the variational dropout results that use Monte Carlo (MC) model averaging, which achieves lower perplexity on the test set but requires 1000 model evaluations, which are then averaged. Recurrent highway networks (Zilly et al., 2016) currently represent the state-of-the-art performance on PTB.

For label noise and label smoothing, we performed a grid search over noise and smoothing values of [0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5]. For label noise, we found 0.1 to work best. For label smoothing, we found 0.1 to work best. For the confidence penalty, we performed a grid search over confidence penalty weight values of [0.1, 0.5, 1.0, 2.0, 3.0]. We found a confidence penalty weight of 2.0 to work best, which led to an improvement of 3.7 perplexity points over the baseline.

Model | Parameters | Validation | Test
Zaremba et al. (2014) - Regularized LSTM | 66M | 82.2 | 78.4
Gal (2015) - Variational LSTM | 66M | 77.9 | 75.2
Press & Wolf (2016) - Tied Variational LSTM | 51M | 79.6 | 75.0
Merity et al. (2016) - Pointer Sentinel LSTM | 21M | 72.4 | 70.9
Zilly et al. (2016) - Variational RHN | 32M | 71.2 | 68.5
Zilly et al. (2016) - Tied Variational RHN | 24M | 68.1 | 66.0
Regularized LSTM (label noise) | 66M | 79.7 | 77.7
Regularized LSTM (label smoothing) | 66M | 78.9 | 76.6
Regularized LSTM (unigram smoothing) | 66M | 79.1 | 76.3
Regularized LSTM (confidence penalty) | 66M | 77.8 | 74.7

Table 3: Validation and test perplexity for word-level Penn Treebank.
"}, {"section_index": "4", "section_name": "4.3 MACHINE TRANSLATION", "section_text": "For machine translation, we evaluated the confidence penalty on the WMT'14 English-to-German translation task using Google's production-level translation system (Wu et al., 2016). The training set consists of 5M sentence pairs, and we used newstest2012 and newstest2013 for validation and newstest2014 for testing. We report tokenized BLEU scores as computed by the multi-bleu.perl script from the Moses machine translation package.

Our model was an 8-layer sequence-to-sequence model with attention (Bahdanau et al., 2014). The first encoder layer was a bidirectional LSTM, the remaining encoder and decoder layers were unidirectional LSTMs, and the attention network was a single-layer feed-forward network. Each layer had 512 units (compared to 1024 in Wu et al. (2016)). The model was trained using 12 replicas running concurrently with asynchronous updates. Dropout of 30% was applied as described in Zaremba et al. (2014). Optimization used a mix of Adam and SGD with gradient clipping. Unlike Wu et al. (2016), we did not use reinforcement learning to fine-tune our model. We used a beam size of 12 during decoding. For more details, see Wu et al. (2016).

For label smoothing, we performed a grid search over values [0.05, 0.1, 0.2, 0.3, 0.4, 0.5] and found 0.1 to work best for both uniform and unigram smoothing. For the confidence penalty, we searched over values of [0.5, 2.5, 4.5] and found a value of 2.5 to work best. For machine translation, we found that label smoothing slightly outperformed the confidence penalty. When applied without dropout, both lead to an improvement of just over 1 BLEU point (dropout leads to an improvement of just over 2 BLEU points). However, when combined with dropout, the effect of both regularizers was diminished.

Model | Parameters | Validation | Test
Buck et al. (2014) - PBMT | - | - | 20.7
Cho et al. (2015) - RNNSearch | - | - | 16.9
Zhou et al. (2016) - Deep-Att | - | - | 20.6
Luong et al. (2015) - P-Attention | 164M | - | 20.9
Wu et al. (2016) - WPM-16K | 167M | - | 24.4
Wu et al. (2016) - WPM-32K | 278M | - | 24.6
WPM-32K (without dropout) | 94M | 22.33 | 21.24
WPM-32K (label smoothing) | 94M | 23.85 | 22.42
WPM-32K (confidence penalty) | 94M | 23.25 | 22.52
WPM-32K (dropout) | 94M | 24.1 ± 0.1 | 23.41 ± 0.04
WPM-32K (dropout + label smoothing) | 94M | 24.3 ± 0.1 | 23.52 ± 0.03
WPM-32K (dropout + unigram smoothing) | 94M | 24.3 ± 0.1 | 23.57 ± 0.02
WPM-32K (dropout + confidence penalty) | 94M | 24.3 ± 0.1 | 23.4 ± 0.1

Table 4: Validation and test BLEU for WMT'14 English-to-German. For the last four model configurations, we report the mean and standard error of the mean (SEM) over 5 random initializations."}, {"section_index": "5", "section_name": "4.4 SPEECH RECOGNITION", "section_text": "In the TIMIT corpus, the training set consists of 3512 utterances, the validation set consists of 184 utterances and the test set consists of 192 utterances. All 61 phonemes were used during training and decoding, and during scoring, these 61 phonemes were reduced to 39 to compute the phoneme error rate (PER).

As our base model, we used a sequence-to-sequence model with attention. The encoder consisted of 3 bidirectional LSTM layers, the decoder consisted of a single unidirectional LSTM layer, and the attention network consisted of a single-layer feed-forward network. All layers consisted of 256 units. Dropout of 15% was applied as described in Zaremba et al. (2014). We trained the model with asynchronous SGD with 5 replicas. We used a batch size of 32, a learning rate of 0.01, and momentum of 0.9. Gradients were clipped at 5.0. For more details, see Norouzi et al. (2016).

For label smoothing, we performed a grid search over values [0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8] and found 0.2 to work best. For the confidence penalty, we performed a grid search over values of [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0] and found a value of 1.0 to work best. Label smoothing led to an absolute improvement over the dropout baseline of 1.6%, while the confidence penalty led to an absolute improvement of 1.2%.

Model | Parameters | Validation | Test
Mohamed et al. (2012) - DNN-HMM | - | - | 20.7
Norouzi et al. (2016) - RML | 6.5M | 18.0 | 19.9
Graves et al. (2006) - CTC | 6.8M | - | 18.4
Graves et al. (2013) - RNN Transducer | 4.3M | - | 17.7
Toth (2014) - CNN | - | 13.9 | 16.7
Dropout | 6.5M | 21.0 ± 0.1 | 23.2 ± 0.4
Dropout + Label Smoothing | 6.5M | 19.3 ± 0.1 | 21.6 ± 0.2
Dropout + Confidence Penalty | 6.5M | 19.9 ± 0.2 | 22.0 ± 0.4

Table 5: Validation and test phoneme error rates (PER) for TIMIT. We report the mean and SEM over 5 random initializations.
"}, {"section_index": "6", "section_name": "4.4.2 WALL STREET JOURNAL", "section_text": "For the WSJ corpus we used attention-based sequence-to-sequence networks that directly predicted characters. We used the SI284 subset for training, DEV93 for validation, and EVAL92 for testing. We used 240-dimensional vectors consisting of 80-bin filterbank features augmented with their deltas and delta-deltas, with per-speaker normalized means and variances computed with Kaldi (Povey et al., 2011). We did not use text-only data or separate language models during decoding.

Network architecture details were as follows. The encoder of the network consisted of 4 bidirectional LSTM layers each having 256 units, interleaved with 3 time-subsampling layers configured to drop every second frame (Bahdanau et al., 2016; Chan et al., 2015). The decoder used a single LSTM layer with 256 units. The attention vectors were computed with a single-layer feedforward network having 64 hidden units and the convolutional filters as described in Chorowski et al. (2015). Weights were initialized from a uniform distribution [-0.075, 0.075]. All models used weight decay of 10^-6, additive Gaussian weight noise with standard deviation 0.075, applied after 20K steps, and were trained for 650K steps. We used the ADAM optimizer asynchronously over 8 GPUs. We used a learning rate of 10^-3, which was reduced to 10^-4 after 400K and 10^-5 after 500K steps.

We tested three methods of increasing the entropy of outputs: the confidence penalty and two variants of label smoothing, uniform and unigram. All resulted in improved Word Error Rates (WER); however, the unigram smoothing resulted in the greatest WER reduction, and we found it to be least sensitive to its hyperparameter (the smoothing value). Furthermore, uniform smoothing and the confidence penalty required masking network outputs corresponding to tokens that never appeared as labels, such as the start-of-sequence token.
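For completeness, here is a sketch of the unigram variant in the same style (our naming; `unigram` would be the empirical distribution of training labels, a bookkeeping assumption rather than part of this paper's code):

```python
import torch.nn.functional as F

def unigram_smoothing_loss(logits, labels, unigram, epsilon=0.2):
    """Cross-entropy against targets smoothed towards the label unigram
    distribution; `unigram` is a length-V tensor of empirical label
    frequencies summing to one."""
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, labels)
    prior_term = -(unigram * log_p).sum(dim=-1)   # cross-entropy with the prior
    return (1.0 - epsilon) * nll + epsilon * prior_term.mean()
```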
Table 6 compares the performance of the regularized networks with several recent results. We observe that the benefits of label smoothing (WER reduction from 14.2 to 11.0) improve over the recently proposed Latent Sequence Decompositions (LSD) method (Chan et al., 2016), which reduces the WER from 14.7 to 12.9 by extending the space of output tokens to dynamically chosen character n-grams.

Model | Parameters | Validation | Test
Graves & Jaitly (2014) - CTC | 26.5M | - | 27.3
Bahdanau et al. (2016) - seq2seq | 5.7M | - | 18.6
Chan et al. (2016) - Baseline | 5.1M | - | 14.7
Chan et al. (2016) - LSD | 5.9M | - | 12.9
Baseline | 6.6M | 17.9 | 14.2
Uniform Label Smoothing | 6.6M | 14.7 | 11.3
Unigram Label Smoothing | 6.6M | 14.0 ± 0.25 | 11.0 ± 0.35
Confidence Penalty | 6.6M | 17.2 | 12.7

Table 6: Validation and test word error rates (WER) for WSJ. For Baseline, Uniform Label Smoothing and Confidence Penalty we report the average over two runs. For the best setting (Unigram Label Smoothing), we report the average over 6 runs together with the standard deviation."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "Motivated by recent successes of output regularizers (Szegedy et al., 2015b; Xie et al., 2016), we conduct a systematic evaluation of two output regularizers: the confidence penalty and label smoothing. We show that this form of regularization, which has been shown to improve exploration in reinforcement learning, also acts as a strong regularizer in supervised learning. We find that both the confidence penalty and label smoothing improve a wide range of state-of-the-art models, without the need to modify hyper-parameters."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Sergey Ioffe, Alex Alemi and Navdeep Jaitly for helpful discussions. We would also like to thank Prajit Ramachandran, Barret Zoph, Mohammad Norouzi, and Yonghui Wu for technical help with the various models used in our experiments. We thank the anonymous reviewers for insightful comments.

Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71, 1996.

William Chan, Yu Zhang, Quoc Le, and Navdeep Jaitly. Latent sequence decompositions. arXiv preprint arXiv:1610.03035, 2016.

Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. 2015.

Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pp. 577-585, 2015.

Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In ICML, volume 14, pp. 1764-1772, 2014.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645-6649. IEEE, 2013.

Alex Graves, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pp. 369-376. ACM, 2006.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Edwin T Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, and Ilya Sutskever. Learning online alignments with continuous rewards policy gradient. arXiv preprint arXiv:1608.01281, 2016.
Learning online alignments with continuous rewards policy gradient. arXiv preprint arXiv:1608.01281, 2016.

Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

David Miller, Ajit V Rao, Kenneth Rose, and Allen Gersho. A global optimization technique for statistical classifier design. IEEE Transactions on Signal Processing, 44(12):3108-3122, 1996.

Alex Graves, Santiago Fernandez, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pp. 369-376. ACM, 2006.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing by virtual adversarial examples. arXiv preprint arXiv:1507.00677, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Kenneth Rose. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proceedings of the IEEE, 86(11):2210-2239, 1998.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, and Qi Tian. DisturbLabel: Regularizing CNN on the loss layer. arXiv preprint arXiv:1605.00055, 2016.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199, 2016.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.

Figure 2: Norm of the gradient as training proceeds on the MNIST dataset. We plot the norm of the gradient while training with confidence penalty, dropout, label smoothing, and without regularization. We use early stopping on the validation set, which explains the difference in training steps between methods. Both confidence penalty and label smoothing result in a smaller gradient norm."}]
BysvGP5ee
[{"section_index": "0", "section_name": "VARIATIONAL LOSSY AUTOENCODER", "section_text": "Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, Pieter Abbeel"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN. Our proposed VAE model allows us to have control over what the global latent code can learn and by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the VAE only \"autoencodes\" data in a lossy fashion. In addition, by leveraging autoregressive models as both prior distribution p(z) and decoding distribution p(x|z), we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks as well as competitive results on CIFAR10."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A key goal of representation learning is to identify and disentangle the underlying causal factors of the data, so that it becomes easier to understand the data, to classify it, or to perform other tasks (Bengio et al., 2013). For image data this often means that we are interested in uncovering the \"global structure\" that captures the content of an image (for example, the identity of objects present in the image) and its \"style\", but that we are typically less interested in the local and high-frequency sources of variation such as the specific textures or white noise patterns.

A popular approach for learning representations is to fit a probabilistic latent variable model, an approach also known as analysis-by-synthesis (Yuille & Kersten, 2006; Nair et al., 2008). By learning a generative model of the data with the appropriate hierarchical structure of latent variables, it is hoped that the model will somehow uncover and untangle those causal sources of variations that we happen to be interested in. However, without further assumptions, representation learning via generative modeling is ill-posed: there are many different possible generative models with different (or no) kinds of latent variables that all encode the same probability density function on our observed data. Thus, the results we empirically get using this approach are highly dependent on the specific architectural and modeling choices that are made. Moreover, the objective that we optimize is often completely disconnected from the goal of learning a good representation: An autoregressive model of the data may achieve the same log-likelihood as a variational autoencoder (VAE) (Kingma & Welling, 2013), but the structure learned by the two models is completely different: the latter typically has a clear hierarchy of latent variables, while the autoregressive model has no stochastic latent variables at all (although it is conceivable that the deterministic hidden units of the autoregressive models will have meaningful and useful representations). For this reason, autoregressive
models have thus far not been popular for the purpose of learning representations, even though they are extremely powerful as generative models (see e.g. van den Oord et al., 2016a).

A natural question becomes: is it possible to have a model that is a powerful density estimator and at the same time has the right hierarchical structure for representation learning? A potential solution would be to use a hybrid model that has both the latent variable structure of a VAE, as"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "well as the powerful recurrence of an autoregressive model. However, earlier attempts at combining these two kinds of models have run into the problem that the autoregressive part of the model ends up explaining all structure in the data, while the latent variables are not used (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) noted that weakening the autoregressive part of the model by, for example, dropout can encourage the latent variables to be used. We analyze why weakening is necessary, and we propose a principled solution that takes advantage of this property to control what kind of information goes into latent variables. The model we propose performs well as a density estimator, as evidenced by state-of-the-art log-likelihood results on MNIST, OMNIGLOT and Caltech-101, and also has a structure that is uniquely suited for learning interesting global representations of data.

A VAE is frequently interpreted as a regularized autoencoder (Kingma & Welling, 2013; Zhang et al., 2016), but the conditions under which it is guaranteed to autoencode (reconstruction being close to the original datapoint) are not discussed. In this section, we discuss the often-neglected fact that VAEs do not always autoencode and give explicit reasons why previous attempts to apply VAE in sequence modeling found that the latent code is generally not used unless the decoder is weakened (Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016). The understanding of when a VAE does autoencode will be an essential building piece for VLAE."}, {"section_index": "3", "section_name": "2.1 TECHNICAL BACKGROUND", "section_text": "Let x be observed variables, z latent variables and let p(x, z) be the parametric model of their joint distribution, called the generative model defined over the variables. Given a dataset X = {x^(1), ..., x^(N)} we wish to perform maximum likelihood learning of its parameters:

\log p(X) = \sum_{i=1}^{N} \log p(x^{(i)}),

but in general this marginal likelihood is intractable to compute or differentiate directly for flexible generative models that have high-dimensional latent variables and flexible priors and likelihoods. A solution is to introduce q(z|x), a parametric inference model defined over the latent variables, and optimize the variational lower bound on the marginal log-likelihood of each observation x:

\log p(x) \ge E_{q(z|x)} [\log p(x, z) - \log q(z|x)] = L(x; \theta).

There are various ways to optimize the lower bound L(x; \theta); for continuous z it can be done efficiently through a re-parameterization of q(z|x) (Kingma & Welling, 2013; Rezende et al., 2014). This way of optimizing the variational lower bound with a parametric inference network and re-parameterization of continuous latent variables is usually called VAE. The \"autoencoding\" terminology comes from the fact that the lower bound L(x; \theta) can be re-arranged:

L(x; \theta) = E_{q(z|x)} [\log p(x, z) - \log q(z|x)] = E_{q(z|x)} [\log p(x|z)] - D_{KL}(q(z|x) || p(z)),

where the first term can be seen as the expectation of negative reconstruction error and the KL divergence term can be seen as a regularizer, which as a whole could be seen as a regularized autoencoder loss with q(z|x) being the encoder and p(x|z) being the decoder. In the context of 2D image modeling, the decoding distribution p(x|z) is usually chosen to be a simple factorized distribution, i.e. p(x|z) = \prod_i p(x_i|z), and this setup often yields a sharp decoding distribution p(x|z) that tends to reconstruct the original datapoint x exactly.

It's straightforward to see that having a more powerful p(x|z) will make VAE's marginal generative distribution p(x) = \int_z p(z) p(x|z) dz more expressive. This idea has been explored extensively in previous work applying VAE to sequence modeling (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016), where the decoding distribution is a powerful RNN with autoregressive dependency, i.e., p(x|z) = \prod_i p(x_i | z, x_{<i}). Since RNNs are universal function approximators and any joint distribution over x admits an autoregressive factorization, the RNN autoregressive decoding distribution can in theory represent any probability distribution even without dependence on z.

However, previous attempts have found it hard to benefit from VAE when using an expressive decoding distribution p(x|z). Indeed it's documented in detail by Bowman et al. (2015) that in most cases when an RNN autoregressive decoding distribution is used, the latent code z is completely ignored and the model regresses to be a standard unconditional RNN autoregressive distribution that doesn't depend on the latent code. This phenomenon is commonly attributed to \"optimization challenges\" of VAE in the literature (Bowman et al., 2015; Serban et al., 2016; Kaae Sønderby et al., 2016) because early in the training the approximate posterior q(z|x) carries little information about datapoint x and hence it's easy for the model to just set the approximate posterior to be the prior to avoid paying any regularization cost D_{KL}(q(z|x) || p(z)).

Here we present a simple but often-neglected observation that this phenomenon arises not just due to optimization challenges, and instead even if we can solve the optimization problems exactly, the latent code should still be ignored at optimum for most practical instances of VAE that have intractable true posterior distributions and sufficiently powerful decoders. It is easiest to understand this observation from a Bits-Back Coding perspective of VAE.

It is well-known that Bits-Back Coding is an information-theoretic view of Variational Inference (Hinton & Van Camp, 1993; Honkela & Valpola, 2004) and specific links have been established between Bits-Back Coding and the Helmholtz Machine/VAE (Hinton & Zemel, 1994; Gregor et al., 2013). Here we briefly relate VAE to Bits-Back Coding for self-containedness:

First recall that the goal of designing an efficient coding protocol is to minimize the expected code length of communicating x. To explain Bits-Back Coding, let's first consider a more naive coding scheme. VAE can be seen as a way to encode data in a two-part code: p(z) and p(x|z), where z can be seen as the essence/structure of a datum and is encoded first and then the modeling error (deviation from z's structure) is encoded next.
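As a toy illustration of this two-part code (a sketch only: the factorized Gaussian prior and Bernoulli decoder here are stand-ins for p(z) and p(x|z) over binarized images, and the helper names are ours, not the paper's):

import torch
from torch.distributions import Normal, Bernoulli

def naive_code_length(x, z, decoder_probs):
    # naive scheme: transmit z under p(z), then x under p(x|z); length in nats
    nats_for_z = -Normal(0.0, 1.0).log_prob(z).sum()
    nats_for_x = -Bernoulli(probs=decoder_probs).log_prob(x).sum()
    return nats_for_z + nats_for_x

def bits_back_code_length(x, z, decoder_probs, q_posterior):
    # Bits-Back Coding, formalized next, gets -log q(z|x) nats back from the
    # posterior sample, leaving exactly the negative variational lower bound
    return naive_code_length(x, z, decoder_probs) + q_posterior.log_prob(z).sum()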
The expected code length under this naive coding scheme for a given data distribution is hence:

C_{naive}(x) = E_{x ~ data, z ~ q(z|x)} [-\log p(z) - \log p(x|z)].

This coding scheme is, however, inefficient. Bits-Back Coding improves on it by noticing that the encoder distribution q(z|x) can be used to transmit additional information, up to H(q(z|x)) expected nats, as long as the receiver also has access to q(z|x). The decoding scheme works as follows: a receiver first decodes z from p(z), then decodes x from p(x|z) and, by running the same approximate posterior that the sender is using, decodes a secondary message from q(z|x). Hence, to properly measure the code length of VAE's two-part code, we need to subtract the extra information from q(z|x). Using Bits-Back Coding, the expected code length equates to the negative variational lower bound or the so-called Helmholtz variational free energy, which means minimizing code length is equivalent to maximizing the variational lower bound:

C_{BitsBack}(x) = E_{x ~ data, z ~ q(z|x)} [\log q(z|x) - \log p(z) - \log p(x|z)] = E_{x ~ data} [-L(x)].

Casting the problem of optimizing VAE into designing an efficient coding scheme easily allows us to reason when the latent code z will be used: the latent code z will be used when the two-part code is an efficient code. Recalling that the lower bound of expected code length for data is given by the Shannon entropy of the data generating distribution, H(data) = E_{x ~ data} [-\log p_{data}(x)], we can analyze VAE's coding efficiency:

C_{BitsBack}(x) = E_{x ~ data, z ~ q(z|x)} [\log q(z|x) - \log p(z) - \log p(x|z)]
= E_{x ~ data} [-\log p(x) + D_{KL}(q(z|x) || p(z|x))]
\ge E_{x ~ data} [-\log p_{data}(x) + D_{KL}(q(z|x) || p(z|x))]
= H(data) + E_{x ~ data} [D_{KL}(q(z|x) || p(z|x))].

Since the Kullback-Leibler divergence is always non-negative, we know that using the two-part code derived from VAE suffers at least an extra code length of D_{KL}(q(z|x) || p(z|x)) nats for using a posterior that's not precise. Many previous works in Variational Inference have designed flexible approximate posteriors to better approximate the true posterior (Salimans et al., 2014; Rezende & Mohamed, 2015; Tran et al., 2015; Kingma et al., 2016). Improved posterior approximations have been shown to be effective in improving variational inference, but none of the existing methods are able to completely close the gap between approximate posterior and true posterior. This leads us to believe that for most practical models, at least in the near future, the extra coding cost D_{KL}(q(z|x) || p(z|x)) will exist and will not be negligible.

Once we understand the inefficiency of the Bits-Back Coding mechanism, it's simple to realize why sometimes the latent code z is not used: if p(x|z) could model p_{data}(x) without using information from z, then it will not use z, in which case the true posterior p(z|x) is simply the prior p(z) and it's usually easy to set q(z|x) to be p(z) to avoid incurring an extra cost D_{KL}(q(z|x) || p(z|x)). And it's exactly the case when a powerful decoding distribution is used like an RNN autoregressive distribution, which given enough capacity is able to model arbitrarily complex distributions. Hence there exists a preference of information when a VAE is optimized: information that can be modeled locally by the decoding distribution p(x|z) without access to z will be encoded locally and only the remainder will be encoded in z.

We note that one common way to encourage putting information into the code is to use a factorized decoder p(x|z) = \prod_i p(x_i|z), but so long as there is one dimension x_j that's independent of all other dimensions for the true data distribution, p_{data}(x) = p_{data}(x_j) p_{data}(x_{-j}), then the latent code doesn't contain all the information about x, since at least x_j will be modeled locally by the factorized p(x|z). This kind of independence structure rarely exists in images, so common VAEs that have a factorized decoder autoencode almost exactly. Other techniques to encourage the usage of the latent code include annealing the relative weight of D_{KL}(q(z|x) || p(z)) in the variational lower bound (Bowman et al., 2015; Kaae Sønderby et al., 2016) or the use of free bits (Kingma et al., 2016), which can serve the dual purpose of smoothing the optimization landscape and canceling out part of the Bits-Back Code inefficiency D_{KL}(q(z|x) || p(z|x)).

The discussion in Section 2.2 suggests that autoregressive models cannot be combined with VAE since information will be preferred to be modeled by autoregressive models. Nevertheless, in this section, we present two complementary classes of improvements to VAE that utilize autoregressive models fruitfully to explicitly control representation learning and improve density estimation.

Even though the information preference property of VAE might suggest that one should always use the full autoregressive models to achieve a better code length/log-likelihood, especially when slow data generation is not a concern, we argue that this information preference property can be exploited to turn the VAE into a powerful representation learning method that gives us fine-grained control over the kind of information that gets included in the learned representation.

When we try to learn a lossy compression/representation of data, we can simply construct a decoding distribution that's capable of modeling the part of information that we don't want the lossy representation to capture, but, critically, that's incapable of modelling the information that we do want the lossy representation to capture.

For instance, if we are interested in learning a global representation for 2D images that doesn't encode information about detailed texture, we can construct a specific factorization of the autoregressive distribution such that it has a small local receptive field as decoding distribution, e.g., p_{local}(x|z) = \prod_i p(x_i | z, x_{windowAround(i)}). Notice that, as long as x_{windowAround(i)} is smaller than x_{<i}, p_{local}(x|z) won't be able to represent an arbitrarily complex distribution over x without dependence on z, since the receptive field is limited such that not all distributions over x admit such factorizations. In particular, the receptive field window can be a small rectangle adjacent to a pixel x_i, and in this case long-range dependency will be encoded in the latent code z. On the other hand, if the true data distribution admits such a factorization for a given datum x and dimension i, i.e. p_{data}(x_i | x_{windowAround(i)}) = p_{data}(x_i | x_{<i}), then the information preference property discussed in Section 2.2 will apply here, which means that all the information will be encoded in the local autoregressive distribution for x. Local statistics of 2D images like texture will likely be modeled completely by a small local window, whereas global structural information of an image like shapes of objects is a long-range dependency that can only be communicated through the latent code z. Therefore we have given an example VAE that will produce a lossy compression of 2D images carrying exclusively global information that can't be modeled locally.
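To make the limited-receptive-field construction concrete, the following is a minimal sketch (assuming PyTorch; sizes are illustrative) of the kind of masked convolution p_local(x|z) can be built from. Stacking a few of these keeps each pixel's window of dependency a small causal patch above and to the left; conditioning on z can be added by concatenating features computed from the latent code:

import torch
import torch.nn as nn

class SmallCausalConv2d(nn.Conv2d):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        mask = torch.ones(kernel_size, kernel_size)
        mask[kernel_size // 2, kernel_size // 2:] = 0  # current pixel and everything to its right
        mask[kernel_size // 2 + 1:, :] = 0             # every row below the current one
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # zeroed taps can never look at future pixels
        return super().forward(x)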
Notice that a global representation is only one of many possible lossy representations that we can construct using this information preference property. For instance, the conditional of an autoregressive distribution might depend on a heavily down-sampled receptive field so that it can only model long-range patterns, whereas local high-frequency statistics need to be encoded into the latent code. Hence we have demonstrated that we can achieve explicit placement of information by constraining the receptive field/factorization of an autoregressive distribution that's used as decoding distribution.

We want to additionally emphasize that the information preference property is an asymptotic view, in the sense that it only holds when the variational lower bound can be optimized well. Thus, we are not proposing an alternative to techniques like free bits (Kingma et al., 2016) or KL annealing, and indeed they are still useful methods to smooth the optimization problem and are used in this paper's experiments.

Inefficiency in Bits-Back Coding, i.e., the mismatch between approximate posterior and true posterior, can be exploited to construct a lossy code, but it's still important to minimize such inefficiency to improve overall modeling performance/coding efficiency. We propose to parametrize the prior distribution p(z; \theta) with an autoregressive model and show that a type of autoregressive latent code can in theory reduce inefficiency in Bits-Back Coding.

It is well-known that limited approximate posteriors impede learning, and therefore various expressive posterior approximations have been proposed to improve VAE's density estimation performance (Turner et al., 2008; Mnih & Gregor, 2014; Salimans et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016). One such class of approximate posteriors that has been shown to attain good empirical performance is based on the idea of Normalizing Flow, which is to apply an invertible mapping to a simple random variable, for example a factorized Gaussian as commonly used for q(z|x), in order to obtain a complicated random variable. For an invertible transformation between a simple distribution y and a more flexible z, we know from the change-of-variable technique that \log q(z|x) = \log q(y|x) - \log \det(dz/dy), and using q(z|x) as approximate posterior will decrease the coding efficiency gap D_{KL}(q(z|x) || p(z|x)) provided the transformation is sufficiently expressive.

Kingma et al. (2016) introduced Inverse Autoregressive Flow, which is a powerful class of such invertible mappings, where \mu_i(y_{1:i-1}) and \sigma_i(y_{1:i-1}) are general functions that can be parametrized by expressive neural networks, such as MADE and PixelCNN variants (Germain et al., 2015; van den Oord et al., 2016a). Inverse autoregressive flow is the inverse/whitening of autoregressive flow: y_i = z_i \sigma_i(y_{1:i-1}) + \mu_i(y_{1:i-1}). We refer interested readers to (Rezende & Mohamed, 2015; Kingma et al., 2016) for in-depth discussions on related topics.

In this paper, we propose to parametrize our learnable prior as an autoregressive flow from some simple noise source like a spherical Gaussian. Next, we show that using a latent code transformed by autoregressive flow (AF) is equivalent to using an inverse autoregressive flow (IAF) approximate posterior, which explains why it can similarly improve Bits-Back Coding efficiency. Moreover, compared with an IAF posterior, an AF prior has a more expressive generative model that essentially \"comes for free\".

For an autoregressive flow f, some continuous noise source \epsilon is transformed into latent code z: z = f(\epsilon). Assuming the density function for the noise source is u(\epsilon), we similarly know that \log p(z) = \log u(\epsilon) + \log \det(d\epsilon/dz).

Simply re-arranging the variational lower bound for using AF prior reveals that having an AF latent code z is equivalent to using an IAF posterior for \epsilon that we can interpret as the new latent code:

L(x; \theta) = E_{z ~ q(z|x)} [\log p(x|z) + \log p(z) - \log q(z|x)]   (12)
= E_{z ~ q(z|x), \epsilon = f^{-1}(z)} [\log p(x|f(\epsilon)) + \log u(\epsilon) + \log \det(d\epsilon/dz) - \log q(z|x)]   (13)
= E_{z ~ q(z|x), \epsilon = f^{-1}(z)} [\log p(x|f(\epsilon)) + \log u(\epsilon) - (\log q(z|x) - \log \det(d\epsilon/dz))],   (14)

where the term (\log q(z|x) - \log \det(d\epsilon/dz)) in (14) is precisely the density of the IAF posterior over \epsilon.

AF prior is the same as IAF posterior along the encoder path, f^{-1}(q(z|x)), but differs along the decoder/generator path: IAF posterior has a shorter decoder path p(x|z) whereas AF prior has a deeper decoder path p(x|f(\epsilon)). The crucial observation is that AF prior and IAF posterior have the same computation cost under the expectation of z ~ q(z|x), so using AF prior makes the model more expressive at no training time cost."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "In this paper, we evaluate VLAE on 2D images and leave extensions to other forms of data to future work. For the rest of the section, we define a VLAE model as a VAE that uses an AF prior and an autoregressive decoder. We choose to implement the conditional distribution p(x|z) with a small-receptive-field PixelCNN (van den Oord et al., 2016a), which has been proved to be a scalable autoregressive model.

For evaluation, we use binary image datasets that are commonly used for density estimation tasks: MNIST (LeCun et al., 1998) (both statically binarized¹ and dynamically binarized versions (Burda et al., 2015a)), OMNIGLOT (Lake et al., 2013; Burda et al., 2015a) and Caltech-101 Silhouettes (Marlin et al., 2010). All datasets uniformly consist of 28x28 binary images, which allows us to use a unified architecture. VAE networks used in binary image datasets are simple variants of ResNet VAEs described in (Salimans et al., 2014; Kingma et al., 2016). For the decoder, we use a variant of PixelCNN that has 6 layers of masked convolution with filter size 3, which means the window of dependency, x_{windowAround(i)}, is limited to a small local patch. During training, \"free bits\" (Kingma et al., 2016) is used to improve optimization stability. Experimental setup and hyperparameters are detailed in the appendix. Reported marginal NLL is estimated using Importance Sampling with 4096 samples.

¹We use the version provided by Hugo Larochelle.

We designed experiments to answer the following questions:

• Can VLAE learn lossy codes that encode global statistics?
• Does using AF priors improve upon using IAF posteriors as predicted by theory?
• Does using autoregressive decoding distributions improve density estimation performance?
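As an aside on the evaluation detail above, the importance-sampling estimate of the marginal NLL is itself simple. A minimal sketch (assuming PyTorch; q_posterior and log_joint are placeholders for the trained q(z|x) and log p(x, z) of a model, evaluated for a single datapoint x):

import math
import torch

def marginal_nll(x, q_posterior, log_joint, num_samples=4096):
    # log p(x) is approximately logsumexp_k [log p(x, z_k) - log q(z_k|x)] - log K
    log_w = torch.stack([log_joint(x, z) - q_posterior.log_prob(z).sum()
                         for z in (q_posterior.sample() for _ in range(num_samples))])
    return -(torch.logsumexp(log_w, dim=0) - math.log(num_samples))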
"}, {"section_index": "5", "section_name": "4.1 LOSSY COMPRESSION", "section_text": "First we are interested in whether VLAE can learn a lossy representation/compression of data by using the PixelCNN decoder to model local statistics. We trained a VLAE model on Statically Binarized MNIST and the converged model has E[D_{KL}(q(z|x) || p(z))] = 13.3 nats = 19.2 bits, which is the number of bits it uses on average to encode/compress one MNIST image. By comparison, an identical VAE model with factorized decoding distribution will use on average 37.3 bits in latent code, and this thus indicates that VLAE can learn a lossier compression than a VAE with a regular factorized conditional distribution.

The next question is whether VLAE's lossy compression encodes global statistics and discards local statistics. In Fig 1a, we visualize original images x_{data} and one random \"decompression\" x_{decompressed} from VLAE: z ~ q(z|x_{data}), x_{decompressed} ~ p(x|z). We observe that none of the decompressions is an exact reconstruction of the original image but instead the global structure of the image was encoded in the lossy code z and regenerated. Also worth noting is that local statistics are not preserved but a new set of likely local statistics are generated in the decompressed images: the binary masks are usually different and local styles like stroke width are sometimes slightly different.

(a) Original test-set images (left); (b) samples from VLAE: \"decompressed\" versions from VLAE's lossy code (right).

However, we remark that the lossy code z doesn't always capture the kind of global information that we care about, and it's dependent on the type of constraint we put on the decoder. For instance, in Fig 4b, we show decompressions for the OMNIGLOT dataset, which has more meaningful variations in small patches than MNIST, and we can observe that semantics are not preserved in some cases. This highlights the need to specify the type of statistics we care about in a representation, which will be different across tasks and datasets, and to design the decoding distribution accordingly.

(a) Original test-set images (left); (b) samples from VLAE: \"decompressed\" versions from VLAE's lossy code (right)."}, {"section_index": "6", "section_name": "4.2 DENSITY ESTIMATION", "section_text": "Next we investigate whether leveraging autoregressive models as latent distribution p(z) and as decoding distribution p(x|z) would improve density estimation performance.

To verify whether the AF prior is able to improve upon the IAF posterior alone, it's desirable to test this model without using an autoregressive decoder but instead using the conventional independent Bernoulli distribution for p(x|z). Hence we use the best performing model from Kingma et al. (2016) on statically binarized MNIST and make the single modification of replacing the original IAF posterior with an equivalent AF prior, removing the context. As seen in Table 1, VAE with AF prior outperforms VAE with an equivalent IAF posterior, indicating that the deeper generative model from the AF prior is beneficial. A similar gain carries over when an autoregressive decoder is used: on statically binarized MNIST, using AF prior instead of IAF posterior reduces train NLL by 0.8 nat and test NLL by 0.6 nat.

Model                                           NLL Test
Normalizing flows (Rezende & Mohamed, 2015)       85.10
DRAW (Gregor et al., 2015)                      < 80.97
Discrete VAE (Rolfe, 2016)                        81.01
PixelRNN (van den Oord et al., 2016a)             79.20
IAF VAE (Kingma et al., 2016)                     79.88
AF VAE                                            79.30
VLAE                                              79.03

Table 1: Statically Binarized MNIST.

Next we evaluate whether using an autoregressive decoding distribution can improve performance, and we show in Table 1 that a VLAE model, with AF prior and PixelCNN conditional, is able to outperform a VAE with just an AF prior and achieves new state-of-the-art results on statically binarized MNIST.

In addition, we hypothesize that the separation of different types of information, the modeling of global structure in the latent code and local statistics in the PixelCNN, likely has some form of good inductive bias for 2D images. In order to evaluate if VLAE is an expressive density estimator with good inductive biases, we will test a single VLAE model, with the same network architecture, on all binary datasets. We choose hyperparameters manually on statically binarized MNIST and use the same hyperparameters to evaluate on dynamically binarized MNIST, OMNIGLOT and Caltech-101 Silhouettes. We also note that better performance can be obtained if we individually tune hyperparameters for each dataset. As a concrete demonstration, we report the performance of a fine-tuned VLAE on the OMNIGLOT dataset in Table 3.

Model                                                NLL Test
Convolutional VAE + HVI (Salimans et al., 2014)        81.94
DLGM 2hl + IWAE (Burda et al., 2015a)                  82.90
Discrete VAE (Rolfe, 2016)                             80.04
LVAE (Kaae Sønderby et al., 2016)                      81.74
DRAW + VGP (Tran et al., 2015)                       < 79.88
IAF VAE (Kingma et al., 2016)                          79.10
Unconditional Decoder                                  87.55
VLAE                                                   78.53

Table 2: Dynamically binarized MNIST.

Table 3: OMNIGLOT. [1] (Burda et al., 2015a), [2] (Burda et al., 2015b), [3] (Gregor et al., 2015), [4] (Gregor et al., 2016).

Table 4: Caltech-101 Silhouettes. [1] (Bornschein & Bengio, 2014), [2] (Cho et al., 2011), [3] (Du et al., 2015), [4] (Rolfe, 2016), [5] (Goessling & Amit, 2015).

As seen in Tables 2, 3 and 4, with the same set of hyperparameters tuned on statically binarized MNIST, VLAE is able to perform well on the rest of the datasets, significantly exceeding previous state-of-the-art results on dynamically binarized MNIST and Caltech-101 Silhouettes and tying statistically with the best previous result on OMNIGLOT. In order to isolate the effect of the expressive PixelCNN as decoder, we also report the performance of the same PixelCNN trained without the VAE part under the name \"Unconditional Decoder\"."}, {"section_index": "7", "section_name": "4.3 NATURAL IMAGES: CIFAR10", "section_text": "In addition to binary image datasets, we have applied VLAE to the CIFAR10 dataset of natural images. Density estimation of CIFAR10 images has been a challenging benchmark problem used by many recent generative models and hence is a great task to position VLAE among existing methods.

We investigated using ResNet (He et al., 2016) and DenseNet (Huang et al., 2016) as building blocks for VAE networks and observed that DenseNet reduces overfitting. We also propose a new optimization technique that blends the advantages of KL annealing (Serban et al., 2016) and \"free bits\" (Kingma et al., 2016) to stabilize learning on this challenging dataset. Detailed experimental setup is described in the Appendix.

VLAE is compared to other methods on CIFAR10 in Table 5. We show that VLAE models attain new state-of-the-art performance among other variationally trained latent-variable models.
The DenseNet VLAE model also outperforms most other tractable likelihood models, including Gated PixelCNN and PixelRNN, and has results only slightly worse than the currently unarchived state-of-the-art PixelCNN++.

Method                                  bits/dim ≤
Results with tractable likelihood models:
Uniform distribution [1]                   8.00
Multivariate Gaussian [1]                  4.70
NICE [2]                                   4.48
Deep GMMs [3]                              4.00
Real NVP [4]                               3.49
PixelCNN [1]                               3.14
Gated PixelCNN [5]                         3.03
PixelRNN [1]                               3.00
PixelCNN++ [6]                             2.92
Results with variationally trained latent-variable models:
Deep Diffusion [7]                         5.40
Convolutional DRAW [8]                     3.58
ResNet VAE with IAF [9]                    3.11
ResNet VLAE                                3.04
DenseNet VLAE                              2.95

Table 5: CIFAR10. Likelihood for VLAE is approximated with 512 importance samples. [1] (van den Oord et al., 2016a), [2] (Dinh et al., 2014), [3] (van den Oord & Schrauwen, 2014), [4] (Dinh et al., 2016), [5] (van den Oord et al., 2016b), [6] (Salimans et al., 2017), [7] (Sohl-Dickstein et al., 2015), [8] (Gregor et al., 2016), [9] (Kingma et al., 2016).

We also investigate learning lossy codes on CIFAR10 images. To illustrate how the receptive field size of the PixelCNN decoder influences properties of learned latent codes, we show visualizations of similar VLAE models with receptive fields of different sizes. Specifically, we say a receptive field, x_{windowAround(i)}, has size AxB when a pixel x_i can depend on the rectangular block of size AxB immediately on top of x_i as well as the [A-1] pixels immediately to the left of x_i. We use this notation to refer to different types of PixelCNN decoders in Figure 3.

From (a)-(c) in Figure 3, we can see that larger receptive fields progressively make autoregressive decoders capture more structural information. In (a), a smaller receptive field tends to preserve rather detailed shape information in the lossy code, whereas the latent code only retains rough shape in (c) with a larger receptive field.

Figure 3: CIFAR10: original test-set images (left) and \"decompressed\" versions from VLAE's lossy code (right) with different types of receptive fields: (a) 4x2, (b) 5x3, (c) 7x4, (d) 7x4 grayscale.

It's interesting to also note that in (a)-(c), oftentimes color information is partially omitted from latent codes, and one explanation can be that color is very predictable locally. However, color information can be important to preserve if our task is, for example, object classification. To demonstrate how we can encode color information in the lossy code, we can choose to make the PixelCNN decoder depend only on images' grayscale versions. In other words, instead of choosing the decoder to be p_{local}(x|z) = \prod_i p(x_i | z, x_{windowAround(i)}), we use a decoder of the form p_{local}(x|z) = \prod_i p(x_i | z, Grayscale(x_{windowAround(i)})). In (d) of Figure 3, we visualize lossy codes for a VLAE that has the same receptive field size as (c) but uses a \"grayscale receptive field\". We note that the lossy codes in (d) encode roughly the same structural information as those in (c) but generally generate objects that are more recognizable due to the preservation of color information. This serves as one example of how we can design the lossy latent code carefully to encode what's important and what's not."}, {"section_index": "8", "section_name": "5 RELATED WORK", "section_text": "We investigate a fusion between variational autoencoders with continuous latent variables (Kingma & Welling, 2013; Rezende et al., 2014) and neural autoregressive models.
For autoregression, we specifically apply a novel type of architecture where autoregression is realised through a carefully constructed deep convolutional network, introduced in the PixelCNN model for images (van den Oord et al., 2016a;b). This family of convolutional autoregressive models was further explored and extended: for audio in WaveNet (Oord et al., 2016), for video in Video Pixel Networks (Kalchbrenner et al., 2016b) and for language in ByteNet (Kalchbrenner et al., 2016a).

The combination of latent variables with an expressive decoder was previously explored using recurrent networks mainly in the context of language modeling (Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) have also proposed to weaken an otherwise too expressive decoder by dropout to force some information into latent codes.

Concurrent with our work, PixelVAE (Gulrajani et al., 2016) also explored using conditional PixelCNN as a VAE's decoder and has obtained impressive density modeling results through the use of multiple levels of stochastic units.

Using an autoregressive model on the latent code was explored in the context of discrete latent variables in DARN (Gregor et al., 2013). Kingma et al. (2016), Kaae Sønderby et al. (2016), Gregor et al. (2016) and Salimans (2016) explored VAE architectures with an explicitly deep autoregressive prior for continuous latent variables, but the autoregressive data likelihood is intractable in those architectures and needs to be inferred variationally. In contrast, we use multiple steps of autoregressive flows that have exact likelihood and analyze the effect of using an expressive latent code.

Optimization challenges for using (all levels of) continuous latent code were discussed before and practical solutions were proposed (Bowman et al., 2015; Kaae Sønderby et al., 2016; Kingma et al., 2016). In this paper, we present a complementary perspective on when/how the latent code should be used by appealing to a Bits-Back interpretation of VAE.

Learning a lossy compressor with a latent variable model has been investigated with ConvDRAW (Gregor et al., 2016). It learns a hierarchy of latent variables, and just using high-level latent variables will result in a lossy compression that performs similarly to JPEG. Our model similarly learns a lossy compressor, but it uses an autoregressive model to explicitly control what kind of information should be lost in compression.

In this paper, we analyze the condition under which the latent code in VAE should be used, i.e. when does VAE autoencode, and use this observation to design a VAE model that's a lossy compressor of observed data. At the modeling level, we propose two complementary improvements to VAE that are shown to have good empirical performance.

VLAE has the appealing properties of controllable representation learning and improved density estimation performance, but these properties come at a cost: compared with VAE models that have a simple prior and decoder, VLAE is slower at generation due to the sequential nature of the autoregressive model.

Moving forward, we believe it's exciting to extend this principle of learning lossy codes to other forms of data, in particular those that have a temporal aspect like audio and video. Another promising direction is to design representations that contain only information for downstream tasks and utilize those representations to improve semi-supervised learning."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Yuri Burda, Roger B Grosse, and Ruslan Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In AISTATS, 2015b.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980-2988, 2015.

Chao Du, Jun Zhu, and Bo Zhang. Learning deep generative models with doubly stochastic MCMC. arXiv preprint arXiv:1506.04557, 2015.

Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. arXiv preprint arXiv:1605.07571, 2016.

Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. arXiv preprint arXiv:1502.03509, 2015.

Marc Goessling and Yali Amit. Sparse autoregressive networks. arXiv preprint arXiv:1511.04776, 2015.

Karol Gregor, Andriy Mnih, and Daan Wierstra. Deep AutoRegressive Networks. arXiv preprint arXiv:1310.8499, 2013.

Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. arXiv preprint arXiv:1604.08772, 2016.

Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. PixelVAE: A latent variable model for natural images. arXiv preprint arXiv:1611.05013, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5-13. ACM, 1993.

Geoffrey E Hinton and Richard S Zemel. Autoencoders, minimum description length, and Helmholtz free energy. Advances in Neural Information Processing Systems, pp. 3-3, 1994.

Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.

Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.00527, 2016a.

Nal Kalchbrenner,
Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016b.

Benjamin M Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted Boltzmann machine learning. In AISTATS, pp. 509-516, 2010.

Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1530-1538, 2015.

Jason Tyler Rolfe. Discrete variational autoencoders. arXiv preprint arXiv:1609.02200, 2016.

Tim Salimans, Diederik P. Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460, 2014.

Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.

Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015.

Dustin Tran, Rajesh Ranganath, and David M Blei. Variational gaussian process. arXiv preprint arXiv:1511.06499, 2015.

Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.

Tim Salimans. A structured variational auto-encoder for learning deep hierarchies of sparse features. arXiv preprint arXiv:1602.08734, 2016.

Richard E Turner, Pietro Berkes, and Maneesh Sahani. Two problems with variational expectation maximisation for time-series models. In Proceedings of the Workshop on Inference and Estimation in Probabilistic Time-Series Models, pp. 107-115, 2008.

Aaron van den Oord and Benjamin Schrauwen. Factoring variations in natural images with deep gaussian mixture models. In Advances in Neural Information Processing Systems, pp. 3518-3526, 2014.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a.

Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016b.

Alan Yuille and Daniel Kersten. Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences, 10(7):301-308, 2006.

Biao Zhang, Deyi Xiong, and Jinsong Su. Variational neural machine translation. arXiv preprint arXiv:1605.07869, 2016."}, {"section_index": "10", "section_name": "DETAILED EXPERIMENT SETUP FOR BINARY IMAGES", "section_text": "For VAE's encoder and decoder, we use the same ResNet (He et al., 2015) VAE architecture as the one used in the IAF MNIST experiment (Kingma et al., 2016). The only difference is that the decoder network now, instead of outputting a 28x28x1 spatial feature map to specify the mean of a factorized Bernoulli distribution, outputs a 28x28x4 spatial feature map that's concatenated with the original binary image channel-wise, forming a 28x28x5 feature map that's then fed through a typical masked PixelCNN (van den Oord et al., 2016a). As such, even though the PixelCNN conditions on the latent code, we don't call it a Conditional PixelCNN because it doesn't use the specific architecture that was proposed in van den Oord et al. (2016b). For the PixelCNN, it has 6 masked convolution layers
with 12 3x3 filters organized in ResNet blocks, and it has 4 additional 1x1 convolution ResNet blocks between every other masked convolution layer to increase processing capacity, since it employs fewer masked convolutions than usual. All the masked convolution layers have their weights tied to reduce overfitting on statically binarized MNIST, and untying the weights will increase performance for other datasets. Experiments are tuned on the validation set and then the final experiment was run with train and validation sets, with performance evaluated on the test set. Exponential Linear Units (Clevert et al., 2015) are used as activation functions in both the VAE network and the PixelCNN network. Weight normalization is used everywhere with data-dependent initialization (Salimans & Kingma, 2016).

A latent code of dimension 64 was used. For the AF prior, it's implemented with MADE (Germain et al., 2015) as detailed in Kingma et al. (2016). We used 4 steps of autoregressive flow and each flow is implemented by a 3-layer MADE that has 640 hidden units and uses ReLU (Nair & Hinton, 2010) as activation functions. Differing from the practice of Kingma et al. (2016), we use mean-only autoregressive flow, which we found to be more numerically stable.

All experiments are implemented using TensorFlow (Abadi et al., 2016).

Latent codes are represented by 16 feature maps of size 8x8, and this choice of spatial stochastic units is inspired by ResNet IAF VAE (Kingma et al., 2016). The prior distribution is factorized Gaussian noise transformed by 6 autoregressive flows, each of which is implemented by a PixelCNN (van den Oord et al., 2016a) with 2 hidden layers and 128 feature maps. Between every other autoregressive flow, the ordering of stochastic units is reversed.

ResNet VLAE has the following structure for the encoder: 2 ResNet blocks, Conv w/ stride=2, 2 ResNet blocks, Conv w/ stride=2, 3 ResNet blocks, 1x1 convolution, and has a symmetric decoder. Channel size = 48 for 32x32 feature maps and 96 for other feature maps. DenseNet VLAE follows a similar structure: replacing 2 ResNet blocks with one DenseNet block of 3 steps, where each step produces a certain number of feature maps such that at the end of a block, the concatenated feature maps are slightly more than those of the ResNet VLAE at the same stage.

Conditional PixelCNN++ (Salimans et al., 2017) is used as the decoder. Specifically, the channel-autoregressive variant is used to ensure there is sufficient capacity even when the receptive field is small. The decoder PixelCNN has 4 blocks of 64 feature maps where each block is conditioned on previous blocks with Gated ResNet connections, and hence the PixelCNN decoders we use are shallow but very wide. For the 4x2 receptive field experiment, we use 1 layer of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 5x3 receptive field experiment, we use 2 layers of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 7x4 receptive field experiment, we use 3 layers of vertical stack convolutions and 3 layers of horizontal stack convolutions; for the 7x4 grayscale experiment, we transform RGB images into grayscale images via this specific transformation: (0.299 × R) + (0.587 × G) + (0.114 × B). Best density estimation result is obtained with the 7x4 receptive field experiments.
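As a side note on the mean-only autoregressive flow mentioned above, its numerical appeal is easy to see in code. A minimal sketch (assuming PyTorch; made is a placeholder for a masked network whose i-th output depends only on inputs before i, and this is our reading of the construction rather than released code):

import torch
from torch.distributions import Normal

def af_prior_log_prob(z, made):
    # z_i = eps_i + mu_i(z_{<i}), so the inverse eps = z - mu(z) is one parallel
    # pass, the Jacobian is unit-triangular, and the log-det term vanishes
    eps = z - made(z)
    return Normal(0.0, 1.0).log_prob(eps).sum(-1)

def af_prior_sample(made, dim):
    eps = torch.randn(dim)
    z = torch.zeros(dim)
    for i in range(dim):  # sampling, unlike density evaluation, is sequential
        z[i] = eps[i] + made(z)[i]
    return z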
In terms of training, Adamax (Kingma & Ba, 2014) was used with a learning rate of 0.002. 0.01 nats/data-dim free bits (Kingma et al., 2016) was found to be effective in dealing with the problem of all the latent code being ignored early in training. Polyak averaging (Polyak & Juditsky, 1992) was used to compute the final parameters, with α = 0.998.

\"Free bits\" was a technique proposed in (Kingma et al., 2016) where K groups of stochastic units are encouraged to be used through the following surrogate objective:

\tilde{L}_{\lambda} = E_{x ~ M} [E_{q(z|x)} [\log p(x|z)]] - \sum_{j=1}^{K} \text{maximum}(\lambda, E_{x ~ M} [D_{KL}(q(z_j|x) || p(z_j))]).

This technique is easy to use since it's usually easy to determine the minimum number of bits/nats, \lambda, stochastic units need to encode. Choosing \lambda is hence easier than setting a fixed KL annealing schedule (Serban et al., 2016).

On the other hand, KL annealing has the benefit that the surrogate objective will smoothly become the true objective, the variational lower bound, whereas \"free bits\" has a sharp transition at the boundary. Therefore, we propose to still use \lambda as a hyperparameter to specify that at least \lambda nats should be used, but to try to change the optimization objective as slowly as possible, via a multiplier \gamma on the KL term:

\tilde{L}_{soft} = E_{x ~ M} [E_{q(z|x)} [\log p(x|z)]] - \gamma E_{x ~ M} [D_{KL}(q(z|x) || p(z))].

And we make the optimization smoother by changing \gamma slowly online to make sure at least \lambda nats are used: when the KL is too much higher than \lambda (we experimented with a wide range of thresholds from 3% to 30%, all of which yield improved results, and we tend to use 5% as a threshold), \gamma is increased, and when the KL is lower than \lambda, \gamma is decreased to encourage information flow. We found it sufficient to increase/decrease at 10% increments and didn't further tune this parameter.

In this section, we investigate the scenario of just using an autoregressive decoder without using an autoregressive prior. We compare the exact same model in three configurations: 1) using the small-receptive-field PixelCNN as an unconditional density estimator; 2) using the small-receptive-field PixelCNN as a decoder in a VAE with Gaussian latent variables; 3) replacing the Gaussian latent variables with autoregressive flow latent variables in 2).

Table 1: Ablation on dynamically binarized MNIST.

In Table 1, we can observe that each step of modification improves density estimation performance. In addition, using an autoregressive latent code makes the latent code transmit more information, as shown in the difference of E[D_{KL}(q(z|x) || p(z))].
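To summarize the two objectives above in one place, here is a minimal sketch (assuming PyTorch; recon_log_lik and kl stand for the two scalar terms of the lower bound, and the 5%/10% constants are the threshold and increment quoted above):

import torch

def free_bits_objective(recon_log_lik, kl_per_group, lam):
    # maximize recon - sum_j max(lam, KL_j): groups using fewer than lam nats get no gradient
    return recon_log_lik - torch.clamp(kl_per_group, min=lam).sum()

def soft_free_bits_step(recon_log_lik, kl, lam, gamma, slack=0.05, step=0.1):
    objective = recon_log_lik - gamma * kl  # smoothly approaches the true lower bound as gamma -> 1
    if kl.item() > lam * (1 + slack):       # too many nats used: raise the KL pressure
        gamma *= 1 + step
    elif kl.item() < lam:                   # fewer than lam nats used: encourage information flow
        gamma *= 1 - step
    return objective, gamma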
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by Exponential Linear Units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. arXiv preprint arXiv:1502.03509, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.

Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.

Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016b.

Figure 4: CIFAR10: generated samples for different models: (a) 4x2 @ 3.12 bits/dim, (b) 7x4 @ 2.95 bits/dim.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015."}]
Bkp_y7qxe
[{"section_index": "0", "section_name": "UNSUPERVISED DEEP LEARNING OF STATE REPRESENTATION USING ROBOTIC PRIORS", "section_text": "Timothee LESORT & David FILLIAT"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Our understanding of the world depends highly on how we represent it. Using background knowledge about its complex underlying physical rules, our brain can produce intuitive and simplified representations which it can easily use to solve problems. The approach of this paper aims to reproduce this simplification process using a neural network to produce a simple low-dimensional state representation of the world from images acquired by a robot. As proposed in Jonschkowski & Brock (2015), we train the neural network in an unsupervised way, using the a priori knowledge we have about the world as loss functions called \"robotic priors\", which we implemented through a siamese network. This approach has been used to learn a one-dimensional representation of a Baxter head position from raw images. The experiment resulted in a 97.7% correlation between the learned representation and the ground truth, and shows that relevant visual features from the environment are learned.

The environment we live in is ruled by complex physical laws. However, humans are likely to interact with it without any detailed knowledge of these laws. The human brain constructs simple models of the world in order to come up with an easy, though approximate, understanding of it.

This paper aims to reproduce this behavior for robots. We want to build a simple representation of the world that retains enough information to make a machine able to use it to interact afterwards, i.e., to perform an assigned task. Finding such a minimal representation (e.g., the position of an object extracted from an image) is the standard way to implement behaviors in robots. However, this is most of the time done in a task-specific and supervised way. In this paper, we want to learn such a representation without supervision, based on generic learning objectives.

This representation is learned using deep learning. The deep neural network is trained by using images of robot experiences in a given environment and has to estimate for each image a state, which is the representation we want to learn. Instead of using a ground truth for supervised training, we make use of an approach that ensures consistency between the state representations. For this purpose, the states are constrained by \"robotic priors\" (Jonschkowski & Brock, 2015), which are an expression of the knowledge we have about physics.

The main contribution of this paper is the use of the robotic priors approach in a siamese network to train a deep convolutional neural network. The network is trained with images of the robot environment, information on the actions performed by the robot and rewards defining a task. The neural network learns a state representation usable for the robot to perform this task. The resulting neural network also displays useful feature detectors for environment analysis that could be a basis for transfer learning to similar tasks."}, {"section_index": "2", "section_name": "3 STATE REPRESENTATION LEARNING", "section_text": "The first challenge of state representation is to define which parameters are sufficient to characterize the state of the entire environment. For example, in a visually rich environment where only one
object moves through time, the environment description depends only on the object position, which happens to be the only relevant parameter. The challenge is thus to learn what the various relevant parameters are. A second challenge is to find which of those parameters are truly interesting. For this, we exploit a reward function. This function gives rewards for given states of the environment according to a defined task, and it gives the learning process a way to know which parameters are relevant to the assigned task and which are not. The neural network finally has to map the relevant parameters into a state representation of a given dimension. To evaluate this representation, there are two main possibilities:
- evaluating if the representation is compatible with a ground truth;
- determining if a reinforcement learning algorithm can use this learned representation to learn to perform the assigned task (Jonschkowski & Brock, 2015).
While the second approach is more objective, the first one is simpler, and we use it in this paper as a proof of concept, using the correlation (Eq. 1) between the learned representation and a ground truth:

Corr(\hat{s}, s) = \frac{\mathbb{E}[(\hat{s} - \mathbb{E}[\hat{s}])(s - \mathbb{E}[s])]}{\sigma_{\hat{s}} \, \sigma_{s}}    (1)

RELATED WORK
- Robotic Priors: The term prior in Bayesian statistics refers to the prior probability distribution, but like in the articles from (Bengio et al., 2013), (Jonschkowski & Brock, 2014) and (Jonschkowski & Brock, 2015), we use this word as a reference to a priori knowledge we have, and not to a probability distribution. This knowledge comes from various domains, which define several kinds of priors: Task-Specific, Generic, and Robotic Priors. (Bengio et al., 2013) state that the key to successful state representation learning is the use of "many general priors about the world around us" (cited by (Jonschkowski & Brock, 2015)). As noticed by (Jonschkowski & Brock, 2015), (Bengio et al., 2013) "proposed a list of generic priors for artificial intelligence and argue that refining this list and incorporating it into a method for representation learning will bring us closer to artificial intelligence". For these authors, however, those generic priors are too weak to be used in the robotics field, and stronger ones have to be defined to achieve efficient learning: the robotic priors used in this paper. Those priors, in a way similar to the approach of (Scholz et al., 2014), are physically grounded, which means that they aim at building a representation of the world consistent with physics.
- State representation: The goal of state representation learning is to find a mapping from a set of observations to a set of states that makes it possible to describe an environment at a given time with enough information to fulfill a given task. In our approach we impose the dimension of the state and use the priors to guide the neural network in learning a task-specific state representation in this given dimension. This is an alternative approach to selecting a state representation from a set (Seijen et al., 2014; Konidaris & Barto, 2009) or creating an auto-encoder to compress the information into a low-dimensional state (Lange et al.; Watter et al., 2015; Finn et al., 2016; van Hoof et al., 2016).
- Unsupervised learning: Using priors with neural networks is an unsupervised way of training a neural network. This approach is different from, but related to, energy-based methods (Lecun et al., 2006) and auto-encoders or denoising auto-encoders (Vincent et al., 2010) for training a deep neural network. The training process does not use energy functions directly, but more specific functions, in order to obtain a targeted representation of task-relevant parameters. Using unsupervised learning for training a deep neural network may, according to (Bengio, 2009), be efficient because when the neural network does not have information about what to learn precisely and what future learning tasks are, "it would appear very rational to collect and integrate as much information as possible about this world so as to learn what makes it tick". On the other hand, this way of training reduces overfitting risks.
- Model Architecture: The method used in this paper involves a convolutional network whose architecture is inspired by (Krizhevsky et al., 2012). It makes it possible to create easy-to-train deep networks. Furthermore, convnets are easier to use for training state visualization than the GoogleNet (Szegedy et al., 2014) or ResNet (He et al., 2015) architectures. This architecture is coupled with siamese networks like in (Chopra et al., 2005) or (Xing et al., 2003). Siamese networks are used in this paper to impose constraints on the learned representation in the implementation of the robotic priors."}, {"section_index": "3", "section_name": "4.1 ROBOTIC PRIORS", "section_text": "Robotic priors are used to provide the model to train with basic knowledge about the environment's physical features. They add constraints to make the learned representation altogether consistent with simple physical and task-specific rules. Each prior is formalized by a cost function implemented through a siamese network. By minimizing them, the model is trained according to the priors and can learn a task-specific representation. The four priors we use are the ones presented in (Jonschkowski & Brock, 2015). We will use the following notations:
- I(t) is the image perceived at time t.
- s(t) is the state at time t, and \hat{s}(t) is its estimation.
- \phi is the function which, for an image I(t), returns the state s(t); \hat{\phi} is its estimation.
- r(t) is the reward at time t.
- a(t) is the action done at time t.
- D is the input data (images, actions, rewards).
- \Delta\hat{s}(t) = \hat{s}(t+1) - \hat{s}(t) is the estimated state variation.
The definitions of the loss functions associated with the priors, and the attached assumptions, are as follows.
Temporal coherence Prior: Two states close to each other in time are also close to each other in the state representation space.

L_{Temp}(D, \hat{\phi}) = \mathbb{E}[\, \|\Delta\hat{s}(t)\|^2 \,]

Proportionality Prior: Two identical actions should result in state changes of equal magnitude.

L_{Prop}(D, \hat{\phi}) = \mathbb{E}[\, (\|\Delta\hat{s}(t_2)\| - \|\Delta\hat{s}(t_1)\|)^2 \mid a(t_1) = a(t_2) \,]

Causality Prior: Two states on which the same action gives two different rewards should not be close to each other in the state representation space.

L_{Caus}(D, \hat{\phi}) = \mathbb{E}[\, e^{-\|\hat{s}(t_2) - \hat{s}(t_1)\|^2} \mid a(t_1) = a(t_2),\; r(t_1+1) \neq r(t_2+1) \,]

Repeatability Prior: Two identical actions applied in similar states should result in similar state changes.

L_{Rep}(D, \hat{\phi}) = \mathbb{E}[\, e^{-\|\hat{s}(t_2) - \hat{s}(t_1)\|^2} \, \|\Delta\hat{s}(t_2) - \Delta\hat{s}(t_1)\|^2 \mid a(t_1) = a(t_2) \,]

The causality prior is the only one giving information about the task, and it helps discovering the underlying factors which give a reward.
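To make these four cost functions concrete, the following is a minimal sketch of how they could be written with PyTorch; the batching convention (one row per sampled pair, with ds denoting a state variation) and all variable names are our own assumptions, not code from the paper.

import torch

def temporal_loss(ds):
    # L_Temp: consecutive states should be close; ds = s(t+1) - s(t), shape (batch, dim)
    return (ds ** 2).sum(dim=1).mean()

def proportionality_loss(ds1, ds2):
    # L_Prop: pairs sharing the same action should have state changes of equal magnitude
    return ((ds2.norm(dim=1) - ds1.norm(dim=1)) ** 2).mean()

def causality_loss(s1, s2):
    # L_Caus: same action but different rewards -> push the two states apart
    return torch.exp(-((s2 - s1) ** 2).sum(dim=1)).mean()

def repeatability_loss(s1, s2, ds1, ds2):
    # L_Rep: same action in similar states -> similar state changes,
    # weighted by how similar the two states are
    w = torch.exp(-((s2 - s1) ** 2).sum(dim=1))
    return (w * ((ds2 - ds1) ** 2).sum(dim=1)).mean()

The total objective is then the sum of the four terms, each evaluated on batches of image pairs selected so that the corresponding condition (same action, different rewards, and so on) holds.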
"}, {"section_index": "4", "section_name": "4.2 SIAMESE NETWORKS", "section_text": "The training using the priors needs the simultaneous estimation of several states to perform the optimization process. To this end, our approach uses siamese networks. They are neural networks which share all their parameters. With this method, the cost functions can be applied on several states computed at the same time. Those cost functions require choosing the right images as input for the siamese networks. For example, for applying the temporal prior, two input images are chosen following each other in time. Two siamese networks are used to compute the state estimation for each image and to apply the temporal cost function. The backward propagation can then be done by computing gradients based on the loss function. Another example would be to use the proportionality prior, in which two state variations are estimated from the states of two image pairs. Therefore four images and four siamese networks are needed to compute the proportionality cost function. Figure 1 shows the global network architecture. An image is input to each network, and the cost function is applied on all the outputs. The layers are shared among all the siamese networks.
Figure 1: Illustration of the architecture with four Siamese networks (convolutional layers with 3*3 filters, a convolutional layer with 1*1 filters, and fully connected layers, whose outputs feed the prior cost function)."}, {"section_index": "5", "section_name": "4.3 MODEL ARCHITECTURE", "section_text": "The architecture of the network uses four stacked convolution layers with 3*3 filters (inspired by (Simonyan & Zisserman, 2014)) with, for each convolution layer, batch normalization, ReLU activation and 2*2 max pooling (in that order). The convolution layers have 32-64-128-256 filters per layer, respectively. On top of the network, there are two stacked fully connected layers (500 neurons, and then one neuron for the one-dimensional output). Between the last convolution layer with 3*3 filters and the fully connected layers, we insert a convolution layer with one 1*1 filter. This architecture helps the network to choose which feature map it wants to use to build its representation. This leads to a reduction of parameters in the fully connected layer by a factor of 256.
We use batch normalization (Ioffe & Szegedy, 2015) for training, which helps to keep reasonable internal values and to make the training possible. In the experiment we ran without batch normalization, the network was unable to learn the representation. ReLU is used for fast learning and for increasing the sparsity of neuron activations.
The data we used for training is a set of RGB images of 200 pixels * 200 pixels which come from simulation. Those images are taken by the head camera of a Baxter robot. Images thus represent the front view of the robot and what it is able to see with its camera. The input images are normalized to 0 mean and 1 standard deviation before use, but for training we also apply data augmentation to the images in order to improve robustness to noise and luminosity variations. To make the training resistant to those perturbations, we add a random color filter to training samples like in (Krizhevsky et al., 2012), weighted with a Gaussian with random parameters. This transformation aims to make the representation invariant to luminosity variations. We also add noise with the same mean and standard deviation as the images. Those two data augmentations are applied online during the training process. This method forces the neural network to learn many feature detectors to make its representation robust.
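A sketch of the encoder described above, in PyTorch; the padding of the 3*3 convolutions and the flattened size for 200*200 inputs are our assumptions, since they are not fully specified in the text.

import torch.nn as nn

def conv_block(c_in, c_out):
    # conv 3*3 -> batch normalization -> ReLU -> 2*2 max pooling, as in Section 4.3
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

encoder = nn.Sequential(
    conv_block(3, 32),
    conv_block(32, 64),
    conv_block(64, 128),
    conv_block(128, 256),
    nn.Conv2d(256, 1, kernel_size=1),  # 1*1 convolution selects one feature map
    nn.Flatten(),
    nn.Linear(12 * 12, 500),           # a 200*200 input gives a 12*12 map after 4 poolings
    nn.ReLU(),
    nn.Linear(500, 1),                 # one-dimensional state
)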
"}, {"section_index": "6", "section_name": "4.5 TRAINING", "section_text": "The training has been done with Adagrad (Duchi et al., 2011) with the following hyper-parameters:
- Batch size: 12
- Learning rate: 0.001
- Weight decay: 0
- Epochs: 200
- Iterations/epoch: 10
The training is very fast in comparison with ImageNet (Deng et al., 2009) classification training, for example. Our understanding of this behavior is that the network does not need as many high-level feature detectors as in classification, because it is specialized in only one environment. A training without data augmentation is done to bootstrap the neural network before the training with data augmentation.
This environment is produced by a simulation of a Baxter robot developed with the Gazebo software. The robot is in front of a table (Figure 2). The objective is to produce a representation that will make it possible to control the head joint position. The images are taken when the robot moves its head from right to left or left to right; a unit reward is obtained when the head is at maximum left or maximum right. In this context, actions are defined by the movement of the head between t and t + 1. The representation constructed by the neural network is one-dimensional and should be consistent with the actual head position, which is used as ground truth. The dataset based on the generated images is a set of 27 chronological series of images. Each series is composed of approximately 75 images. For each series the robot arms have a different position, but only the state of the head joint changes through time. We know for each image at what time it has been taken and whether the actual state gives a reward. We also know which action has been made between the image at time t and the image at time t+1. The actions are 'move left' and 'move right' with a certain angle. With these actions we can find pairs of images with the same angle variation, therefore compatible with the proportionality and repeatability priors. Furthermore, with the reward information we can find which images generate a reward with which action. Those images are then gathered to constitute an image set compatible with the causality cost function.
Figure 2: Illustration of the Baxter simulation used to generate data.
During the training process, the sets of images compatible with a certain prior are randomly sampled for training inside 26 of the 27 series. The last series is used for validation.
Figure 3: Sum of the cost functions at each epoch."}, {"section_index": "7", "section_name": "5.3 RESULT", "section_text": "The resulting state representation of the head position for the validation data, after training with the deep model presented above, and the results after training with a one-fully-connected-layer model similar to the one used in (Jonschkowski & Brock, 2015), are shown in Figure 5. The correlations computed between the state representations learned by both models and the ground truth are in Table 1. Those results show that both models are able to learn a good state representation of the Baxter head position in the case where no noise is present. However, Table 1 shows that the deep neural network is much more robust to noise and luminosity perturbations than the one-layer network. The evolution at each epoch of the correlation during bootstrapping and training is shown in Figure 6.
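For reference, the correlation score of Eq. 1 between the learned one-dimensional representation and the ground truth can be computed directly; this is a straightforward NumPy transcription, not code from the paper.

import numpy as np

def correlation(s_hat, s):
    # Corr(s_hat, s) = E[(s_hat - E[s_hat]) (s - E[s])] / (sigma_s_hat * sigma_s)
    s_hat, s = np.asarray(s_hat, float), np.asarray(s, float)
    cov = np.mean((s_hat - s_hat.mean()) * (s - s.mean()))
    return cov / (s_hat.std() * s.std())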
Figure 5: Comparison of the ground truth of the head position for each image of the validation set (Ground truth), the estimation of the state based on the original images (State), and the state based on the images with noise and random luminosity perturbations (State with DA). The left-hand figure shows the result after training a deep neural network; the right-hand figure shows the result after training a one-layer fully connected neural network.

test                       | one Layer Network | Deep Network
without Data Augmentation  | 97.0 %            | 97.7 %
with Data Augmentation     | 61.7 %            | 96.4 %

Table 1: Influence of the neural network deepness on the correlation between learned representation and ground truth on the validation set.
The training process is done by minimizing each of the cost functions, but those functions are in conflict to impose their constraints. The result of the training is an equilibrium between the cost function minimizations. The sum of the prior costs is presented in Figure 3, which shows the decrease of the global cost. The value of each cost function separately is shown in Figure 4. It is not surprising that all the cost functions are not minimized the same way. For example, the temporal function aims at minimizing the distance between representations, whereas causality aims at maximizing it.
Figure 4: Cost functions at each epoch.
Figure 6: Those figures show the evolution of the correlation between ground truth and learned state representation on the validation set. The left-hand figure shows the correlation at each epoch of the bootstrapping training. The right-hand figure shows the correlation at each epoch of the training with data augmentation. The final results in Tables 1 and 2 are the absolute values of the correlation.
Regarding the learned features, using data augmentation makes it possible to train the neural network to use a larger part of the image. To illustrate this assumption, we train the network with and without data augmentation and compare the activations on the last convolution layer. Those activations are shown in Figure 7. We can see that training without data augmentation makes the neural network use only the position of the blue button in the image, while with data augmentation the neural network uses many more pieces of information, such as the table top border. Furthermore, Table 2 shows that training with data augmentation slightly increases the performance of the deep neural network.
Figure 7: The figures show a representation of the activation produced by the neural network on its last convolution layer (10 pixels * 10 pixels). For each figure, on the left is presented a feature map for a given image, on the right the original image (200 pixels * 200 pixels), and in the middle the superposition of both images, to show which part of the image produces activation. The left-hand side image shows activation produced by the image after training without data augmentation. The right-hand side image compares activations between the cases with and without data augmentation, after training with data augmentation.
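The online data augmentation described in the data section above can be sketched as follows. The exact parameterization of the random color filter is not given in the paper, so the Gaussian parameter ranges below are illustrative assumptions.

import numpy as np

def augment(image, rng):
    # image: float array (3, H, W), already normalized to zero mean and unit std.
    # Random color filter: one multiplicative gain per channel, drawn from a
    # Gaussian whose own parameters are random (assumed ranges).
    mu = rng.uniform(0.8, 1.2)
    sigma = rng.uniform(0.0, 0.2)
    out = image * rng.normal(mu, sigma, size=(3, 1, 1))
    # Additive noise with the same mean and standard deviation as the images.
    out = out + rng.normal(image.mean(), image.std(), size=image.shape)
    return out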
training \ test            | with Data Augmentation | without Data Augmentation
with Data Augmentation     | 96.4 %                 | 97.7 %
without Data Augmentation  | 94.0 %                 | 97.2 %

Table 2: Correlation results between learned representation and ground truth on the validation set, after training the deep neural network with and without data augmentation.
Beside the performance gain, our deep model makes it possible to learn relevant visual features that could be interesting for other tasks in a transfer learning scenario. For example, the left image of Figure 7 shows that a button of the environment has been learned to be a good feature for the current task, but could obviously be used in other scenarios."}, {"section_index": "8", "section_name": "6 DISCUSSION", "section_text": "In the reported preliminary experiments, the simulation environment is not as rich as the real world, therefore the variability of the input images is low. However, this approach should work with real images, making the neural network learn more specialized feature detectors. It will be tested in further experiments.
A limitation of our approach is the assessment of the training quality. In the case presented in this paper, the correlation between the state representation and a ground truth is a possible measurement of the training quality. On the other hand, had we tried to learn a representation in a higher dimension, the correlation could have been irrelevant. Furthermore, if the process is applied to a situation where a ground truth is unavailable, the correlation cannot be measured. A possible method would be to use a reinforcement learning algorithm to measure if the learned representation is suitable for the task, like in (Jonschkowski & Brock, 2015)."}, {"section_index": "9", "section_name": "7 CONCLUSION", "section_text": "This approach provides us with evidence that a deep network trained by the method of robotic priors can learn state representations. This technique makes it possible to learn a one-dimensional representation and, furthermore, to train a network to be robust to both noise and luminosity perturbations. This approach makes it possible to train a deep neural network to learn, in an unsupervised way, specialized feature detectors used to build a state representation. Those trained feature detectors could be reused or transferred for learning another state representation in this environment. The next steps will be to learn more complex representations, like object positions in three dimensions, and to use those representations within a reinforcement learning process to check if a robot can use the learned representations to perform various tasks. The use of real images for training is also one of our goals."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Clement Masson for fruitful discussions and his help in generating the data used in this paper. This work is supported by the DREAM project through the European Union Horizon 2020 research and innovation program under grant agreement No 640891."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Yoshua Bengio. Learning deep architectures for AI. Found. Trends Mach. Learn., 2(1), 2009. ISSN 1935-8237. doi: 10.1561/2200000006.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013. ISSN 0162-8828. doi: 10.1109/TPAMI.2013.50.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, 2005.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159, July 2011. ISSN 1532-4435.
Rico Jonschkowski and Oliver Brock. State representation learning in robotics: Using prior knowledge about physical interaction. In Proceedings of Robotics: Science and Systems, July 2014.
Rico Jonschkowski and Oliver Brock. Learning state representations with robotic priors. Autonomous Robots, 39(3):407-428, 2015. ISSN 0929-5593.
George Konidaris and Andrew Barto. Efficient skill learning using abstraction selection. In IJCAI, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105, 2012.
Sascha Lange, Martin Riedmiller, and Arne Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In IJCNN, 2012. doi: 10.1109/IJCNN.2012.6252823.
Yann LeCun, Sumit Chopra, Raia Hadsell, and Fu Jie Huang. A tutorial on energy-based learning. In G. Bakir, T. Hofmann, B. Scholkopf, A. Smola, and B. Taskar (eds.), Predicting Structured Data. MIT Press, 2006.
Jonathan Scholz, Martin Levihn, Charles Lee Isbell, and David Wingate. A physics-based model prior for object-oriented mdps. In ICML, 2014.
Harm Seijen, Shimon Whiteson, and Leon Kester. Efficient abstraction selection in reinforcement learning. Comput. Intell., 30(4):657-699, November 2014. ISSN 0824-7935.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371-3408, December 2010. ISSN 1532-4435.
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28, pp. 2746-2754. Curran Associates, Inc., 2015.
Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. Distance metric learning, with application to clustering with side-information. In Advances in Neural Information Processing Systems 15, pp. 505-512. MIT Press, 2003."}]
SJc1hL5ee
[{"section_index": "0", "section_name": "COMPRESSING TEXT CLASSIFICATION MODELS", "section_text": "{ajoulin, egrave, bojanowski, matthijs, ,rvj,tmikolov}@fb.com\nWe consider the problem of producing compact architectures for text classifica- tion. such that the full model fits in a limited amount of memory. After consid- ering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent quan- tization artefacts. Combined with simple approaches specifically adapted to text classification, our approach derived from fast Text requires, at test time, only a fraction of the memory compared to the original FastText, without noticeably sacrificing quality in terms of classification accuracy. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fast Text while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Text classification is an important problem in Natural Language Processing (NLP). Real world use. cases include spam filtering or e-mail categorization. It is a core component in more complex sys tems such as search and ranking. Recently, deep learning techniques based on neural networks. have achieved state of the art results in various NLP applications. One of the main successes of deep learning is due to the effectiveness of recurrent networks for language modeling and their applicatior. to speech recognition and machine translation (Mikolov2012). However, in other cases including several text classification problems, it has been shown that deep networks do not convincingly beai. the prior state of the art techniques (Wang & Manning. 2012 Joulin et al.]2016).\nIn spite of being (typically) orders of magnitude slower to train than traditional techniques based on n-grams, neural networks are often regarded as a promising alternative due to compact model sizes, in particular for character based models. This is important for applications that need to run on systems with limited memory such as smartphones.\nThis paper specifically addresses the compromise between classification accuracy and the model. size. We extend our previous work implemented in the fast Text library' It is based on n-gram. features, dimensionality reduction, and a fast approximation of the softmax classifier (Joulin et al. 2016). We show that a few key ingredients, namely feature pruning, quantization, hashing, and re. training, allow us to produce text classification models with tiny size, often less than 100kB wher trained on several popular datasets, without noticeably sacrificing accuracy or speed..\nWe plan to publish the code and scripts required to reproduce our results as an extension of the f a st Text library, thereby providing strong reproducible baselines for text classifiers that optimize the compromise between the model size and accuracy. We hope that this will help the engineering. community to improve existing applications by using more efficient models..\nThis paper is organized as follows. Section 2lintroduces related work, Section|3 describes our tex classification model and explains how we drastically reduce the model size. 
Section 4 shows the effectiveness of our approach in experiments on multiple text classification benchmarks.
1 https://github.com/facebookresearch/fastText"}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Models for text classification. Text classification is a problem that has its roots in many applications such as web search, information retrieval and document classification (Deerwester et al., 1990; Pang & Lee, 2008). Linear classifiers often obtain state-of-the-art performance while being scalable (Agarwal et al., 2014; Joachims, 1998; Joulin et al., 2016; McCallum & Nigam, 1998). They are particularly interesting when associated with the right features (Wang & Manning, 2012). They usually require storing embeddings for words and n-grams, which makes them memory inefficient.
Compression of language models. Our work is related to compression of statistical language models. Classical approaches include feature pruning based on entropy (Stolcke, 2000) and quantization. Pruning aims to keep only the most important n-grams in the model, leaving out those with probability lower than a specified threshold. Further, the individual n-grams can be compressed by quantizing the probability value, and by storing the n-gram itself more efficiently than as a sequence of characters. Various strategies have been developed, for example using tree structures or hash functions, and are discussed in (Talbot & Brants, 2008).
Compression for similarity estimation and search. There is a large body of literature on how to compress a set of vectors into compact codes, such that the comparison of two codes approximates a target similarity in the original space. The typical use-case of these methods considers an indexed dataset of compressed vectors, and a query for which we want to find the nearest neighbors in the indexed set. One of the most popular is Locality-sensitive hashing (LSH) by Charikar (2002), which is a binarization technique based on random projections that approximates the cosine similarity between two vectors through a monotonous function of the Hamming distance between the two corresponding binary codes. In our paper, LSH refers to this binarization strategy. Many subsequent works have improved this initial binarization technique, such as spectral hashing (Weiss et al., 2009), or Iterative Quantization (ITQ) (Gong & Lazebnik, 2011), which learns a rotation matrix minimizing the quantization loss of the binarization. We refer the reader to two recent surveys by Wang et al. (2014) and Wang et al. (2015) for an overview of the binary hashing literature.
Beyond these binarization strategies, more general quantization techniques derived from Jegou et al. (2011) offer better trade-offs between memory and the approximation of a distance estimator. The Product Quantization (PQ) method approximates the distances by calculating, in the compressed domain, the distance between their quantized approximations. This method is statistically guaranteed to preserve the Euclidean distance between the vectors within an error bound directly related to the quantization error. The original PQ has been concurrently improved by Ge et al. (2013) and Norouzi & Fleet (2013), who learn an orthogonal transform minimizing the overall quantization loss. In our paper, we will consider the Optimized Product Quantization (OPQ) variant (Ge et al., 2013).
Softmax approximation. The aforementioned works approximate either the Euclidean distance or the cosine similarity (both being equivalent in the case of unit-norm vectors). However, in the context of fastText, we are specifically interested in approximating the maximum inner product involved in a softmax layer. Several approaches derived from LSH have been recently proposed to achieve this goal, such as Asymmetric LSH by Shrivastava & Li (2014), subsequently discussed by Neyshabur & Srebro (2015). In our work, since we are not constrained to purely binary codes, we resort to a more traditional encoding by employing a magnitude/direction parametrization of our vectors. Therefore we only need to encode/compress a unitary d-dimensional vector, which fits the aforementioned LSH and PQ methods well.
Neural network compression models. Recently, several research efforts have been conducted to compress the parameters of architectures involved in computer vision, namely for state-of-the-art Convolutional Neural Networks (CNNs) (Han et al., 2016; Lin et al., 2015). Some use vector quantization (Gong et al., 2014) while others binarize the network (Courbariaux et al., 2016). Denil et al. (2013) show that such classification models are easily compressed because they are over-parametrized, which concurs with early observations by LeCun et al. (1990).
2 In the literature, LSH refers to multiple distinct strategies related to the Johnson-Lindenstrauss lemma. For instance, LSH sometimes refers to a partitioning technique with random projections allowing for sublinear search via cell probes, see for instance the E2LSH variant of Datar et al. (2004).
Some of these works aim at reducing both the model size and the speed. In our case, since the fastText classifier on which our proposal is built is already very efficient, we are primarily interested in reducing the size of the model while keeping a comparable classification efficiency."}, {"section_index": "3", "section_name": "3.1 TEXT CLASSIFICATION", "section_text": "In the context of text classification, linear classifiers (Joulin et al., 2016) remain competitive with more sophisticated, deeper models, and are much faster to train. On top of standard tricks commonly used in linear text classification (Agarwal et al., 2014; Wang & Manning, 2012; Weinberger et al., 2009), Joulin et al. (2016) use a low rank constraint to reduce the computation burden while sharing information between different classes. This is especially useful in the case of a large output space, where rare classes may have only a few training examples. In this paper, we focus on a similar model, that is, one which minimizes the softmax loss \ell over N documents:

\frac{1}{N} \sum_{n=1}^{N} \ell(y_n, B A x_n),    (1)

where x_n is a bag of one-hot vectors and y_n the label of the n-th document. In the case of a large vocabulary and a large output space, the matrices A and B are big and can require gigabytes of memory. Below, we describe how we reduce this memory usage.
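As a concrete picture of the model in Eq. (1), here is a minimal NumPy sketch; it ignores the hierarchical softmax and the other speed-ups of the actual fastText implementation, and the averaging of the input embeddings follows Joulin et al. (2016).

import numpy as np

def predict_proba(feature_ids, A, B):
    # feature_ids: indices of the words/n-grams present in the document
    # (the non-zero entries of the bag-of-features vector x_n);
    # A: (vocab, d) input embeddings, B: (classes, d) output matrix.
    h = A[feature_ids].mean(axis=0)   # document representation A x_n
    z = B @ h                         # class scores B A x_n
    z = z - z.max()                   # numerical stability
    e = np.exp(z)
    return e / e.sum()                # softmax

def softmax_loss(feature_ids, label, A, B):
    # the per-document loss l(y_n, B A x_n) of Eq. (1)
    return -np.log(predict_proba(feature_ids, A, B)[label])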
Product quantization is a popular method for compressed-domain approximate nearest neighbor search (Jegou et al., 2011). As a compression technique, it approximates a real-valued vector by finding the closest vector in a pre-defined structured set of centroids, referred to as a codebook. This codebook is not enumerated, since it is extremely large. Instead it is implicitly defined by its structure: a d-dimensional vector x \in R^d is approximated as

\hat{x} = \sum_{i=1}^{k} q_i(x),

where the different subquantizers q_i : x \mapsto q_i(x) are complementary in the sense that their respective centroids lie in distinct orthogonal subspaces, i.e., \forall i \neq j, \forall x, y, \langle q_i(x), q_j(y) \rangle = 0. In the original PQ, the subspaces are aligned with the natural axis, while OPQ learns a rotation, which amounts to alleviating this constraint and to not depend on the original coordinate system. Another way to see this is to consider that PQ splits a given vector x into k subvectors x^i, i = 1 ... k, each of dimension d/k: x = [x^1 ... x^i ... x^k], and quantizes each sub-vector using a distinct k-means quantizer. Each subvector x^i is thus mapped to the closest centroid amongst 2^b centroids, where b is the number of bits required to store the quantization index of the subquantizer, typically b = 8. The reconstructed vector can take 2^{kb} distinct reproduction values, and is stored in kb bits.
PQ estimates the inner product in the compressed domain as

x^\top y \approx \hat{x}^\top y = \sum_{i=1}^{k} q_i(x)^\top y^i.

This is a straightforward extension of the square L2 distance estimation of Jegou et al. (2011). In practice, the vector estimate \hat{x} is trivially reconstructed from the codes, i.e., from the quantization indexes, by concatenating these centroids.
The two parameters involved in PQ, namely the number of subquantizers k and the number of bits b per quantization index, are typically set to k \in [2, d/2], and b = 8 to ensure byte-alignment.
Discussion. PQ offers several interesting properties in our context of text classification. Firstly, the training is very fast because the subquantizers have a small number of centroids, i.e., 256 centroids for b = 8. Secondly, at test time it allows the reconstruction of the vectors with almost no computational and memory overhead. Thirdly, it has been successfully applied in computer vision, offering much better performance than binary codes, which makes it a natural candidate to compress relatively shallow models. As observed by Sanchez & Perronnin (2011), using PQ just before the last layer incurs a very limited loss in accuracy when combined with a support vector machine.
Bottom-up strategy: re-training. The first works aiming at compressing CNN models, like the one proposed by (Gong et al., 2014), used the reconstruction from off-the-shelf PQ, i.e., without any re-training. However, as observed in Sablayrolles et al. (2016), when using quantization methods like PQ, it is better to re-train the layers occurring after the quantization, so that the network can re-adjust itself to the quantization. There is a strong argument for this re-training strategy: the square magnitude of vectors is reduced, on average, by the average quantization error for any quantizer satisfying the Lloyd conditions; see Jegou et al. (2011) for details.
This suggests a bottom-up learning strategy where we first quantize the input matrix, then retrain and quantize the output matrix (the input matrix being frozen). Experiments in Section 4 show that it is worth adopting this strategy.
Memory savings with PQ. In practice, the bottom-up PQ strategy offers a compression factor of 10 without any noticeable loss of performance. Without re-training, we notice a drop in accuracy between 0.1% and 0.5%, depending on the dataset and setting; see Section 4 and the appendix.
In the context of text classification, the norms of the vectors are widely spread, typically with a ratio of 1,000 between the max and the min. Therefore k-means performs poorly, because it optimizes an absolute error objective, so it maps all low-norm vectors to 0. A simple solution is to separate the norm and the angle of the vectors and to quantize them separately. This allows a quantization with no loss of performance, yet requires an extra b bits per vector.
The memory usage strongly depends on the size of the vocabulary, which can be large in many text classification tasks. While it is clear that a large part of the vocabulary is useless or redundant, directly reducing the vocabulary to the most frequent words is not satisfactory: most of the frequent words, like "the" or "is", are not discriminative, in contrast to some rare words, e.g., in the context of tag prediction. In this section, we discuss a few heuristics to reduce the space taken by the dictionary. They lead to major memory reduction, in extreme cases by a factor 100. We experimentally show that this drastic reduction is complementary with the PQ compression method, meaning that the combination of both strategies reduces the model size by a factor up to 1,000 for some datasets.
Pruning the vocabulary. Discovering which word or n-gram must be kept to preserve the overall performance is a feature selection problem. While many approaches have been proposed to select groups of variables during training (Bach et al., 2012; Meier et al., 2008), we are interested in selecting a fixed subset of K words and n-grams from a pre-trained model. This can be achieved by selecting the K embeddings that preserve as much of the model as possible, which can be reduced to selecting the K words and n-grams associated with the highest norms.
While this approach offers major memory savings, it has one drawback occurring in some particular cases: some documents may not contain any of the K best features, leading to a significant drop in performance. It is thus important to keep the K best features under the condition that they cover the whole training set. More formally, the problem is to find a subset S in the feature set V that maximizes the sum of their norms w_s under the constraint that all the documents in the training set D are covered:

\max_{S \subseteq V} \sum_{s \in S} w_s \quad \text{s.t.} \quad |S| \le K, \; P \mathbf{1}_S \ge \mathbf{1}_D,

where P is a matrix such that P_{ds} = 1 if the s-th feature is in the d-th document, and 0 otherwise. This problem is directly related to set covering problems that are NP-hard (Feige, 1998). Standard greedy approaches require storing an inverted index or doing multiple passes over the dataset, which is prohibitive on very large datasets (Chierichetti et al., 2010). This problem can be cast as an instance of online submodular maximization with a rank constraint (Badanidiyuru et al., 2014; Bateni et al., 2010). In our case, we use a simple online parallelizable greedy approach: for each document, we verify if it is already covered by a retained feature and, if not, we add the feature with the highest norm to our set of retained features. If the number of features is below K, we add the features with the highest norm that have not yet been picked.
Figure 1: Accuracy as a function of the memory per vector/embedding on 3 datasets (Sogou, Yahoo, Yelp full) from Zhang et al. (2015), for the full model and for LSH, PQ and OPQ with and without explicit norm encoding. Note, an extra byte is required when we encode the norm explicitly ("norm").
Hashing trick & Bloom filter. On small models, the dictionary can take a significant portion of the memory. Instead of saving it, we extend the hashing trick used in Joulin et al. (2016) to both words and n-grams. This strategy is also used in Vowpal Wabbit (Agarwal et al., 2014) in the context of online training. This allows us to save around 1-2Mb with almost no overhead at test time (just the cost of computing the hashing function).
Pruning the vocabulary while using the hashing trick requires keeping a list of the indices of the K remaining buckets. At test time, a binary search over the list of indices is required. It has a complexity of O(log(K)) and a memory overhead of a few hundred kilobytes. Using Bloom filters instead reduces the complexity to O(1) at test time and saves a few hundred kilobytes. However, in practice, it degrades performance.
This section evaluates the quality of our model compression pipeline and compares it to other compression methods on different text classification problems, and to other compact text classifiers.
Evaluation protocol and datasets. Our experimental pipeline is as follows: we train a model using fastText with the default setting unless specified otherwise. That is, 2M buckets, a learning rate of 0.1 and 10 training epochs. The dimensionality d of the embeddings is set to powers of 2 to avoid border effects that could make the interpretation of the results more difficult. As baselines, we use Locality-Sensitive Hashing (LSH) (Charikar, 2002), PQ (Jegou et al., 2011) and OPQ (Ge et al., 2013) (the non-parametric variant). Note that we use an improved version of LSH where random orthogonal matrices are used instead of random matrix projections (Jegou et al., 2008). In a first series of experiments, we use the 8 datasets and evaluation protocol of Zhang et al. (2015). These datasets contain a few million documents and have at most 10 classes. We also explore the limit of quantization on a dataset with an extremely large output space, that is, a tag dataset extracted from the YFCC100M collection (Thomee et al., 2016),3 referred to as FlickrTag in the rest of this paper.
3 Data available at https://research.facebook.com/research/fasttext/
Figure 2: Loss of accuracy as a function of the model size. We compare the compressed model with different levels of pruning with NPQ and the full fastText model. We also compare with Zhang et al. (2015) and Xiao & Cho (2016). Note that the size is in log scale."}, {"section_index": "4", "section_name": "4.1 SMALL DATASETS", "section_text": "Compression techniques. We compare three popular methods used for similarity estimation with compact codes: LSH, PQ and OPQ on the datasets released by Zhang et al. (2015). Figure 1 shows the accuracy as a function of the number of bytes used per embedding, which corresponds to the number k of subvectors in the case of PQ and OPQ. See more results in the appendix. As discussed in Section 2, LSH reproduces the cosine similarity and is therefore not adapted to un-normalized data. Therefore we only report results with normalization. Once normalized, PQ and OPQ are almost lossless even when using only k = 4 subquantizers per embedding (equivalently, bytes). We observe in practice that using k = d/2, i.e., half of the components of the embeddings, works well in practice.
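To illustrate the normalized variant (NPQ) adopted below, here is a didactic sketch: the magnitude is quantized separately and PQ is applied to the unit-norm direction. We use scikit-learn's k-means for the subquantizers; this is not the fastText implementation, and it assumes d is a multiple of k.

import numpy as np
from sklearn.cluster import KMeans

def train_npq(W, k, b=8):
    # W: (n, d) embedding matrix; one subquantizer with 2**b centroids per
    # d/k-dimensional subvector of the unit-norm directions, plus a scalar
    # quantizer for the norms.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    U = W / norms
    subquantizers = [KMeans(n_clusters=2 ** b, n_init=1).fit(s)
                     for s in np.split(U, k, axis=1)]
    norm_quantizer = KMeans(n_clusters=2 ** b, n_init=1).fit(norms)
    return subquantizers, norm_quantizer

def reconstruct(codes, norm_code, subquantizers, norm_quantizer):
    # rebuild one embedding from its k + 1 one-byte codes
    u = np.concatenate([q.cluster_centers_[c]
                        for q, c in zip(subquantizers, codes)])
    return norm_quantizer.cluster_centers_[norm_code, 0] * u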
The difference between the normalized versions of PQ and OPQ is limited and depends on the dataset. Therefore we adopt the normalized PQ (NPQ) for the rest of this study, since it is faster to train.

word | Entropy | Norm        word          | Entropy | Norm
.    | 1       | 354         mediocre      | 1399    | 1
,    | 2       | 176         disappointing | 454     | 2
the  | 3       | 179         so-so         | 2809    | 3
and  | 4       | 1639        lacks         | 1244    | 4
i    | 5       | 2374        worthless     | 1757    | 5
a    | 6       | 970         dreadful      | 4358    | 6
to   | 7       | 1775        drm           | 6395    | 7
it   | 8       | 1956        poorly        | 716     | 8
of   | 9       | 2815        uninspired    | 4245    | 9
this | 10      | 3275        worst         | 402     | 10

Table 1: Best ranked words w.r.t. entropy (left) and norm (right) on the Amazon full review dataset. We give the rank for both criteria. The norm ranking filters out words carrying little information.
Pruning. Figure 2 shows the performance of our model with different sizes. We fix k = d/2 and use different pruning thresholds. NPQ offers a compression rate of 10 compared to the full model. As the pruning becomes more aggressive, the overall compression can increase up to 1,000 with little drop of performance and no additional overhead at test time. In fact, using a smaller dictionary makes the model faster at test time. We also compare with character-level Convolutional Neural Networks (CNNs) (Zhang et al., 2015; Xiao & Cho, 2016). They are attractive models for text classification because they achieve similar performance with less memory usage than linear models (Xiao & Cho, 2016). Even though fastText with the default setting uses more memory, NPQ is already on par with CNNs' memory usage. Note that CNNs are not quantized, and it would be worth seeing how much they can be quantized with no drop of performance. Such a study is beyond the scope of this paper. Our pruning is based on the norm of the embeddings according to the guidelines of Section 3.3. Table 1 compares the ranking obtained with norms to the ranking obtained using entropy, which is commonly used in unsupervised settings (Stolcke, 2000).

Dataset           | full (size, acc.) | 64KiB | 32KiB | 16KiB
AG                | 65M  92.1         | 91.4  | 90.6  | 89.1
Amazon full       | 108M 60.0         | 58.8  | 56.0  | 52.9
Amazon pol.       | 113M 94.5         | 93.3  | 92.1  | 89.3
DBPedia           | 87M  98.4         | 98.2  | 98.1  | 97.4
Sogou             | 73M  96.4         | 96.4  | 96.3  | 95.5
Yahoo             | 122M 72.1         | 70.0  | 69.0  | 69.2
Yelp full         | 78M  63.8         | 63.2  | 62.4  | 58.7
Yelp pol.         | 77M  95.7         | 95.3  | 94.9  | 93.2
Average diff. [%] |      0            | -0.8  | -1.7  | -3.5

Table 2: Performance on very small models. We use a quantization with k = 1, hashing and an extreme pruning. The last row shows the average drop of performance for different sizes.
Extreme compression. Finally, in Table 2 we explore the limit of quantized models by looking at the performance obtained for models under 64KiB. Surprisingly, even at 64KiB and 32KiB, the drop of performance is only around 0.8% and 1.7%, despite a compression rate of 1,000 - 4,000.
In this section, we explore the limit of compression algorithms on very large datasets. Similar to Joulin et al. (2016), we consider a hashtag prediction dataset containing 312,116 labels. We set the minimum count for words at 10, leading to a dictionary of 1,427,667 words. We take 10M buckets for n-grams and a hierarchical softmax. We refer to this dataset as FlickrTag.
Output encoding. We are interested in understanding how the performance degrades if the classifier is also quantized (i.e., the matrix B in Eq. 1) and when the pruning is at the limit of the minimum number of features required to cover the full dataset.
Model               | k   | norm | retrain | Acc. | Size
full (uncompressed) |     |      |         | 45.4 | 12 GiB
Input               | 128 |      |         | 45.0 | 1.7 GiB
Input               | 128 | x    |         | 45.3 | 1.8 GiB
Input               | 128 | x    | x       | 45.5 | 1.8 GiB
Input+Output        | 128 | x    |         | 45.2 | 1.5 GiB
Input+Output        | 128 | x    | x       | 45.4 | 1.5 GiB

Table 3: FlickrTag: Influence of quantizing the output matrix on performance. We use PQ for quantization with an optional normalization. We also retrain the output matrix after quantizing the input one. The "norm" refers to the separate encoding of the magnitude and angle, while "retrain" refers to the re-training bottom-up PQ method described in Section 3.2.
Table 3 shows that quantizing both the "input" matrix (i.e., A in Eq. 1) and the "output" matrix (i.e., B) does not degrade the performance compared to the full model. We use embeddings with d = 256 dimensions and use k = d/2 subquantizers. We do not use any text specific tricks, which leads to a compression factor of 8. Note that even if the output matrix is not retrained over the embeddings, the performance is only 0.2% away from the full model. As shown in the Appendix, using fewer subquantizers significantly decreases the performance for a small memory gain.

Model        | full  | Entropy pruning | Norm pruning    | Max-Cover pruning
#embeddings  | 11.5M | 2M     | 1M     | 2M     | 1M     | 2M     | 1M
Memory       | 12GiB | 297MiB | 174MiB | 305MiB | 179MiB | 305MiB | 179MiB
Coverage [%] | 88.4  | 70.5   | 70.5   | 73.2   | 61.9   | 88.4   | 88.4
Accuracy     | 45.4  | 32.1   | 30.5   | 41.6   | 35.8   | 45.5   | 43.9

Table 4: FlickrTag: Comparison of entropy pruning, norm pruning and max-cover pruning methods. We show the coverage of the test set for each method.
Pruning. Table 4 shows how the performance evolves with pruning. We measure this effect on top of a fully quantized model. The full model misses 11.6% of the test set because of missing words (some documents are either only composed of hashtags or have only rare words). There are 312,116 labels and thus it seems reasonable to keep embeddings in the order of the million. A naive pruning with 1M features misses about 30-40% of the test set, leading to a significant drop of performance. On the other hand, even though the max-coverage pruning approach was set on the train set, it does not suffer from any coverage loss on the test set. This leads to a smaller drop of performance. If the pruning is too aggressive, however, the coverage decreases significantly."}, {"section_index": "5", "section_name": "5 FUTURE WORK", "section_text": "It may be possible to obtain further reduction of the model size in the future. One idea is to condition the size of the vectors (both for the input features and the labels) on their frequency (Chen et al., 2015; Grave et al., 2016). For example, it is probably not worth representing the rare labels by full 256-dimensional vectors in the case of the FlickrTag dataset. Thus, conditioning the vector size on the frequency and norm seems like an interesting direction to explore in the future.
We may also consider combining the entropy and norm pruning criteria: instead of keeping the features in the model based just on the frequency or the norm, we can use both to keep a good set of features. This could help to keep features that are both frequent and discriminative, and thereby to reduce the coverage problem that we have observed.
Additionally, instead of pruning out the less useful features, we can decompose them into smaller units (Mikolov et al., 2012). For example, this can be achieved by splitting every non-discriminative word into a sequence of character trigrams. This could help in cases where training and test examples are very short (for example just a single word).
In this paper, we have presented several simple techniques to reduce, by several orders of magnitude, the memory complexity of certain text classifiers without sacrificing accuracy nor speed. This is achieved by applying discriminative pruning, which aims to keep only important features in the trained model, and by performing quantization of the weight matrices and hashing of the dictionary.
We will publish the code as an extension of the fastText library.
We hope that our work will serve as a baseline to the research community, where there is an increasing interest in comparing the performance of various deep learning text classifiers for a given number of parameters. Overall, compared to recent work based on convolutional neural networks, fastText.zip is often more accurate, while requiring several orders of magnitude less time to train on common CPUs, and incurring a fraction of the memory complexity."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1-106, 2012.
Mohammad Hossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Submodular secretary problem and extensions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 39-52. Springer, 2010.
Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pp. 380-388, May 2002.
Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In NIPS, pp. 2148-2156, 2013.
Uriel Feige. A threshold of ln n for approximating set cover. JACM, 45(4):634-652, 1998.
Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximate nearest neighbor search. In CVPR, June 2013.
Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In CVPR, June 2011.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Herve Jegou. Efficient softmax approximation for gpus. arXiv preprint arXiv:1609.04309, 2016.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016.
Herve Jegou, Matthijs Douze, and Cordelia Schmid. Hamming embedding and weak geometric consistency for large scale image search. In ECCV, October 2008.
Herve Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Trans. PAMI, January 2011.
Thorsten Joachims. Text categorization with support vector machines: Learning with many relevant features. Springer, 1998.
Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. NIPS, 2:598-605, 1990.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
Andrew McCallum and Kamal Nigam. A comparison of event models for naive bayes text classification. In AAAI workshop on learning for text categorization, 1998.
Lukas Meier, Sara Van De Geer, and Peter Buhlmann.
The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):53-71, 2008.
Tomas Mikolov. Statistical language models based on neural networks. PhD thesis, VUT Brno, 2012.
Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J. Cernocky. Subword language modeling with neural networks. Preprint, 2012.
Mohammad Norouzi and David Fleet. Cartesian k-means. In CVPR, June 2013.
Alexandre Sablayrolles, Matthijs Douze, Herve Jegou, and Nicolas Usunier. How should we evaluate supervised hashing? arXiv preprint arXiv:1609.06753, 2016.
Jorge Sanchez and Florent Perronnin. High-dimensional signature compression for large-scale image classification. In CVPR, 2011.
Anshumali Shrivastava and Ping Li. Asymmetric LSH for sublinear time maximum inner product search. In NIPS, pp. 2321-2329, 2014.
Andreas Stolcke. Entropy-based pruning of backoff language models. arXiv preprint cs/0006025, 2000.
Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.
Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data - a survey. CoRR, abs/1509.05472, 2015.
Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012.
Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS, December 2009.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015."}, {"section_index": "7", "section_name": "APPENDIX", "section_text": "Quant. | k | norm | AG | Amz. f. | Amz. p. | DBP | Sogou | Yah. | Yelp f. | Yelp p.
full 92.1 36M 59.8 97M 94.5 104M 98.4 67M 96.3 47M 72 120M 63.7 56M 95.7 53M full,nodict 92.1 34M 59.9 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.6 48M 95.6 46M LSH 8 88.7 8.5M 51.3 20M 90.3 21M 92.7 14M 94.2 11M 54.8 23M 56.7 12M 92.2 12M PQ 8 91.7 8.5M 59.3 20M 94.4 21M 97.4 14M 96.1 11M 71.3 23M 62.8 12M 95.4 12M OPQ 8 91.9 8.5M 59.3 20M 94.4 21M 96.9 14M 95.8 11M 71.4 23M 62.5 12M 95.4 12M LSH 8 x 91.9 9.5M 59.4 22M 94.5 24M 97.8 16M 96.2 12M 71.6 26M 63.4 14M 95.6 13M PQ 8 x 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13M OPQ 8 x 92.1 9.5M 59.9 22M 94.5 24M 98.4 16M 96.3 12M 72.2 26M 63.6 14M 95.6 13M LSH 4 88.3 4.3M 50.5 9.7M 88.9 11M 91.6 7.0M 94.3 5.3M 54.6 12M 56.5 6.0M 92.9 5.7M PQ 4 91.6 4.3M 59.2 9.7M 94.4 11M 96.3 7.0M 96.1 5.3M 71.0 12M 62.2 6.0M 95.4 5.7M OPQ 4 91.7 4.3M 59.0 9.7M 94.4 11M 96.9 7.0M 95.6 5.3M 71.2 12M 62.6 6.0M 95.4 5.7M LSH 4 92.1 5.3M 59.2 13M 94.4 13M 97.7 8.8M 96.2 6.6M 71.1 15M 63.1 7.4M 95.5 7.2M x PQ 4 x 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72.0 15M 63.6 7.5M 95.6 7.2M OPQ 4 92.2 5.3M 59.8 13M 94.5 13M 98.3 8.8M 96.3 6.6M 72.1 15M 63.7 7.5M 95.6 x 7.2m LSH 2 87.7 2.2M 50.1 4.9M 88.9 5.2M 90.6 3.5M 93.9 2.7M 51.4 5.7M 56.6 3.0M 91.3 2.9M PQ 2 91.1 2.2M 58.7 4.9M 94.4 5.2M 87.1 3.6M 95.3 2.7M 69.5 5.7M 62.1 3.0M 95.4 2.9M OPQ 2 91.4 2.2M 58.2 94.3 5.2M 91.6 94.2 2.7M 5.7M 95.4 2.9M 4.9M 3.6M 69.6 62.1 3.0M LSH 2 x 91.8 3.2M 58.6 7.3M 94.3 7.8M 97.1 5.3M 96.1 4.0M 69.7 8.6M 62.7 4.5M 95.5 4.3M PQ 2 x 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M OPQ 2 92.1 3.2M 59.5 7.3M 94.5 7.8M 98.1 5.3M 96.2 x 4.0M 71.5 8.6M 63.4 4.5M 95.6 4.3M\nTable 5: Comparison between standard quantization methods. The original model has a dimension ality of 8 and 2M buckets. Note that all of the methods are without dictionary..\nk AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p. co full, nodict 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M 8 full 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13M 4 full 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72 15M 63.6 7.5M 95.6 7.2M 2 full 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M 8 200K 92.0 2.5M 59.7 2.5M 94.3 2.5M 98.5 2.5M 96.6 2.5M 71.8 2.5M 63.3 2.5M 95.6 2.5M 8 100K 91.9 1.3M 59.5 1.3M 94.3 1.3M 98.5 1.3M 96.6 1.3M 71.6 1.3M 63.4 1.3M 95.6 1.3M 8 50K 91.7 645K 59.7 645K 94.3 644K 98.5 645K 96.6 645K 71.5 645K 63.2 645K 95.6 644K 8 10K 91.3 137K 58.6 137K 93.2 137K 98.5 137K 96.5 137K 71.3 137K 63.3 137K 95.4 137K 4 200K 92.0 1.8M 59.7 1.8M 94.3 1.8M 98.5 1.8M 96.6 1.8M 71.7 1.8M 63.3 1.8M 95.6 1.8M 4 100K 91.9 889K 59.5 889K 94.4 889K 98.5 889K 96.6 889K 71.7 889K 63.4 889K 95.6 889K 4 50K 91.7 449K 59.6 449K 94.3 449K 98.5 450K 96.6 449K 71.4 450K 63.2 449K 95.5 449K 4 10K 91.5 98K 58.6 98K 93.2 98K 98.5 98K 96.5 98K 71.2 98K 63.3 98K 95.4 98K 2 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M 2 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K 100K 2 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352K 2 10K 91.3 78K 58.5 78K 93.2 78K 98.4 79K 96.5 78K 70.8 78K 63.2 78K 95.3 78K\nTable 6: Comparison with different quantization and level of pruning. \"co\"' is the cut-off parameter of the pruning.\nIn the appendix, we show some additional results. The model used in these experiments only had 1M ngram buckets. 
In Table 5 we show a thorough comparison of LSH, PQ and OPQ on 8 different datasets. Table 7 summarizes the comparison with CNNs in terms of accuracy and size. Table 8 shows a thorough comparison of the hashing trick and the Bloom filters. Table 9 reports results on the large FlickrTag dataset, with different quantization parameters and with or without re-training.

Dataset Zhang et al. (2015) Xiao & Cho (2016) fastText+PQ, k = d/2
AG 90.2 108M 91.4 80M 91.9 889K
Amz. f. 59.5 10.8M 59.2 1.6M 59.6 449K
Amz. p. 94.5 10.8M 94.1 1.6M 94.3 449K
DBP 98.3 108M 98.6 1.2M 98.5 98K
Sogou 95.1 108M 95.2 1.6M 96.5 98K
Yah. 70.5 108M 71.4 80M 71.7 889K
Yelp f. 61.6 108M 61.8 1.4M 63.3 98K
Yelp p. 94.8 108M 94.5 1.2M 95.5 449K

Table 7: Comparison between CNNs and fastText with and without quantization. The numbers for Zhang et al. (2015) are reported from Xiao & Cho (2016). Note that for the CNNs, we report the size of the model under the assumption that they use float32 storage. For fastText (+PQ) we report the memory used in RAM at test time.

Quant. co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p.
full, nodict - 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M
NPQ 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M
NPQ x 200K 92.2 830K 59.3 830K 94.1 830K 98.4 830K 96.5 830K 70.7 830K 63.0 830K 95.5 830K
NPQ 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K
NPQ x 100K 91.8 420K 59.1 420K 93.9 420K 98.4 420K 96.5 420K 70.6 420K 62.8 420K 95.3 420K
NPQ 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352K
NPQ x 50K 91.5 215K 58.8 215K 93.6 215K 98.3 215K 96.5 215K 70.1 215K 62.7 215K 95.1 215K
NPQ 10K 91.3 78K 58.5 78K 93.2 78K 98.4 79K 96.5 78K 70.8 78K 63.2 78K 95.3 78K
NPQ x 10K 90.8 51K 56.8 51K 91.7 51K 98.1 51K 96.1 51K 68.7 51K 61.7 51K 94.5 51K

Table 8: Comparison with and without Bloom filters (rows marked "x" use them). For NPQ, we set d = 8 and k = 2.

Model k norm retrain Acc. Size
full - - - 45.4 12G
Input 128 45.0 1.7G
Input 128 x 45.3 1.8G
Input 128 x x 45.5 1.8G
Input+Output 128 x 45.2 1.5G
Input+Output 128 x x 45.4 1.5G
Input+Output, co=2M 128 x x 45.5 305M
Input+Output, co=1M 128 x x 43.9 179M
Input 64 44.0 1.1G
Input 64 x 44.7 1.1G
Input 64 x x 44.9 1.1G
Input+Output 64 x 44.6 784M
Input+Output 64 x x 44.8 784M
Input+Output, co=2M 64 x 42.5 183M
Input+Output, co=1M 64 x 39.9 118M
Input+Output, co=2M 64 x x 45.0 183M
Input+Output, co=1M 64 x x 43.4 118M
Input 32 40.5 690M
Input 32 x 42.4 701M
Input 32 x x 42.9 701M
Input+Output 32 x 42.3 435M
Input+Output 32 x x 42.8 435M
Input+Output, co=2M 32 x 35.0 122M
Input+Output, co=1M 32 x 32.6 88M
Input+Output, co=2M 32 x x 43.3 122M
Input+Output, co=1M 32 x x 41.6 88M

Table 9: FlickrTag: Comparison for a large dataset of (i) different quantization methods and parameters, (ii) with or without re-training."}]
Hk1l9Xqxe
[{"section_index": "0", "section_name": "BIOACOUSTIC SEGMENTATION B Y HIER ARCHICAI DIRICHLET PROCESS HIDDEN MARKOV MODEL", "section_text": "Vincent Roger\nDYNI, LSIS UMR CNRS, Machine Learning AMU, University of Toulon, ENSAM La Garde. France"}, {"section_index": "1", "section_name": "LMNO UMR CNRS. Statistics and Data Scienc University of Caen Caen. France", "section_text": "faicel.chamroukhi@unicaen.fr\nUnderstanding the communication between different animals by analysing their acoustic signals is an important topic in bioacoustics. It can be a powerful tool for the preservation of ecological diversity. We investigate probabilistic models to analyse signals issued from real-world bioacoustic sound scenes. We study a Bayesian non-parametric sequential models based on Hierarchical Dirichlet Pro- cess Hidden Markov Models (HDP-HMM). The model is able to infer hidden states, that are referred here as song units. However, using such a model raise one main issue: defining the number of hidden states the model has to learn. In bioacoustic problems we often do not know the number of song units (unlike in human speech recognition). Hence, we work with the Hierarchical Dirichlet Pro- cess (HDP)-HMM, which is a Bayesian non-parametric (BNP) mode1 that offers a way to tackle this challenging problem. We focus our work on unsupervised learning from bioacoustic data. It consists in simultaneously finding the structure of hidden song units and automatically infer the unknown number of the hidden states to represent the data. Two real bioacoustic sound scene applications are in- vestigated in this work: on whale and multi-species birds segmentation. The learn- ing of these models is proceeded by using Markov-Chain Monte Carlo (MCMC) sampling techniques on Mel Frequency Cepstral Coefficients (MFCC) of audio signals. The results show an interesting song unit segmentation of the bioacoustic signals and open new insights for unsupervised analysis of such signals. This pa- per illustrates the potential of chunking non-human animal signals into structured parts. This can yield to a new species representation and help experts to better understand the behaviour of such species as|Kershenbaum et al.(2014) wanted."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Acoustic communication is common in the animal world where individuals communicate with se. quences of some different acoustic elements (Kershenbaum et al.]2014). An accurate analysis is important in order to give a better identification of some animal species and interpret the identified. song units in the course of time. In this paper, we automatically model the sequence of a non-human. signal and determine their acoustic song units. As highlighted in Kershenbaum et al.(2014), the. way according to which non-human acoustic sequences can be interpreted can be summarized as. shown in Fig4 We distinguish four common properties that are used to define potential criteria for segmenting such signals into song units. The first way, shown in Fig4(A), consists in separating. the signals using silent gaps. The second way, shown in Fig|4(B), consists in separating the signals. according to the changes in the acoustic properties in the signal. The third way, shown in Fig4(C). consists in grouping similar sounds separated with silent gaps as a single unit. The last common.\nMarius Bartcus\nDYNI, LSIS UMR CNRS, Machine Learning AMU, University of Toulon, ENSAM La Garde. 
France\nmarius.bartcus@qmail.com"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Acoustic units can be determined either manually (e.g. from spectrogram representation), or auto. matically (e.g. based on a model). Manual segmentation is time consuming and not possible for a large acoustic dataset. That is why automatic approaches are needed. Furthermore, in bioacoustic signals, the problem of segmenting signals of many species, is still an issue, including for bioacous- tic. Hence, a well-principled learning system based on unsupervised approach can help them to have. a better understanding of bioacoustics species. In this context, we investigate statistical latent data models to automatically identify song units..\nFirst, we study Hidden Markov Models (HMMs) (Rabiner & Juang 1986).Which are the gold standard for sequential data, and thus could be relevant for acoustic data modeling and segmenta tion. The typically used algorithm to learn the model is the Expectation-Maximization (EM) algo rithm (Dempster et al.]1977), also known as Baum-Welch in HMMs (Baum et al.]1970). The main issue with HMMs is the one of selecting the number of hidden states. Because of the lack of knowl edge on non-human species, it is hard to have this number. This rises a model selection problem which can be addressed by information selection criteria such as BIC, AIC (Schwarz1978)|Akaike 1974), which select an HMM with a number of states from pre-estimated HMMs with varying num ber of states.\nSuch approaches are limited because they require learning multiple HMMs. On the other hand, non parametric derivations of HMMs constitute a well-principled alternative to address this issue. Thi approach is more flexible than using a Bayesian non-parametric (BNP) formulation for HMMs (Tel et al.2006), also called the infinite HMM (iHMM) (Beal et al.J2002). It allows to infer the num ber of states (segments, units) from the data. The BNP approach for HMMs relies on Hierarchica Dirichlet Process (HDP) to define a prior over the states (Teh et al.]2006). It is known as the Hi erarchical Dirichlet Process for the Hidden Markov Models (HDP-HMM) (Teh et al.2006). Th HDP-HMM parameters can be estimated by MCMC sampling techniques such as Gibbs sampling The standard HDP-HMM Gibbs sampling has the limitation of an inadequate modeling of the tem poral persistence of states (Fox et al.]2008). This problem has been addressed by Fox et al.(2008 by relying on a sticky extension which allows a more robust learning. Hence, we have a model t separate non-human signals into states that represent different activities (song units) and exploring the inference of complex data such as bioacoustic data in surroundings cases is not yet resolved\nIn this paper, we investigate the BNP formulation of HMM, that is the HDP-HMM, into two chal-. lenges involving real bioacoustic data. First, a challenging problem of humpback whale song de-. composition is investigated. The objective is the unsupervised structuration of whale bioacoustic data. Humpback whale songs are long cyclical sequences produced by males during the reproduc. tion season which follows their migration from high-latitude to low-latitude waters. Singers from the same geographical region share parts of the same song. This leads to the idea of dialect (Helweg. et al.1998). Different hypotheses of these songs were emitted (Medrano et al.1994]Frankel et al. 1995Baker & Herman1984 Garland et al.2011). Next, we investigate a challenging problem. 
of bird song unit structuration. Catchpole & Slater (1995) and Kroodsma & Miller (1996) show how birds sing and why birds have such elaborate songs. However, analysing bird song units is difficult due to the transientness of typical bird chirps, the large behavioural intra-class variability, the small number of examples per class, the presence of wildlife noise, and so forth. As shown later in the obtained segmentation results, such automatic approaches allow large-scale analysis of environmental bioacoustics recordings.

Discovering the call units (which can be considered as a kind of non-human alphabet) of such complex signals can be seen as a problem of unsupervised call unit classification, as in Pace et al. (2010).

Picot et al. (2008) also tried to analyse bioacoustic songs using a clustering approach: they implemented a segmentation algorithm based on Payne's principle to extract sound units from a bioacoustic song. In this paper we reformulate the problem of song decomposition as an unsupervised data classification problem. Contrary to the approach used by Pace et al. (2010), in which the number of states (call units in this case) has been fixed manually, or the one used by Picot et al. (2008), where the segmentation relies on a hand-designed criterion, our approach infers the number of song units directly from the data."}, {"section_index": "4", "section_name": "2 DATA AND METHODS", "section_text": "The data used represent the difficulties of bioacoustic problems, especially when the only information linked to the signal is the species name. Thus, we have to determine a sequence without ground truth."}, {"section_index": "5", "section_name": "2.1 HUMPBACK WHALE DATA", "section_text": "We extract MFCC1 features from the signal, with pre-emphasis 0.95, Hamming window, FFT on 1024 points (nearly 23 ms), frameshift 10 ms, 24 Mel channels, and 12 MFCC coefficients plus energy and their delta and acceleration, for a total of 39 dimensions, as detailed in the NIPS 2013 challenge (NIPS, 2013) where the signal and the features are available (a feature-extraction sketch is given below). The retained data for our experiment are the first 51336 observations.

Bird species song data from Fernand Deroussen and Jerome Sueur of the Musee National d'Histoire Naturelle (F. Deroussen, 2006) consist of a training and a testing set (the latter not used here). These sets were designed for the ICML4B challenge. The recordings have a sampling frequency of 44.1 kHz, 16 bits, one channel. The training set is composed of 35 recordings of 30 seconds each, taken from one microphone. Each recording contains one bird species in the foreground, for a total of 35 different bird species.

The feature extraction for this application is applied as follows. First, a high-pass filter (set at 1.000 kHz) is applied to reduce the noise. Then we extract the MFCC features with windows of 0.06 seconds and a shift of 0.03 seconds, keeping 13 coefficients with energy as the first parameter, to be compact and sufficiently accurate, considering only the vocal tract information and removing the source information. Also, we focus on frequencies below 8.000 kHz because of the alterations in the spectrum above. We obtain 34965 observations of 13 dimensions each for the training set that is used to learn our model.

To solve bioacoustic problems and find the number of call units, we propose to use the HDP-HMM model on the complex bioacoustic data. Our approach automatically discovers and infers the number of states from the non-human song data.

In this paper we present two applications on bioacoustic data.
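As referenced above, the whale feature-extraction pipeline (pre-emphasis 0.95, 1024-point FFT, 10 ms shift, 24 Mel channels, 13 coefficients with deltas and accelerations, 39 dimensions total) can be sketched as follows. The use of librosa and the file name are assumptions for illustration; the paper does not name its feature-extraction toolchain.

```python
import librosa
import numpy as np

y, sr = librosa.load("whale_song.wav", sr=44100)        # hypothetical input file
y = librosa.effects.preemphasis(y, coef=0.95)           # pre-emphasis 0.95
mfcc = librosa.feature.mfcc(y=y, sr=sr,
                            n_mfcc=13,                  # energy-like C0 + 12 coefficients
                            n_fft=1024,                 # ~23 ms window at 44.1 kHz
                            hop_length=int(0.010 * sr), # 10 ms frame shift
                            n_mels=24,                  # 24 Mel channels
                            window="hamming")
delta = librosa.feature.delta(mfcc)                     # delta coefficients
delta2 = librosa.feature.delta(mfcc, order=2)           # acceleration coefficients
X = np.vstack([mfcc, delta, delta2]).T                  # (n_frames, 39) observations
```

Each row of X is then one multidimensional observation x_t fed to the sequential model described next.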
We study the song unit structuration for the humpback whale and for the multi-species bird signals.

Our approach is a probabilistic approach on the MFCC; it is a non-parametric formulation that is well suited to the problem of automatically inferring the number of states in the data. In the next section we describe the real-world bioacoustic challenges we used and explain our approach.

Humpback whale song data consist of a recording (about 8.6 minutes) produced a few meters from the whale in La Reunion, Indian Ocean (NIPS, 2013), at a sampling frequency of 44.1 kHz, 32 bits, one channel.

In the next section we give a brief description of the Hidden Markov Model and its Bayesian non-parametric alternative used in our bioacoustic signal representation applications.

1 The MFCC are features that represent and compress the short-term power spectrum of a sound, following the Mel scale.

The finite Hidden Markov Model (HMM) is very popular due to its rich mathematical structure and its stability for modeling sequential data (e.g. acoustic data). It assumes that the observed sequence X = (x_1, ..., x_T) is governed by a hidden state sequence z = (z_1, ..., z_T), where x_t ∈ R^d is the multidimensional observation at time t and z_t represents the hidden state of x_t, taking values in a finite set {1, ..., K}, K being the possible number of states, which is unknown. The generative process of the HMM can be described in general by the following steps. First, z_1 follows the initial distribution π_1. Then, given the previous state z_{t-1}, the current state z_t follows the transition distribution. Finally, given the state z_t, the observation x_t follows the emission distribution F(θ_{z_t}) of that state. The HMM parameters, that is the initial state distribution (π_1), the transition matrix (π) and the emission parameters (θ), are in general estimated in a maximum likelihood estimation (MLE) framework by using the Expectation-Maximization (EM) algorithm, also known as the Baum-Welch algorithm (Baum et al., 1970) in the context of HMMs.

Therefore, for the finite HMM, the number of states K is required to be known a priori. This model selection issue can be addressed in a two-stage scheme by using model selection criteria such as the Bayesian Information Criterion (BIC) (Schwarz, 1978), the Akaike Information Criterion (AIC) (Akaike, 1974), the Integrated Classification Likelihood criterion (ICL) (Biernacki et al., 2000), etc., to select a model from pre-estimated HMMs with varying numbers of states. Such approaches are limited: they require learning N HMMs, with N sufficiently large to approach the behaviour of a non-parametric approach. A non-parametric approach is more efficient because it theoretically tends to an infinite number of states. Thus, we use a Bayesian non-parametric (BNP) version of the HMM that is able to infer the number of hidden states from the data. It is more flexible than learning multiple HMMs because, in bioacoustic problems, the model has to characterize multiple species or individuals and thus possibly tends to a large number of hidden states. Hence, exploring the inference of complex data such as bioacoustic data in such settings is new.

The BNP approach for the HMM, that is the infinite HMM (iHMM), is based on a Dirichlet Process (DP) (Ferguson, 1973). However, because the state transitions take independent priors, there is no coupling across transitions between different states (Beal et al., 2002); therefore
the DP is not sufficient to extend the HMM to an infinite model. The Hierarchical Dirichlet Process (HDP), a prior distribution on transition matrices over a countably infinite state space derived by Teh et al. (2006), extends the HMM to the infinite state space model and is briefly described in the next subsection.

Suppose the data are subdivided into J groups, each produced by a related, yet distinct, process. The HDP extends the DP with a hierarchical Bayesian approach such that each group-specific prior G_j is drawn from a Dirichlet Process DP(α_0, G_0) with global base measure G_0, where G_0 is itself drawn from a Dirichlet Process with base distribution H and concentration parameter γ. The generative process of the data with the HDP can be summarized as follows. Suppose data X with i = 1, ..., T observations grouped into j = 1, ..., J groups. The observations of group j are given by X_j = (x_j1, x_j2, ...), all observations of group j being exchangeable. Assume each observation is drawn from a mixture model, so each observation x_ji is associated with a mixture component with parameter θ_ji. Note that, from the DP property, we observe equal values among the components θ_ji. Now, given the model parameter θ_ji, the datum x_ji is drawn from the distribution F(θ_ji). Assuming a prior distribution G_j over the model parameters associated with group j, θ_j = (θ_j1, θ_j2, ...), we can define the generative process in Eq. (1):

G_0 | γ, H ~ DP(γ, H),
G_j | α_0, G_0 ~ DP(α_0, G_0),  ∀ j ∈ 1, ..., J,
θ_ji | G_j ~ G_j,  ∀ j ∈ 1, ..., J and ∀ i ∈ 1, ..., T,
x_ji | θ_ji ~ F(x_ji | θ_ji),  ∀ j ∈ 1, ..., J and ∀ i ∈ 1, ..., T.    (1)

The Chinese Restaurant Process (CRP) (Pitman, 1995) is a representation of the Dirichlet Process that results from a metaphor involving a restaurant with a possibly infinite number of tables (clusters) at which customers (the observations) sit. An alternative of such a representation for the Hierarchical Dirichlet Process is the Chinese Restaurant Franchise (CRF) process, which extends the CRP to multiple restaurants that share a set of dishes.

The idea of the CRF is that it gives a representation for the HDP by extending the Chinese Restaurant Process to a set of J restaurants, rather than a single restaurant. Suppose a patron creates many restaurants, strongly linked to each other by a franchise-wide menu with dishes common to all restaurants. As a result, J restaurants (groups) are created, each with a possibly infinite number of tables (states) at which the customers (observations) sit. Each customer goes to his specified restaurant j, where each table serves a dish shared by the customers sitting at that table. However, multiple tables of different restaurants can serve the same dish.

3.2 THE HIERARCHICAL DIRICHLET PROCESS FOR THE HIDDEN MARKOV MODEL (HDP-HMM)

The HDP-HMM uses an HDP prior distribution providing a potentially countably infinite number of hidden states and tackles the challenging problem of model selection for the HMM. This model is a Bayesian non-parametric extension of the HMM, also presented as the infinite Hidden Markov Model (Beal et al., 2002). To derive the HDP-HMM model we suppose a doubly-infinite transition matrix, where each row corresponds to a CRP. Thus, in the HDP formalism, the groups correspond to states, with a CRP distribution over next states.
The CRF links these state-specific distributions.

We assume for simplicity a distinguished initial state z_0. Letting β describe the global state weights, π_k the transition distributions and θ_k the emission parameters, the infinite HMM can be described by the following generative process:

β | γ ~ GEM(γ),
π_k | α, β ~ DP(α, β),
z_t | z_{t-1} ~ Mult(π_{z_{t-1}}),
θ_k | H ~ H,
x_t | z_t, {θ_k}_{k=1}^∞ ~ F(θ_{z_t}).    (2)"}, {"section_index": "6", "section_name": "where,", "section_text": "Suppose the observed data likelihood is a Gaussian density N(x; θ_k), where the emission parameters θ_k = {μ_k, Σ_k} are respectively the mean vector and the covariance matrix. According to Gelman et al. (2003), the prior over the mean vector and the covariance matrix is a conjugate Normal-Inverse-Wishart distribution, denoted as NIW(μ_0, κ_0, ν_0, Λ_0), with the hyper-parameters describing the shape and position of each mixture component: μ_0 is where the mean of the Gaussian should be, κ_0 the number of pseudo-observations supposed to be attributed to it, and ν_0, Λ_0 play a similar role for the covariance matrix.

In the generative process given in Eq. (2), π is interpreted as a doubly-infinite transition matrix, with each row taking a CRP. Thus, in the HDP formulation, the "group-specific" distribution π_k corresponds to the "state-specific" transition distribution, where the CRF defines distributions over the next state. In turn, Fox et al. (2008) showed that the HDP-HMM inadequately models the temporal persistence of states, creating redundant and rapidly switching states, and proposed an additional hyperparameter κ that increases the self-transition probabilities. This is named the sticky HDP-HMM. The distribution on the transition matrix of Eq. (2) for the sticky HDP-HMM is given as follows:

π_k | α, β, κ ~ DP(α + κ, (α β + κ δ_k) / (α + κ)),    (3)

where a small positive κ > 0 is added to the k-th component of α β, so that the self-transition probability is increased by κ. Note that by setting κ to 0, the original HDP-HMM is recovered. Under such an assumption on the transition matrix, Fox et al. (2008) propose an extension of the CRF, the CRF with loyal customers.

The inference of the infinite HMM (the (sticky) HDP-HMM) with the Blocked Gibbs sampler algorithm is given in Algorithm 3 of the Supplementary Material of the Fox et al. (2008) paper. The basic idea of this sampler is to estimate the posterior distributions over all the parameters of the generative process of the (sticky) HDP-HMM given in Eq. (2). Here, in the CRF with loyal customers, the hyperparameter κ of the transition matrix can be sampled in order to increase the self-transition probability.

Hence, the HDP-HMM model resolves the problem of advanced signal decomposition using acoustic features over time. It allows identifying song units (states) and behaviour, and enhancing population studies. On the other hand, modelling data with the HDP-HMM offers a great alternative to the standard HMM to tackle the challenging problem of selecting the number of states, identifying the unknown number of hidden units from the used features (here: MFCC). The experimental results show the interest of such an approach."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "In this section we present two applications on bioacoustic data. We study the song unit structuration for the humpback whale signal and for multi-species bird signals."}, {"section_index": "8", "section_name": "4.1 HUMPBACK WHALE SOUND SEGMENTATION", "section_text": "The learning of the humpback whale song, applied via the HDP-HMM, is done with Blocked Gibbs sampling.
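Before giving the sampler settings, the generative process of Eqs. (2)-(3) can be made concrete under the weak-limit (truncated) approximation that such Blocked Gibbs samplers rely on. The sketch below is illustrative only (names, the isotropic Gaussian emissions and the use of β as initial distribution are assumptions, not the authors' code); L plays the role of the truncation level discussed next.

```python
import numpy as np

def sample_sticky_hdp_hmm(T=500, L=30, gamma=1.0, alpha=1.0, kappa=0.1, d=2, seed=0):
    rng = np.random.RandomState(seed)
    # beta ~ GEM(gamma) via truncated stick-breaking, renormalized at level L
    v = rng.beta(1.0, gamma, size=L)
    beta = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    beta /= beta.sum()
    # pi_k ~ DP(alpha + kappa, (alpha*beta + kappa*delta_k)/(alpha + kappa)),
    # finite-dimensional (Dirichlet) weak-limit version of Eq. (3)
    pi = np.array([rng.dirichlet(alpha * beta + kappa * np.eye(L)[k])
                   for k in range(L)])
    # Gaussian emissions theta_k = (mu_k, Sigma_k); Sigma_k = I for simplicity
    mu = rng.randn(L, d) * 3.0
    z = np.zeros(T, dtype=int)
    x = np.zeros((T, d))
    z[0] = rng.choice(L, p=beta)                  # distinguished initial state
    x[0] = mu[z[0]] + rng.randn(d)
    for t in range(1, T):
        z[t] = rng.choice(L, p=pi[z[t - 1]])      # z_t | z_{t-1} ~ Mult(pi_{z_{t-1}})
        x[t] = mu[z[t]] + rng.randn(d)            # x_t | z_t ~ N(mu_{z_t}, I)
    return x, z
```

The Gibbs sampler essentially inverts this process, resampling β, π, θ and the state sequence z given the observed MFCC features.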
The number of iterations was fixed to N_s = 30000 and the truncation level, which corresponds to the maximum number of possible states in the model (large enough to approximate an infinite model), was fixed to L_k = 30. The number of states estimated by the HDP-HMM Gibbs sampling is 6.

Fig. 6 shows the state sequence partition for all 8.6 minutes of humpback whale song data, obtained by the HDP-HMM Gibbs sampling. For more detailed information, the segmentation of the whole humpback whale signal is separated into several parts of 15 seconds. All the spectrograms of the humpback whale song and the obtained segmentation are made available in the demo: http://sabiod.univ-tln.fr/workspace/ICLR2017/whale/ This demo highlights the interest of using a BNP formulation of HMMs for unsupervised segmentation of whale signals. Three examples of the humpback whale song, of 15 seconds each, are presented and discussed in this paper (see Fig. 1).

Figure 1 shows the spectrogram and the corresponding state sequence partition obtained by the HDP-HMM Gibbs inference algorithm; the panels respectively represent examples of the beginning, the middle and the end of the whole signal. All the obtained state sequence partitions fit the spectral patterns. We note that the estimated state 1 fits the sea noise; state 5 also fits sea noise, but it occurs right before units associated with whale songs. The presence of this unit can be due to an insufficient number of Gibbs samples; with longer learning, the fifth state could be merged with the first. State 2 fits the up and down sweeps, state 3 fits low and high fundamental harmonic sounds, state 4 fits sounds with numerous harmonics, and state 6 fits very noisy and broad sounds. Fig. 7 shows two spectrograms extracted from the 6th song unit (left) and from the 2nd song unit (right) of the whole humpback whale signal. We can see that the units fit specific patterns across the whole signal.

Figure 1: Obtained song units starting at 60 seconds (left), 255 seconds (middle) and 495 seconds (right). The spectrogram of the whale song (top), and the obtained state sequence (bottom) by the Blocked Gibbs sampler inference approach for the HDP-HMM. The silence (units 1 and 5) looks well separated from the whale signal. Whale up and down sweeps (unit 2), harmonics (units 3 and 4) and broad sounds (unit 6) are also present. See Fig. 13 for bigger figures.

Pr. Gianni Pavan (Pavia University, Italy), an undersea NATO bioacoustic expert, analysed the results of the humpback whale song segmentation during his stay at DYNI in 2015 and validated the proposed representation. This highlights the interest of learning a BNP model on a single species. Next, we will see how such a model reacts with multiple species."}, {"section_index": "9", "section_name": "4.2 BIRDS SOUND SEGMENTATION", "section_text": "In this section we describe the obtained bird song unit segmentation. We segment the bird signals into song units by learning the HDP-HMM model on the training set (containing 35 different species). The main goal is to see whether such an approach can model multiple species. Note that in this set we assume no multiple species sing at the same time.

For this application, we considered 145000 Gibbs iterations and a truncation level of 200 for the maximum number of states.
We suppose these to be sufficiently large for this data problem. Moreover, we use one mixture component per state, which appeared to give satisfactory results, and we use a sticky HDP-HMM with the hyper-parameter κ set to 0.1.

We discovered 76 song units with this method. For more detailed information over the signal, we separated the whole training set into parts of 15 seconds each. All the spectrograms and the associated segmentations obtained are made available in the demo: http://sabiod.univ-tln.fr/workspace/ICLR2017/bird/ Fig. 2 contains three examples of bird bioacoustic segmentation. The species are: Carduelis chloris (left), Luscinia megarhynchos (middle) and Parus caeruleus (right).

Figure 2: Obtained song units on 15 seconds of Carduelis chloris bird song. The spectrogram of the bird song (top), and the obtained state sequence (bottom) by the Blocked Gibbs sampler inference approach for the HDP-HMM. Three song units persist in this sound. The silence looks well separated from the bird signal. Furthermore, the song units fit the song well, and the spectral similarity persists within each song unit. See Fig. 14 for bigger figures.

To evaluate the bird results, we used a ground truth produced by Simone Clemente (bird expert): we asked him to segment each recording of the dataset according to the different patterns in the signal. We then compared this ground truth with the segments produced by the model using NMI (Strehl & Ghosh, 2002), which measures the information shared between two clusterings (a minimal sketch of this computation is given below).

First, we compute the NMI score between these two segmentations and obtain a score of 0.490; the global segmentation from the model is therefore not close to a segmentation done by an expert. Second, we compute the NMI score for each species to locate the mistakes made by the model. Table 1 shows the different scores obtained, with a resulting mean score of 0.367. The highest score is 0.680 (corvus corone) and the lowest is 0.003 (garrulus glandarius); for some species the model thus has difficulties segmenting the data. Sometimes it uses fewer states than the expert: for oriolus oriolus (golden oriole), the model identifies 12 song units versus 50 identified by the expert. Conversely, the model can also use more states than the expert: for fringilla coelebs (chaffinch), the model identifies 15 song units versus 3 identified by the expert. In other cases, the model cannot differentiate two distinct vocalisations if they have close frequencies (phylloscopus collybita and columba palumbus), or background and foreground species (streptopelia decaocto). This can be due to the features used or to an insufficient number of Gibbs sampling iterations. For most species, the model and the ground truth have similar patterns, observable in Fig. 8.

To improve the model, we can investigate better feature representations for species with different acoustic characteristics. We can also improve the noise reduction, which could be useful for background activities. Nevertheless, the application highlights the interest of using a BNP formulation of HMMs for unsupervised segmentation of bird signals.

In this work, we relied on real-world bioacoustic applications and proposed to use a BNP formulation of HMMs to build a representation of bioacoustic signals, as a response to Kershenbaum et al. (2014). We investigated this approach on real-world bioacoustic signals from two challenges.
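As referenced above, the frame-wise NMI scores reported in Table 1 can be reproduced along these lines. The scikit-learn call is an assumption for illustration, not the authors' tooling, and the label sequences are hypothetical.

```python
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical per-frame labels: expert ground truth vs. HDP-HMM states.
# NMI is invariant to label permutation, so state ids need not match.
expert_labels = [0, 0, 1, 1, 1, 2, 0, 0]
model_states  = [5, 5, 2, 2, 2, 2, 5, 5]
print(normalized_mutual_info_score(expert_labels, model_states))
```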
The demo for the two applications is available online.

The signal segmentation obtained on the bioacoustic data is recovered in a fully automatic way by a Hierarchical Dirichlet Process Hidden Markov Model on MFCC features. The BNP formulation gives an estimate of the number of clusters needed to segment the signal, and our experiments highlight the interest of such a formulation for bioacoustic problems. Furthermore, we compared the segmentation obtained for birds with the segmentation from an expert using NMI, and the results are promising. We describe a full bioacoustic perspective in the annexes.

However, the model used is computationally expensive and not suitable for larger datasets. Future work will consist in studying methods that accelerate the MCMC sampling and deal with larger data problems, like variational inference (Jordan et al., 1999) or stochastic variational inference for HMMs (Foti et al., 2014).

Future work will also consist in using our segmentation results for a classification task, where the goal is to identify the species.

Hirotugu Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716-723, 1974.

C. Scott Baker and Louis M. Herman. Aggressive behavior between humpback whales (Megaptera novaeangliae) wintering in Hawaiian waters. Canadian Journal of Zoology, 62(10):1922-1937, 1984.

L. E. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Annals of Mathematical Statistics, 41:164-171, 1970.

Matthew J. Beal, Zoubin Ghahramani, and Carl E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems. MIT Press, 2002.

C. Biernacki, G. Celeux, and G. Govaert. Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(7):719-725, 2000.

C. K. Catchpole and P. J. B. Slater. Bird Song: Biological Themes and Variations. Cambridge University Press, 1995.

A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38, 1977.

F. Deroussen and F. Jiguet. La sonotheque du Museum: Oiseaux de France. 2006.

Thomas S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209-230, 1973.

Nicholas Foti, Jason Xu, Dillon Laird, and Emily Fox. Stochastic variational inference for hidden Markov models. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 3599-3607. Curran Associates, Inc., 2014.

A. S. Frankel, C. W. Clark, L. M. Herman, and C. M. Gabriele. Spatial distribution, habitat utilization, and social interactions of humpback whales, Megaptera novaeangliae, off Hawai'i, determined using acoustic and visual techniques. Canadian Journal of Zoology, 73(6):1134-1146, 1995.

Ellen C. Garland, Anne W. Goldizen, Melinda L. Rekdahl, Rochelle Constantine, Claire Garrigue, Nan Daeschler Hauser, M. Michael Poole, Jooke Robbins, and Michael J. Noad. Dynamic horizontal cultural transmission of humpback whale song at the ocean basin scale. Current Biology, 21(8):687-691, 2011.

Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis.
Chapman and Hall/CRC, 2003.

David A. Helweg, Douglas H. Cato, Peter F. Jenkins, Claire Garrigue, and Robert D. McCauley. Geographic variation in South Pacific humpback whale songs. Behaviour, 135(1):1-27, 1998.

Arik Kershenbaum, Daniel T. Blumstein, Marie A. Roch, Caglar Akcay, Gregory Backus, Mark A. Bee, Kirsten Bohn, Yan Cao, Gerald Carter, Cristiane Casar, et al. Acoustic sequences in non-human animals: a tutorial review and prospectus. Biological Reviews, 2014.

Federica Pace, Frederic Benard, Herve Glotin, Olivier Adam, and Paul White. Subunit definition and analysis for humpback whale call classification. Applied Acoustics, 71(11):1107-1112, 2010.

G. Picot, O. Adam, M. Bergounioux, H. Glotin, and F.-X. Mayer. Automatic prosodic clustering of humpback whales song. In New Trends for Environmental Monitoring Using Passive Systems, pp. 1-6, Oct 2008.

J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102(2):145-158, 1995.

Lawrence R. Rabiner and Biing-Hwang Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1):4-16, 1986.

G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461-464, 1978.

J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994."}, {"section_index": "10", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We would like to thank Pr. Gianni Pavan (Pavia University, Italy) and Simone Clemente for their bioacoustic points of view. We also want to thank Virgil Tassan for his re-reading."}, {"section_index": "11", "section_name": "BIOACOUSTICIAN DISCUSSION", "section_text": "One of the main topics in ecological acoustics is the development of unsupervised methods for the automatic detection of vocalizing species, which would help specialists during their monitoring activities. Although some works already reach good classification percentages [3, 4], there is a lack of methodologies available for works focused on real-world data and with further applications in ecology and wildlife management. One of the major bottlenecks for the application of these methodologies is their inability to work in heavily complex acoustic environments, where different taxa may sing together, or conversely their extreme sensitivity, which may result in an "over-classification" due to the high degree of variability within the repertoires of many vocal species.

Our unsupervised method for the automatic annotation of bioacoustic sequences seems to overcome these obstacles through the identification of species-specific patterns, and seems not to be influenced by the inter-individual variation within the song's structure. Furthermore, during the study we worked with recordings that may contain more than one species vocalizing, or partial overlaps between different species and specimens, and even in these circumstances our model has shown a great ability of categorisation and generalisation, identifying the main pattern and almost all the other sequences recorded (including silence).

Good examples are provided, from the birds' dataset, by recordings including the common wood pigeon (Columba palumbus; Figure 3 (top)), or from the files containing the golden oriole (Oriolus oriolus) as the main species. Their vocalisations are partially overlapped by vocalisations of different
species, such as the Eurasian blue tit (Cyanistes caeruleus, Figure 3 (bottom)) or crickets, and the distinction between the vocalising bird and the other animals in the background is clear in the analysis. Finally, a last example can be taken from the carrion crow (Corvus corone, Figure 3 (middle)) recordings. Here, multiple specimens are vocalising at the same time, but this does not influence the efficiency of our method.

On the other hand, in the presence of species with complex acoustic behaviours, such as the common nightingale (Luscinia megarhynchos), the method could identify each of the different phrasing elements in the song as a different class. This last behaviour could be useful in order to perform behavioural analyses focused on the identification of hidden meaning in the songs, but may lose its benefit when the main objective is an ecological topic. We finally consider this method a promising tool as we move forward, but further analysis will be needed to build an efficient tool with relevant applications in the acoustic monitoring and conservation of biodiversity.

3 Automatic large-scale classification of bird sounds is strongly improved by unsupervised feature learning. Dan Stowell and Mark D. Plumbley; 2014; PeerJ.

Species NMI Score
sturnus_vulgaris 0.467
turdus_philomelos 0.398
emberiza_citrinella 0.534
certhia_brachydactyla 0.417
columba_palumbus 0.352
picus_viridis 0.602
anthus_trivialis 0.332
phasianus_colchicus 0.272
cuculus_canorus 0.205
sylvia_atricapilla 0.405
corvus_corone 0.68
phylloscopus_collybita 0.267
streptopelia_decaocto 0.306
turdus_viscivorus 0.417
dendrocopos_major 0.481
erithacus_rubecula 0.394
pavo_cristatus 0.437
fringilla_coelebs 0.565
aegithalos_caudatus 0.202
turdus_merula 0.395
branta_canadensis 0.339
parus_palustris 0.521
sitta_europaea 0.332
alauda_arvensis 0.169
prunella_modularis 0.476
oriolus_oriolus 0.316
carduelis_chloris 0.385
phoenicurus_phoenicurus 0.291
strix_aluco 0.2
parus_caeruleus 0.413
parus_major 0.27
motacilla_alba 0.105
luscinia_megarhynchos 0.497
troglodytes_troglodytes 0.407
garrulus_glandarius 0.003
mean 0.367

Table 1: NMI score for the obtained segmentation using HDP-HMM.

Figure 3: The spectrogram of the bird song and the obtained state sequence by the Blocked Gibbs sampler inference approach for the HDP-HMM: Columba palumbus with cricket noise activities (top); different specimens of Corvus corone (middle); Oriolus oriolus partially overlapped with Cyanistes caeruleus in the background (bottom).

Figure 4: Common acoustic ways used to divide a spectrogram into units: (A) separated by silence; (B) change in acoustic properties (regardless of silence); (C) series of sounds; (D) higher levels of organisation.

Figure 5: Graphical representation of the sticky Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM).

Figure 6: State sequence for 8.6 min of humpback whale song obtained by the Blocked Gibbs sampling inference approach for the HDP-HMM.
Figure 7: Spectrograms extracted from the 6th song unit (left) and from the 2nd song unit (right) of the whole humpback whale signal.

Figure 8: Fringilla coelebs results. The first graph represents the labelled ground truth over 30 s, where label 0 is always the "none" label and the other labels are not the same from one graph to another. The second graph represents our model with the 76 classes. The last one is the spectrogram.

Figure 9: Corvus corone results. See the caption of Figure 8 for details.

Figure 10: Garrulus glandarius results. See the caption of Figure 8 for details.

Figure 11: Motacilla alba results. See the caption of Figure 8 for details.

Figure 12: Picus viridis results. See the caption of Figure 8 for details.

Figure 13: Enlarged versions of the panels of Figure 1. See the caption of Figure 1 for details.

Figure 14: Enlarged versions of the panels of Figure 2. See the caption of Figure 2 for details."}]
HyoST_9xl
[{"section_index": "0", "section_name": "INTRODUCTION", "section_text": "Deep neural networks (DNNs) have shown significant improvements in many application domains ranging from computer vision (He et al.(2015)) to natural language processing (Luong et al.(2015) and speech recognition (Amodei et al.(2015). The abundance of powerfu1 hardware makes it easier to train complicated DNN models with large capacities. The upside of complicated models is that they are very expressive and can capture the highly non-linear relationship between features and output. The downside of such large models is that they are prone to capturing the noise, rather than the intended pattern, in the training dataset. This noise does not generalize to new datasets, leading to over-fitting and a high variance.\n*Indicates equal contribution. f Also at NVIDIA Now at Google Brain. eriche@ google.co."}, {"section_index": "1", "section_name": "Peter Vajda, Manohar Paluri", "section_text": "vajdap,mano}@fb.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Dense Sparse Dense Pruning Re-Dense Sparsity Constraint Increase Model Capacity\nFigure 1: Dense-Sparse-Dense Training Flow. The sparse training regularizes the model, and the final dense training restores the pruned weights (red), increasing the model capacity without overfitting\nAlgorithm 1: Workflow of DSD training Initialization: W(0) with W(0) ~ N(0, ) Output : W(t) Initial Dense Phase while not converged do W(t) =W(t-1) _n(t)Vf(W(t-1);x C t = t + 1; end Sparse Phase II initialize the mask by sorting and keeping the Top-k weights S = sort(|W(t-1)|); X=Sk;Mask=1(|W(t-1)|>X); while not converged do W(t) _ W(t-1) _n(t)Vf(W(t-1);x(t-1) W(t) = W(t) . Mask; t = t + 1; end Final Dense Phase while not converged do W(t) =W(t-1) _n(t)Vf(W(t-1);x(t-1) t = t + 1; end goto Sparse Phase for iterative DSD;\nIn contrast, simply reducing the model capacity would lead to the other extreme, causing a machine learning system to miss the relevant relationships between features and target outputs, leading to under-fitting and a high bias. Bias and variance are hard to optimize at the same time.\nTo solve this problem, we propose a dense-sparse-dense training flow (DSD), a novel training strategy that starts from a dense model from conventional training, then regularizes the model with sparsity constrained optimization, and finally increases the model capacity by restoring and retraining the pruned weights. At testing time, the final model produced by DSD still has the same architecture and dimension as the original dense model, and DSD training doesn't incur any inference overhead We experimented DSD training on 7 mainstream CNN / RNN / LSTMs and found consistent performance gains over its comparable counterpart for image classification, image captioning anc speech recognition."}, {"section_index": "3", "section_name": "DSD TRAINING FLOW", "section_text": "Our DSD training employs a three-step process: dense, sparse, re-dense. Each step is illustratec Figure[1and Algorithm[1 The progression of weight distribution is plotted in Figure[2\nInitial Dense Training: The first D step learns the connection weights and importance via norma. network training on the dense network. Unlike conventional training, however, the goal of this D step. 
is not only to learn the values of the weights; we are also learning which connections are important. We use a simple heuristic to quantify the importance of the weights using their absolute value.

Figure 2: Weight distribution of a layer of GoogLeNet at different points in DSD training: the original GoogLeNet (a), pruned (b), after retraining with the sparsity constraint (c), ignoring the sparsity constraint and recovering the zero weights (d), and after retraining the dense network (e).

Sparse Training: The S step prunes the low-weight connections and trains a sparse network. We applied the same sparsity to all the layers, so there is a single hyper-parameter: the sparsity, the percentage of weights that are pruned to 0. For each layer W with N parameters, we sorted the parameters, picked the k-th largest one λ = S_k as the threshold, where k = N * (1 - sparsity), and generated a binary mask to remove all the weights smaller than λ. Details are shown in Algorithm 1.

By retraining while enforcing the binary mask in each iteration, we converted a dense network into a sparse network that has a known sparsity support and can fully recover or even increase the original accuracy of the initial dense model under the sparsity constraint. The sparsity is the same for all the layers and can be tuned using validation. We find a sparsity value between 25% and 50% generally works well in our experiments.

Final Dense Training: The final D step recovers the pruned connections, making the network dense again. These previously-pruned connections are initialized to zero and the entire network is retrained with 1/10 the original learning rate (since the sparse network is already at a good local minimum). Hyper-parameters like dropout ratios and weight decay remain unchanged. By restoring the pruned connections, the final D step increases the model capacity of the network and makes it possible to arrive at a better local minimum compared with the sparse model from the S step.

To visualize the DSD training flow, we plotted the progression of the weight distribution in Figure 2. The figure is plotted using GoogLeNet's inception_5b 3x3 layer, and we found this progression of the weight distribution very representative of VGGNet and ResNet as well. The original distribution of weights is centered on zero with tails dropping off quickly. Pruning is based on absolute value, so after pruning the large center region is truncated away. The un-pruned network parameters adjust themselves during the retraining phase, so in (c) the boundary becomes soft and forms a bimodal distribution. In (d), at the beginning of the re-dense training step, all the pruned weights come back again and are reinitialized to zero. Finally, in (e), the pruned weights are retrained together with the un-pruned weights. In this step, we kept the same learning hyper-parameters (weight decay, learning rate, etc.) for pruned and un-pruned weights. Comparing (d) and (e), the un-pruned weights' distribution remains almost the same, while the pruned weights become distributed further around zero.
The overall mean absolute value of the weight distribution is much smaller. This is a good phenomenon: choosing the smallest vector that solves the learning problem suppresses irrelevant components of the weight vector (Moody et al. (1995)).

We remove small weights because of the Taylor expansion of the loss. We want to minimize the increase in the loss when conducting a hard thresholding on the weights, so we need to minimize the first and second terms in Equation (2). Since we are zeroing out parameters, ΔW_i is actually W_i - 0 = W_i. At the local minimum, where ∂Loss/∂W_i ≈ 0 and the second derivative ∂²Loss/∂W_i² is expensive to calculate, and since ΔW_i enters with a power of 2, we use |W_i| as the metric for pruning: a smaller |W_i| means a smaller increase in the loss function.

ΔLoss = (∂Loss/∂W_i) ΔW_i + (1/2) (∂²Loss/∂W_i²) (ΔW_i)² + ...    (2)

Table 1: Overview of the neural networks, data sets and performance improvements from DSD.

Neural Network Domain Dataset Type Baseline DSD Abs. Imp. Rel. Imp.
GoogLeNet Vision ImageNet CNN 31.1% 30.0% 1.1% 3.6%
VGG-16 Vision ImageNet CNN 31.5% 27.2% 4.3% 13.7%
ResNet-18 Vision ImageNet CNN 30.4% 29.2% 1.2% 4.1%
ResNet-50 Vision ImageNet CNN 24.0% 22.9% 1.1% 4.6%
NeuralTalk Caption Flickr-8K LSTM 16.8 18.5 1.7 10.1%
DeepSpeech Speech WSJ'93 RNN 33.6% 31.6% 2.0% 5.8%
DeepSpeech-2 Speech WSJ'93 RNN 14.5% 13.4% 1.1% 7.4%

Dropout and DropConnect: DSD, Dropout (Srivastava et al. (2014)) and DropConnect (Wan et al. (2013)) can all regularize neural networks and prevent over-fitting. The difference is that Dropout and DropConnect use a random sparsity pattern at each SGD iteration, while DSD training learns with a deterministic, data-driven sparsity pattern throughout sparse training. Our experiments on VGG-16, GoogLeNet and NeuralTalk show that DSD training can work together with Dropout.

Model Compression: Both model compression (Han et al. (2016, 2015)) and DSD training use network pruning (LeCun et al. (1990); Hassibi et al. (1993)). The difference is that the focus of DSD training goes beyond maintaining the accuracy: DSD is able to further improve the accuracy by considerable margins. Another difference is that DSD training doesn't require aggressive pruning; a modestly pruned network (50%-60% sparse) can work well. In contrast, model compression requires aggressively pruning the network to achieve high compression rates.

Distillation: Model distillation (Hinton et al. (2015)) is a method that can transfer the learned knowledge from a large model to a small model, which is more efficient for deployment. This is another method that allows for performance improvements in neural networks without architectural changes.

Sparsity Regularization and Hard Thresholding: the truncation-based sparse network has been theoretically analyzed for learning a broad range of statistical models in high dimensions (Langford et al. (2009); Yuan & Zhang (2013); Wang et al. (2014)). A similar training strategy with iterative hard thresholding and connection restoration was proposed by Jin et al. (2016) during the same time period as, but independently from, DSD. Sparsity-regularized optimization is heavily applied in Compressed Sensing (Candes & Romberg (2007)) to find optimal solutions to inverse problems in highly under-determined systems based on the sparsity assumption.

We applied DSD training to different kinds of neural networks in different domains. We found that DSD training improved the accuracy for all these networks compared to the baseline networks that were not trained with DSD. The neural networks are chosen from CNNs, RNNs and LSTMs; the datasets cover image classification, speech recognition, and caption generation. For networks trained on ImageNet, we focus on GoogLeNet, VGG and ResNet, which are widely used in research and production. An overview of the networks, datasets and accuracy results is shown in Table 1. For the convolutional networks, we do not prune the first layer during the sparse phase, since it has only 3 channels and is very sensitive to pruning. The sparsity is the same for all the other layers, including convolutional and fully-connected layers. We do not change any other training hyper-parameters, and
the initial learning rate at each stage is decayed the same as in conventional training. The numbers of epochs are decided by when the loss converges: when the loss no longer decreases, we stop the training."}, {"section_index": "4", "section_name": "4.1 GOOGLENET", "section_text": "We experimented with the BVLC GoogLeNet (Szegedy et al. (2015)) model obtained from the Caffe Model Zoo (Jia (2013)). It has 13 million parameters and 57 convolutional layers. We pruned each layer (except the first) to 30% sparsity. Retraining the sparse network gave some improvement in accuracy due to regularization, as shown in Table 2. After the final dense training step, GoogLeNet's error rates were reduced by 1.12% (Top-1) and 0.62% (Top-5) over the baseline.

Table 2: DSD results on GoogLeNet

GoogLeNet Top-1 Err Top-5 Err Sparsity Epochs LR
Baseline 31.14% 10.96% 0% 250 1e-2
Sparse 30.58% 10.58% 30% 11 1e-3
DSD 30.02% 10.34% 0% 22 1e-4
LLR 30.20% 10.41% 0% 33 1e-5
Improve (abs) 1.12% 0.62% - - -
Improve (rel) 3.6% 5.7% - - -

We explored DSD training on VGG-16 (Simonyan & Zisserman (2014)), which is widely used in detection, segmentation and transfer learning. The baseline model is obtained from the Caffe Model Zoo (Jia (2013)). Similar to GoogLeNet, each layer is pruned to 30% sparsity. DSD training greatly reduced the error, by 4.31% (Top-1) and 2.65% (Top-5), as detailed in Table 3. DSD also wins over the LLR result by a large margin.

Table 3: DSD results on VGG-16

VGG-16 Top-1 Err Top-5 Err Sparsity Epochs LR
Baseline 31.50% 11.32% 0% 74 1e-2
Sparse 28.19% 9.23% 30% 1.25 1e-4
DSD 27.19% 8.67% 0% 18 1e-5
LLR 29.33% 10.00% 0% 20 1e-7
Improve (abs) 4.31% 2.65% - - -
Improve (rel) 13.7% 23.4% - - -"}, {"section_index": "5", "section_name": "4.3 RESNET", "section_text": "Deep Residual Networks (ResNets, He et al. (2015)) were the top performer in the 2015 ImageNet challenge. The baseline ResNet-18 and ResNet-50 models are provided by Facebook (2016). We prune to 30% sparsity uniformly, and a single DSD pass for these networks reduced top-1 error by 1.26% (ResNet-18) and 1.12% (ResNet-50), as shown in Table 4. A second DSD iteration can further improve the accuracy. As a fair comparison, we continued training the original model while lowering the learning rate by another decade, but it could not reach the same accuracy as DSD, as shown in the LLR row.

Table 4: DSD results on ResNet-18 and ResNet-50

Model ResNet-18 Top-1 Err ResNet-18 Top-5 Err ResNet-50 Top-1 Err ResNet-50 Top-5 Err Sparsity Epochs LR
Baseline 30.43% 10.76% 24.01% 7.02% 0% 90 1e-1
Sparse 30.15% 10.56% 23.55% 6.88% 30% 45 1e-2
DSD 29.17% 10.13% 22.89% 6.47% 0% 45 1e-3
LLR 30.04% 10.49% 23.58% 6.84% 0% 90 1e-5
Improve (abs) 1.26% 0.63% 1.12% 0.55% - - -
Improve (rel) 4.14% 5.86% 4.66% 7.83% - - -

We compared DSD vs.
conventional training for the same number of epochs by dropping the learning rate upon "convergence" and continuing to learn. The result is shown as LLR (lower the learning rate). The number of training epochs for LLR is equal to that of Sparse + re-Dense as a fair comparison. LLR cannot achieve the same accuracy as DSD.

We evaluated DSD training on RNNs and LSTMs beyond CNNs. We applied DSD to NeuralTalk (Karpathy & Fei-Fei (2015)), an LSTM for generating image descriptions. It uses a CNN as an image feature extractor and an LSTM to generate captions.
{"section_index": "6", "section_name": "4.5 DEEPSPEECH 1", "section_text": "We explore DSD training on speech recognition tasks using both the Deep Speech 1 (DS1) and Deep Speech 2 (DS2) networks (Hannun et al. (2014); Amodei et al. (2015)).

The DS1 model is a 5-layer network with 1 Bidirectional Recurrent layer, as described in Table 6. The training dataset used for this model is the Wall Street Journal (WSJ) corpus, which contains 81 hours of speech. The validation set consists of 1 hour of speech. The test sets are from WSJ'92 and WSJ'93 and contain 1 hour of speech combined. The Word Error Rate (WER) reported on the test sets for the baseline models differs from Amodei et al. (2015) due to two factors. First, in DeepSpeech2 the models were trained using much larger datasets containing approximately 12,000 hours of multi-speaker speech data. Second, in DeepSpeech2 the WER was evaluated with beam search and a language model; here the network output is obtained using only max decoding, to show the improvement in the neural network accuracy in isolation.

Table 6: Deep Speech 1 Architecture

The first dense phase was trained for 50 epochs. In the sparse phase, weights are pruned in the Fully Connected layers and the Bidirectional Recurrent layer only (they contain the majority of the weights). Each layer is pruned to the same 50% sparsity and trained for 50 epochs. In the final dense phase, the pruned weights are initialized to zero and trained for another 50 epochs. For a fair comparison with the baseline, we used Nesterov SGD to train, reduced the learning rate with each re-training, and kept all other hyper-parameters unchanged. The learning rate is picked using our validation set.

Table 7: DSD results on Deep Speech 1: Word Error Rate (WER)

DeepSpeech 1 | WSJ'92 | WSJ'93 | Sparsity | Epochs | LR
Dense Iter 0 | 29.82 | 34.57 | 0% | 50 | 8e-4
Sparse Iter 1 | 27.90 | 32.99 | 50% | 50 | 5e-4
Dense Iter 1 | 27.90 | 32.20 | 0% | 50 | 3e-4
Sparse Iter 2 | 27.45 | 32.99 | 25% | 50 | 1e-4
Dense Iter 2 | 27.45 | 31.59 | 0% | 50 | 3e-5
Baseline | 28.03 | 33.55 | 0% | 150 | 8e-4
Improve (abs) | 0.58 | 1.96 | - | - | -
Improve (rel) | 2.07% | 5.84% | - | - | -

We first wanted to compare the DSD results with a baseline model trained for the same number of epochs. The first 3 rows of Table 7 show the WER when the DSD model is trained for 50+50+50=150 epochs, and the 6th row shows the baseline model trained for 150 epochs (the same number of epochs as DSD). DSD training improves WER by 0.13 (WSJ'92) and 1.35 (WSJ'93) given the same number of epochs as conventional training.

Given a second DSD iteration, accuracy can be further improved. In the second DSD iteration, 25% of the weights in each layer are pruned away. Similar to the first iteration, the sparse model and the subsequent dense model are each retrained for a further 50 epochs. The learning rate is scaled down for each re-training step. The results are shown in Table 7. Compared with the fully trained and converged baseline, the second DSD iteration improves WER by 0.58 (WSJ'92) and 1.96 (WSJ'93), a relative improvement of 2.07% (WSJ'92) and 5.84% (WSJ'93). So, we can do more DSD iterations (DSDSD) to further improve the performance, though adding more DSD iterations has a diminishing return."},
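All Deep Speech results are reported as Word Error Rate. For reference, a self-contained implementation (word-level Levenshtein distance, independent of any particular decoder) is:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / max(1, len(ref))

print(word_error_rate("the cat sat on the mat", "the cat sat mat"))  # 33.33...
```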
{"section_index": "7", "section_name": "4.6 DEEPSPEECH 2", "section_text": "To show how DSD works on deeper networks, we evaluated DSD on the Deep Speech 2 (DS2) network, described in Table 8. This network has 7 Bidirectional Recurrent layers with approximately 67 million parameters, around 8 times larger than the DS1 model. A subset of an internal English training set is used: the training set comprises 2,100 hours of speech and the validation set comprises 3.46 hours of speech. The test sets are from WSJ'92 and WSJ'93, which contain 1 hour of speech combined.

Table 8: Deep Speech 2 Architecture

Table 9 shows the results of the two iterations of DSD training. For the first sparse re-training, similar to DS1, 50% of the parameters from the Bidirectional Recurrent layers and Fully Connected layers are pruned. The baseline model is trained for 60 epochs to provide a fair comparison with DSD training; it shows no improvement after 40 epochs. With one iteration of DSD training, WER improves by 0.44 (WSJ'92) and 0.56 (WSJ'93) compared to the fully trained baseline.

Here we show again that DSD can be applied multiple times, or iteratively, for further performance gains. A second iteration of DSD training achieves better accuracy, as shown in Table 9. For the second sparse iteration, 25% of the parameters in the Fully Connected layer and Bidirectional Recurrent layers are pruned. Overall, DSD training achieves a relative improvement of 5.55% (WSJ'92) and 7.44% (WSJ'93) on the DS2 architecture. These results are in line with the DSD experiments on the smaller DS1 network. We can conclude that DSD re-training continues to show improvements in accuracy with larger layers and deeper networks.

Table 9: DSD results on Deep Speech 2 (WER)

DeepSpeech 2 | WSJ'92 | WSJ'93 | Sparsity | Epochs | LR
Dense Iter 0 | 11.83 | 17.42 | 0% | 20 | 3e-4
Sparse Iter 1 | 10.65 | 14.84 | 50% | 20 | 3e-4
Dense Iter 1 | 9.11 | 13.96 | 0% | 20 | 3e-5
Sparse Iter 2 | 8.94 | 14.02 | 25% | 20 | 3e-5
Dense Iter 2 | 9.02 | 13.44 | 0% | 20 | 6e-6
Baseline | 9.55 | 14.52 | 0% | 60 | 3e-4
Improve (abs) | 0.53 | 1.08 | - | - | -
Improve (rel) | 5.55% | 7.44% | - | - | -"},
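The two-iteration schedules of Tables 7 and 9 amount to a small driver around the single-pass routines sketched earlier. The sparsities, epoch counts, and learning rates below mirror Table 9; `train` remains a placeholder for the reader's own loop.

```python
# Reuses prune_by_magnitude / apply_masks from the earlier sketch; train() is a placeholder.
ds2_schedule = [  # (sparsity, epochs, sparse_lr, dense_lr), mirroring Table 9
    (0.50, 20, 3e-4, 3e-5),
    (0.25, 20, 3e-5, 6e-6),
]

def dsd_iterations(model, train, schedule):
    for sparsity, epochs, sparse_lr, dense_lr in schedule:
        masks = prune_by_magnitude(model, sparsity)
        apply_masks(model, masks)
        train(model, epochs=epochs, lr=sparse_lr,
              hook=lambda: apply_masks(model, masks))  # S: re-train under the sparsity support
        train(model, epochs=epochs, lr=dense_lr)       # D: pruned weights restart from zero
```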
{"section_index": "8", "section_name": "5 DISCUSSION", "section_text": "Dense-Sparse-Dense training changes the optimization process and improves the optimization performance by significant margins, by nudging the network with pruning and re-densifying. We conjecture that the following aspects contribute to the efficacy of DSD training.

Escape Saddle Point: Based on previous studies, one of the most profound difficulties of optimizing deep networks is the proliferation of saddle points (Dauphin et al. (2014)). Advanced optimization methods have been proposed to overcome saddle points. With a similar purpose but a different approach, the proposed DSD method overcomes saddle points through its pruning and re-densifying framework. Pruning the converged model perturbs the learning dynamics and allows the network to jump away from saddle points, which gives the network a chance to converge at a better local or global minimum. This idea is also similar to Simulated Annealing (Hwang (1988)). While Simulated Annealing randomly jumps with decreasing probability on the search graph, DSD deterministically deviates from the converged solution achieved in the first dense training phase by removing the small weights and enforcing a sparsity support. Similar to Simulated Annealing, which can escape sub-optimal solutions multiple times over the entire optimization process, DSD can also be applied iteratively to achieve further performance gains, as shown in the Deep Speech results.

Significantly Better Minima: After escaping saddle points, DSD achieved better minima. We measured both the training loss and the validation loss: DSD training decreased the loss and error on both the training and validation sets on ImageNet. We have also validated the significance of the improvements compared with conventional fine-tuning by t-test, shown in the appendix.

Regularized and Sparse Training: The sparsity regularization in the sparse training step moves the optimization to a lower-dimensional space where the loss surface is smoother and tends to be more robust to noise. More numerical experiments verified that both sparse training and the final DSD phase reduce the variance and lead to lower error (shown in the appendix).

Robust re-initialization: Weight initialization plays a big role in deep learning (Mishkin & Matas (2015)). Conventional training has only one chance of initialization. DSD gives the optimization a second (or more) chance during the training process to re-initialize from a more robust sparse training solution. We re-densify the network from the sparse solution, which can be seen as a zero initialization for the pruned weights. Other initialization methods are also worth trying.

Break Symmetry: The permutation symmetry of the hidden units makes the weights symmetrical and thus prone to co-adaptation in training. In DSD, pruning the weights breaks the symmetry of the hidden units associated with the weights, and the weights are asymmetrical in the final dense phase.

We introduce DSD, a dense-sparse-dense training framework that regularizes neural networks by pruning and then restoring connections. Our method learns which connections are important during the initial dense solution. Then it regularizes the network by pruning the unimportant connections and retraining to a sparser and more robust solution with the same or better accuracy. Finally, the pruned connections are restored and the entire network is retrained again. This increases the dimensionality of parameters, and thus model capacity, relative to the sparser model.

DSD training achieves superior optimization performance. We highlight our experiments using GoogLeNet, VGGNet, and ResNet on ImageNet; NeuralTalk on Flickr-8K; and DeepSpeech-1&2 on the WSJ dataset. This shows that the accuracy of CNNs, RNNs, and LSTMs can be significantly improved with DSD training. Our numerical results and empirical tests show the inadequacy of current training methods, for which we have provided an effective solution."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015.

Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.

Facebook. Facebook.ResNet.Torch. https://github.com/facebook/fb.resnet.torch, 2016.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. International Conference on Learning Representations, 2016.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, and Andrew Ng. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.

Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164-171, 1993.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Yangqing Jia. BVLC caffe model zoo. http://caffe.berkeleyvision.org/model_zoo, 2013.

Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598-605. Morgan Kaufmann, 1990.

Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.

Dmytro Mishkin and Jiri Matas. All you need is a good init. arXiv preprint arXiv:1511.06422, 2015.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15:1929-1958, 2014.

Zhaoran Wang, Quanquan Gu, Yang Ning, and Han Liu. High dimensional expectation-maximization algorithm: Statistical optimization and asymptotic normality. arXiv preprint arXiv:1412.8729, 2014.

Xiao-Tong Yuan and Tong Zhang. Truncated power method for sparse eigenvalue problems. The Journal of Machine Learning Research, 14(1):899-925, 2013."}, {"section_index": "10", "section_name": "A. APPENDIX: SIGNIFICANCE OF DSD IMPROVEMENTS", "section_text": "DSD training improves the baseline model performance by consecutively pruning and re-densifying the network weights. We conducted more intensive experiments to validate that the improvements are significant and not due to any randomness in the optimization. In order to evaluate the significance, we repeated the baseline training, DSD training (re-training on the baseline), and conventional fine-tuning (also re-training on the same baseline) multiple times. The statistical significance of the DSD improvements is quantified on the Cifar-10 dataset using ResNet."}, {"section_index": "11", "section_name": "1. SIGNIFICANT IMPROVEMENTS ON CIFAR-10 USING RESNET-20", "section_text": "Cifar-10 is a smaller image recognition benchmark with 50,000 32x32 color images for training and 10,000 for testing. Training on Cifar-10 is fast enough that it is feasible to conduct intensive experiments within a reasonable time to evaluate DSD performance. The baseline models were trained with the standard 164 epochs and an initial LR of 0.1, as recommended in the released code (Facebook (2016)). After 164 epochs, we obtained a model with an 8.26% Top-1 testing error, consistent with the Facebook result. Initialized from this baseline model, we repeated re-training 16 times using DSD training and 16 times using conventional fine-tuning. DSD used a sparsity of 50% and 90 epochs (45 for sparse training and 45 for re-dense training). As a fair comparison, the conventional fine-tuning is also based on the same baseline model with the same hyper-parameters and settings (90 epochs, 45 at LR 0.001 and 45 at LR 0.0001).
Detailed results are listed below. On Cifar-10, using the ResNet-20 architecture, DSD training on average achieved a Top-1 testing error of 7.89%, which is a 0.37% absolute improvement (4.5% relative improvement) over the baseline model and relatively 1.1% better than conventional fine-tuning. The experiments also show that DSD training can reduce the variance of learning: the models after sparse training and after the final DSD training both have lower standard deviations of error than their counterparts trained with conventional fine-tuning.

Table 10: Validation of DSD on Cifar10 data using ResNet-20

ResNet-20 | Avg. Top-1 Err | SD Top-1 Err | Sparsity | Epochs | LR
Baseline | 8.26% | - | 0% | 164 | 1e-1
Direct Finetune (first half) | 8.16% | 0.08% | 0% | 45 | 1e-3
Direct Finetune (second half) | 7.97% | 0.04% | 0% | 45 | 1e-4
DSD (first half, Sparse) | 8.12% | 0.05% | 50% | 45 | 1e-3
DSD (second half, Dense) | 7.89% | 0.03% | 0% | 45 | 1e-4
Improve from baseline (abs) | 0.37% | - | - | - | -
Improve from baseline (rel) | 4.5% | - | - | - | -

We used a t-test (unpaired) to compare the Top-1 testing error rates of the models trained using DSD and conventional methods. The results demonstrate that DSD training achieves significant improvements over both the baseline model (p<0.001) and conventional fine-tuning (p<0.001).

[Figure 4 plot: Top-1 error of Baseline (8.26), Fine-tune (7.97), and DSD (7.89); all pairwise differences are significant at p<0.001.]
Figure 4: Significance of DSD improvements over baseline and fine-tune.

Based on the results above, DSD significantly improves conventional baseline training and is also significantly better and more robust than conventional fine-tuning.
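The unpaired t-test above is standard and can be reproduced with SciPy. The arrays below are illustrative placeholders centered on the reported means; they are not the 16 measured runs.

```python
from scipy import stats

dsd_errors      = [7.89, 7.92, 7.86, 7.91, 7.88, 7.90, 7.85, 7.93]  # placeholder Top-1 errors (%)
finetune_errors = [7.97, 8.01, 7.95, 7.99, 7.96, 8.02, 7.94, 8.00]  # placeholder Top-1 errors (%)

t, p = stats.ttest_ind(dsd_errors, finetune_errors)  # unpaired two-sample t-test
print(f"t = {t:.2f}, p = {p:.1e}")                   # the paper reports p < 0.001
```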
[Appendix figure: additional Flickr-8K image captioning examples, each comparing the Baseline, Sparse, and DSD captions for the same image (e.g., Baseline: "a boy is swimming in a pool." / Sparse: "a small black dog is jumping into a pool." / DSD: "a black and white dog is swimming in a pool."). Across the examples, the DSD captions are generally the most accurate and detailed.]"}]
H13F3Pqll
[{"section_index": "0", "section_name": "INVERSE PROBLEMS IN COMPUTER VISION USINC ADVERSARIAL IMAGINATION PRIORS", "section_text": "Hsiao-Yu Fish Tung & Katerina Fragkiadaki\nGiven an image, humans effortlessly run the image formation process backwards in their minds: they can tell albedo from shading, foreground from background and imagine the occluded parts of the scene behind foreground objects. In this work, we propose a weakly supervised inversion machine trained to generate sim ilar imaginations that when rendered using differentiable, graphics-like decoders produce the original visual input. We constrain the imagination spaces by provid ing exemplar memory repositories in the form of foreground segmented objects albedo, shading, background scenes and imposing adversarial losses on the imagi nation spaces. Our model learns to perform such inversion with weak supervision without ever having seen paired annotated data, that is, without having seen the image paired with the corresponding ground-truth imaginations. We demonstrate our method by applying it to three Computer Vision tasks: image in-painting intrinsic decomposition and object segmentation, each task having its own dif ferentiable renderer. Data driven adversarial imagination priors effectively guide inversion, minimize the need for hand designed priors of smoothness or good con tinuation, or the need for paired annotated data.\nConsider Figure [1 We imagine a missing triangle occluding three small black circles rather thar three carefully arranged pacman shapes - which is what the pixels depict. In (b), we do not per. ceive two parts of the sea separated by a standing person, rather a continuous sea landscape. Ir. (c), we explain the input as a '\"'masked 8\" rather than two semicircles. Consistent explanations o. visual observations in terms of familiar concepts and memories we call \"imaginations\". Imagina. tions invert the image formation process and propose 3D shape, camera pose, scene layering, spatia. layout, albedo, shading, inpainted, un-occluded perceptions of the world, necessary for the under standing of the visual scene and interaction with it. Gestalt philosophers (Smith[(1988)) proposec. a set or principles to explain formation of such percepts, such as, closure, center surround pop-out. good continuity, smoothness etc, which many works attempt to hand design principles to incorpo. rate those into computational frameworks of e.g., perceptual grouping (Yu|(2003)). In this work, we present a learning-based inversion model that uses data-driven priors instead..\nWe propose a computational model that addresses inverse problems in Computer Vision using ad. versarial imagination priors. Figure|2lillustrates our model. It is comprised of a generator neural net-. work that given a visual input predicts visual imaginations, such as, in-painted image, un-occluded. background scene, object segmentation, albedo and shading etc. Relevant memories, assumed to\n2-28\n-28 ) a (b) (c)\nFigure 1: Humans come up with complete and plausible imaginations based on their familiar men ories, they imagine, rather than merely labeling pixels."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Renderer Generator (P) Recovered Input Input (x) Imagination(z) D1 Memory Retrieval Discriminator ( D ) Engine D2 (R) Relevant Memory Memory (M)\nArchitectures we explore ensure the original images can be reconstructed from the inferred imagina. tions using basic, parameter-free differentiable renderers. 
This particular choice of decoder function further enforces the imagination spaces to take the particular desired forms, along with the adversarial priors. We are inspired by the work on capsules (Tieleman (2014)), which first introduced such domain-specific, graphics-like decoders for image generation. We empirically validate the choice of such decoders against the standard parametric deconvolutional networks employed by previous works, e.g., the inverse graphics network of Kulkarni et al. (2015b).

Our model can infer visual imaginations without having seen paired annotations, that is, each input image paired with the corresponding ground-truth. Instead, repositories of relevant memories suffice, in the form of collections of albedo, shading, segmented objects, and complete background scenes. This distinguishes it from previous works that rely on supervision for decomposing an image into imaginations (Kulkarni et al. (2015b)), or that train image-conditioned generator networks using a combination of adversarial and L2 reconstruction losses on the imaginations, such as the works of Pathak et al. (2016) for in-painting, Jiajun Wu (2016) for 3D object reconstruction, and Dong et al. (2015) for super-resolution. In these works, the L2 loss is used to condition the imagination on the input image; e.g., in Jiajun Wu (2016), the adversarial loss ensures the generated voxel grid looks like a 3D object and the L2 loss ensures it is the 3D object corresponding to that particular input image instead of an arbitrary one. Instead, we employ a different method of conditioning: we add a reconstruction loss after our graphics-like decoder, where imaginations are projected (rendered) back to image pixels, reconstructing the original image; our model is like an autoencoder in that sense. In this way, we do not need paired supervision, we can take advantage of unlabelled data, and we do not discriminate between training and test phases: adversarial priors useful for inversion can be employed at any time, and the relevant memory repositories may be updated. However, any available annotated pairs can always be used to pretrain the generator network. We did consider such a small amount of pretraining in one of the considered tasks in this paper, to emphasize the power of the adversarial priors.

Our model enables feedback from the input image directly to its memory priors: the relevant memory engine retrieves memories based on matching of attribute/feature descriptors. In this way, priors are tailored to the visual input, which alleviates the rare-sample problem of traditional training methods, which suffer from the imbalance of training data samples: some examples are far more typical than others, and thus more represented in the neural network weights. Hard negative mining has been used to fight such skewness of training distributions. We empirically show that fixing the prior distribution instead of adapting it results in undesirable, wrong imaginations, and makes the balance of adversarial and reconstruction losses dependent on an example-by-example basis.

Figure 2: Our model consists of an imagination generator, a graphics-like imagination renderer, a memory retrieval engine, and discriminator networks for distribution matching between inferred imaginations and retrieved relevant memories. Here we show our model tailored to the task of figure-ground layer inference, where the imaginations are the segmented foreground object and the completed background scene.
In our model, distribution matching of predicted imaginations and retrieved memories concerns local image statistics. In contrast to most previous use cases of adversarial networks, our work (1) conditions imaginations on the input image (it does not generate from random noise) and (2) has a feedback loop that projects imaginations back to the image through rendering and an L2 reconstruction loss. Both (1) and (2) constrain the imagination space, and thus our adversarial distribution matching cares mostly about local statistics rather than global structure, about texture matching rather than semantic content. For example, for grass in-painting, we do not care whether the imagination looks similar or identical to a retrieved grass image; we only want to make sure each part of the imagination follows a grass-like texture. We propose fully-convolutional discriminator networks that predict real-versus-fake binary tests densely across the feature grid, rather than once for the whole image. This accelerates training and makes our model robust to the network input size and across different fields-of-view.

In summary, our contributions are as follows:
- A weakly supervised model for inverse problems given visual input, based on adversarial imagination priors and graphics-like decoders.
- Relevant memory retrieval for informative adversarial priors.
- Fully-convolutional discriminator networks for matching local image statistic distributions, robust to network input size and image field-of-view.

We demonstrate our model in the tasks of image in-painting, figure-ground layer extraction, and intrinsic image decomposition. We show successful imagination prediction without using paired ground-truth annotations. We are working towards updating the draft with inverse problems in videos to convey the generality of the proposed model."}, {"section_index": "2", "section_name": "1 RELATED WORK", "section_text": "Concurrent work of Sønderby et al. (2016) proposes a model for super-resolution that does not require paired supervision, similar to our model. They have an adversarial loss on the high-resolution image, and their decoder is a downsampler. Their model is a special case of our model, in which we consider general graphics-like decoders tailored to each task. Further, they do not consider relevant memory retrieval and do not consider fully-convolutional discriminators.

Vision as an inference problem Both the Computer and Human Vision fields have worked towards models that, given visual observations, attempt to infer hidden properties of the visual scene by inverting the image formation process, "un"-doing camera projection, occlusions, motion blur, down-sampling, and image masking. Examples are inferring 3D shape and camera pose in videos or images in Tomasi & Kanade (1992), decoupling 3D shape, lighting, and albedo interactions in Barron & Malik (2013); Kong et al. (2014), inferring scene depth segmentation layering in Yang et al. (2012), super-resolving low-resolution input in Yang et al. (2010), and filling in pixels in masked ("hole") images (in-painting) Efros & Leung (1999).

Multimodality Inverse problems are ill-posed: there are many imagination solutions whose projection or rendering would result in the same visual image. Multi-modality of the desired hidden representation causes methods that rely on maximum likelihood to suffer from the regression-to-the-mean problem.
Despite this fact, direct feed-forward neural network regressors or classifiers have been trained in a supervised way to achieve such inversion, e.g., depth estimation in Eigen et al. (2014), albedo estimation in Narihira et al. (2015b), volumetric inference in Firman et al. (2016), and super-resolution in Dong et al. (2015). The argument is that with large enough receptive fields, the ambiguities of inversion are diminished. However, such approaches require human supervision, and may not generalize well enough to handle different inputs, adapt to the example at hand effectively, or achieve global consistency of the solution (Narihira et al. (2015a)).

Generative adversarial networks (Goodfellow et al. (2014); Sebastian Nowozin (2016); Radford et al. (2015)) have instead been shown to minimize the Jensen-Shannon divergence between the matched distributions and to exhibit a mode-seeking behaviour (Theis et al. (2016)) desirable for inversion. Our adversarial priors can be thought of as a surrogate for true perceptual losses, which would involve humans in the loop and would be very expensive to obtain in practice.

Priors Other research approaches on inverse problems do not employ learning but rather rely on hand-designed priors, such as sparsity in Yang et al. (2010), spatial smoothness (for optical flow, depth, albedo, etc.), temporal smoothness (for shading in Kong et al. (2014)), low-rank 3D shape or trajectory priors in Akhter et al. (2008); Wu et al. (2016), and deformable 3D scene models in Kulkarni et al. (2015a). Such hand-designed priors, though they do not suffer from generalization issues, cannot exploit the available data effectively.

Our work proposes data-driven priors implemented through adversarial distribution matching between inferred imaginations and retrieved memories. Such priors exploit unlabelled data available in the form of imagination repositories, do not suffer from training and test discrepancies, do not need paired supervision, and alleviate the engineering burden of designing good prior models.

Feedback Feedback in visual processing has been incorporated in recent computational models through iterative processing, where each step produces a better estimate of the relevant quantity, be it image reconstruction Raiko et al. (2014) or body pose estimation Carreira et al. (2015). Such feedback is incorporated in our adversarial prior model through a memory retrieval mechanism which uses coarse feature extraction and attributes on the unoccluded parts of the visual input to retrieve relevant examples, and thus influences the reconstruction on an example-by-example basis, alleviating the problems of data imbalance, finetuning and catastrophic forgetting, and hard negative mining of traditional training paradigms.

Domain specific non-parametric decoders The model architectures we explore are based on the fact that the inferred imaginations are such that the original image can be reconstructed using basic, parameter-free operations: camera projection, which projects inferred 3D shape and camera pose to the 2D scene Handa et al. (2016); pointwise multiplication for image decomposition; and layering, which assembles different imaginations based on their depth and segmentation masks. Our work is inspired by the work of Tieleman (2014), which proposes capsules, a model for image generation by assembling 2D image pieces and their poses predicted from the encoder into one canvas.
Our work is inspired by work of Tieleman (2014) which proposes capsules, a model for image generation by assembling. 2D image pieces and their poses predicted from the encoder into one canvas.."}, {"section_index": "3", "section_name": "2 MODEL", "section_text": "Our model is illustrated in Figure [2 Given a set of images X = {xi, x2,.:: , xn}, and a mem. ory database M, a generator network inverts each image x into a set of imaginations Z1,Z2, :: .ZK. which, (1) when rendered back to pixels, the projection should match the corresponding input im age; and (2) the imagination statistics should match the distribution of relevant memories retrieved. from M through a memory retrieval engine. Our model is trained to minimize the combination o. (1) an image reconstruction loss and (2) an adversarial imagination loss that constraints the imag. ination space(s). The imagination spaces and renderer architecture depend on the inversion task We consider thress tasks in this work: 1) image in-painting, 2) intrinsic image decomposition, 3. figure-ground layer extraction.\nWe denote the generator as a mapping function from input to imaginations G(x), the renderer as a mapping function from the imagination to the input image P(z), the image retrieval engine as a mapping function from memories M and input image x to relevant memories R(M, x), and the discriminator for imposing distribution matching between imaginations and retrieved memories as D. In case of multiple imagination spaces (e.g., shading and albedo, in-painted background and foreground object mask etc.) we will use G,(x) to denote the i-th imagination proposed by the generator. Suppose there are K imagination spaces, the memory retrieval engine will need to re trieved K relevant memory that corresponds to each of the imaginations. Besides, we also need K discriminators to look after each generated imagination. Here we use R,(M, x) and D, to denote\n(a) 28x28x3 28x28x3 14x14x64 7x7x128 14x14x64 C 28x28x3 D P 28x28x3 X m X P(z) (b) C D1 P 112x112x3 56x56x64 28x28x128 14x14x256 7x7x512 14x14x256 28x28x128 56x56x64 112x112x3 D2 P(Z1,Z2) X (c 64x64x3 : 32x32x64 16x16x128 8x8x256 4x4x512 8x8x256 16x16x128 32x32x64 G Zm - -Zm - Z - D2 P(Z1,Z2) X Z2\nFigure 3: Model architecture for (a) image in-painting, (b) intrinsic image decomposition, and (c) figure-ground layer extraction.\nwhere the relative weight of reconstruction and adversarial losses\nthe corresponding retrieved relevant memory and discriminator for the i-th imagination space. Our loss reads as follows:\nn min maxExEx||P(G(x)) -x||2+ log D;(R;(M,x)) + log(1- D;(G;(x)) D G i=1 reconstruction loss adversarial loss\nGiven an image, the generator outputs one or more imaginations. In the tasks we consider, imag. inations have a retinotopic representation, that is, they have the same size as the input image. Our generators are convolutional/deconvolutional neural networks with skip-layer connections from the encoding to the decoding layers. Skip-layer connections much improve the precision of the pro duced imaginations. We share weights of the first convolutional layers across multiple imagination spaces. Figure[3 shows the generator architectures we used for the different inversion tasks.\nImage in-painting The input is a masked image x, an image whose content is covered by a black contiguent mask m. The task is to invert such masking and produce an imagination that corresponds to the complete (in-painted) image before the masking operation, as shown in Figure[3|(a). 
[Figure 3 diagrams: encoder-decoder generators with skip connections and per-layer feature map sizes for 28x28 (in-painting), 112x112 (intrinsic decomposition), and 64x64 (figure-ground) inputs.]
Figure 3: Model architecture for (a) image in-painting, (b) intrinsic image decomposition, and (c) figure-ground layer extraction.

Given an image, the generator outputs one or more imaginations. In the tasks we consider, imaginations have a retinotopic representation, that is, they have the same size as the input image. Our generators are convolutional/deconvolutional neural networks with skip-layer connections from the encoding to the decoding layers. Skip-layer connections much improve the precision of the produced imaginations. We share the weights of the first convolutional layers across multiple imagination spaces. Figure 3 shows the generator architectures we used for the different inversion tasks.

Image in-painting The input is a masked image x, an image whose content is covered by a black contiguous mask m. The task is to invert such masking and produce an imagination that corresponds to the complete (in-painted) image before the masking operation, as shown in Figure 3 (a). The rendering function P in this case is defined as P(z) = m ⊙ z, where ⊙ denotes pointwise multiplication.

Intrinsic image decomposition Given an image x, the generator generates albedo z1 and shading z2, as shown in Figure 3 (b). For Lambertian surfaces, the product of albedo and shading should recover the original image; we thus define our renderer to be P(z1, z2) = z1 ⊙ z2. Note that we need two discriminator networks: one that controls the statistics distribution of the generated albedos and one that controls the statistics distribution of the generated shading imaginations. In practice, instead of pointwise multiplication, we used addition in log space.

Figure-ground layer extraction In this task, given an image, we want to invert the layering superimposition caused by objects against their background and produce imaginations of the segmented objects and the in-painted background scene. Given an image x, the generator outputs a foreground segmentation mask zm, the corresponding image foreground z1 = x ⊙ zm, and an in-painted background z2, such that the in-painted background matches the relevant background memories and the image foreground matches memories of segmented relevant objects with clean (black) backgrounds, as shown in Figure 3 (c). Our renderer in this case is defined as P(z1, z2, zm) = (1 − zm) ⊙ z2 + z1, that is, it overlays the object on the in-painted background.
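All three renderers are parameter-free and one line each. A sketch in tensor code, assuming all tensors share the input image's spatial shape:

```python
def render_inpainting(z, m):
    # P(z) = m ⊙ z : re-apply the mask so the rendering matches the masked input.
    return m * z

def render_intrinsic(z1, z2):
    # P(z1, z2) = z1 ⊙ z2 : albedo times shading (addition in log space in practice).
    return z1 * z2

def render_figure_ground(z1, z2, zm):
    # P(z1, z2, zm) = (1 - zm) ⊙ z2 + z1 : overlay the segmented foreground
    # z1 = x ⊙ zm on the in-painted background z2.
    return (1.0 - zm) * z2 + z1
```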
We propose fully-convolutional discriminator architectures for matching local image statistics between inferred imaginations and retrieved relevant memories. Fully-convolutional discriminators employ many classifiers, instead of one, centered at grid points of the feature maps; these compute confidence scores of a real or fake pattern for each local receptive field, at different layers of the network. Fully-convolutional discriminators allow better generalization from relevant memories to generated imaginations, as they match only local statistics and not global patterns. Further, they are much faster and more stable to train as the number of examples fed into the discriminator increases. In our experimental section, we show empirically that the fully-convolutional adversarial loss accelerates and stabilizes training."}, {"section_index": "4", "section_name": "2.4 MEMORY RETRIEVAL ENGINE R", "section_text": "Given an input image and a memory database of the same imagination types we want to generate, e.g., albedos of natural images, the memory retrieval engine retrieves the most relevant memories. The details of memory retrieval depend on the inversion task.

Image in-painting In this case, we measure the L2 pixel distance between the visible part of the input image and images in our memory database, and retrieve the top nearest neighbors.

Intrinsic image decomposition We retrieve relevant shadings by L2 pixel matching between the grayscale version of the input image and the shading memories, and relevant albedos by pixel matching between the input image and the albedo memories.

Figure-ground layer extraction Our foreground object memories are segmented objects, as shown in Figure 3 (c). We retrieve relevant segmented objects according to an object detector output, which makes sure our segmented object imaginations agree on the object category with the retrieved memories (here the object category of interest is "chair"). For the in-painted background memories, we use the L2 pixel distance between the current image and images from the SUN scene dataset.

In any of the aforementioned inversion tasks, after the model starts to generate reasonable initial imaginations, we can use those to retrieve more relevant memories. Such iterative feedback between memory and visual processing, though very reasonable, we did not consider in this work, to keep the framework simple.

We show results of our model for (1) image in-painting, (2) intrinsic image decomposition, and (3) figure-ground layer extraction. The corresponding model architectures are shown in Figure 3, and further training details are provided in the Appendix."}, {"section_index": "5", "section_name": "3.1 IMAGE IN-PAINTING", "section_text": "We used the MNIST dataset and masked parts of its digit images. Specifically, we randomly selected 2500 samples of digits 0, 1, 2, 3 from the dataset and overlaid a square mask at the center to create our input images. Our memory database M contains 1000 samples for each of the ten digits. We purposefully designed such a distribution mismatch between the input image dataset and the memory database to study the usefulness of retrieved memories under a controlled setup. The set of digit images contained in M does not intersect the set of images we used to create our input images. In other words, the ground-truth imaginations for our input images are not contained in our memory database.

Figure 4 shows the results of four in-painting models: (1) a baseline with an L2 pixel loss between imaginations and retrieved relevant memories (BmemL2); (2) our model with memories retrieved uniformly at random from M rather than conditioned on input images, i.e., with our memory retrieval engine R suppressed (Bmemrand); (3) our model with memories retrieved uniformly at random from M and with a larger weight on the reconstruction loss (BmemrandHR); (4) our model.

Treating retrieved relevant memories as the golden ground-truth produces blurry images, as shown in Figure 4, Row 2. L2 matching optimizes the wrong objective, aside from the fact that it suffers from regression-to-the-mean error even with perfectly correct paired ground-truth, as noted in previous works (e.g., Sønderby et al. (2016)). Bmemrand produces imaginations that look like reasonable digits but do not match the corresponding input image, as shown in Figure 4, Row 3, rightmost column. Such discrepancy between memories and desired imagination distributions cannot be corrected by increasing the reconstruction loss over the adversarial loss (BmemrandHR), shown in Figure 4, Row 4: the resulting imaginations do not look like correct digits anymore. Our model correctly in-paints the masked digits, as shown in Figure 4, Row 5.

For each input digit image we show in Figure 4, Row 6 the closest retrieved memory from our engine R. By comparing the output of our model with the closest memory, we see that we learn to interpolate on the memory space and form an imagination that fits the current input image, without copy-pasting, as a nearest-neighbor memory engine alone would do.

[Figure 4 grid: masked MNIST inputs and the in-paintings produced by each model.]
Figure 4: Results of image in-painting on the MNIST dataset. Row 2: BmemL2 treats retrieved relevant memories as ground-truth imaginations and penalizes the L2 loss between them. Row 3: Bmemrand, our model without memory retrieval but with uniform-at-random memory access; imaginations do not respect the input image conditioning. Row 4: BmemrandHR, our model without memory retrieval but with an increased weight on the reconstruction loss; imaginations do not look like correct digits. Row 5: Our model; it produces digit imaginations correct in shape and texture, in contrast to the baselines above. Row 6: Top closest relevant memories retrieved by our engine. Row 7: Random memory retrieval.
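The masked inputs of this experiment are straightforward to construct. A NumPy sketch, where the 12-pixel mask size is an assumption (the text only specifies a centered square mask):

```python
import numpy as np

def make_masked_input(digit, mask_size=12):
    """Overlay a black square mask at the center of a 28x28 MNIST digit."""
    m = np.ones_like(digit)
    lo = (digit.shape[0] - mask_size) // 2
    m[lo:lo + mask_size, lo:lo + mask_size] = 0.0
    return m * digit, m   # masked image x and its mask
```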
"}, {"section_index": "6", "section_name": "3.2 INTRINSIC IMAGE DECOMPOSITION", "section_text": "We use the MIT intrinsic image dataset of Grosse et al. (2009). We use ten objects for training and ten objects for testing. During training, our inputs are images of the training objects and our memory database contains albedos and shadings of the training objects. At test time, we simply evaluate our generator on images of the test objects, without finetuning our model. We used random image cropping for data augmentation, as described in the Appendix.

Figure 5 (left) shows results of our model, which never uses paired annotations, that is, it does not have access to the pairing of each RGB image with its ground-truth albedo and shading. The results are comparable to an oracle model that has access to such paired supervision and optimizes a regression loss, similar to the previous work of Narihira et al. (2015b), shown in Figure 5 (right). Our model effectively generalizes to unseen objects (Figure 5, bottom right). Figure 6 shows how fully-convolutional discriminators on albedo and shading stabilize training and converge faster.

[Figure 5 grid: input I, inferred albedo A, inferred shading S, and reconstruction R for our weakly supervised model (left), a fully supervised regression model (right), and unseen test objects (bottom).]
Figure 5: MIT intrinsic decomposition with unpaired shading and albedo. I: input image, A: inferred albedo, S: inferred shading, R: reconstructed image using A and S. Left: inferred albedo and shading using our weakly supervised method. Right: inferred albedo and shading using a fully supervised model that minimizes a regression loss. The bottom part shows the results of decomposition on unseen objects.

[Figure 6 plot: L2 distance to the ground-truth albedo over 0-25k training steps for fully-connected, fully-convolutional, and combined discriminators.]
Figure 6: L2 distance between the inferred albedo imagination and the ground truth. Fully-convolutional discriminators (purple and green lines) converge faster than fully-connected ones, which employ only one fake/real classifier per image.

We use the Seeing 3D Chairs dataset of Aubry et al. (2014) as the object memory database, which contains 1200 different chairs, and the SUN scene dataset Xiao et al. (2010) as the background memory database, which contains 131,000 images. Our input images are generated by randomly selecting a SUN image, cropping it to 64 x 64, and overlaying a chair image on top of it.

We use 200 background images and 200 chair images to generate a small subset of "labelled data", where we provide the network with the ground-truth mask and background and train the network using a regression loss, as described in the Appendix. Such small-scale supervised pretraining suffices for the stability of our model in this task; it is very realistic to assume the existence of such strong sparse supervision.
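A minimal example of the fully-convolutional discriminator compared in Figure 6: a 1x1 convolution scores every location of the last feature map instead of a single fully-connected real/fake classifier. Layer sizes here are illustrative, not the exact ones used in the paper.

```python
import torch
import torch.nn as nn

class FullyConvDiscriminator(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
        )
        self.score = nn.Conv2d(256, 1, kernel_size=1)  # one real/fake score per grid cell

    def forward(self, x):
        return torch.sigmoid(self.score(self.features(x)))  # (N, 1, H/8, W/8) score map
```

The binary cross-entropy is then averaged over all spatial positions, which is what makes the loss independent of the input size and field-of-view.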
Figure 7 shows the inferred masks and backgrounds we obtain.

[Figure 7 grid: input, inferred mask, extracted chair, inferred background, and recovered image.]
Figure 7: Results for figure-ground layer extraction. Row 1: input images. Rows 2, 4: the segmentation mask and in-painted background proposed by the generator. Row 5: by superimposing the inferred mask on the in-painted background, the network outputs the recovered image, which should match the input."}, {"section_index": "7", "section_name": "4 CONCLUSION", "section_text": "We have presented a weakly supervised inverse model of images that predicts imaginations of hidden representations, which are then rendered, through image formation or layering, to reconstruct the original image. It regularizes the inferred hidden representations using convolutional adversarial priors, by distribution matching against retrieved relevant memories. It does not assume paired supervision and can handle multimodal imagination spaces. We have empirically validated our design choices of fully-convolutional adversarial discriminator networks and relevant memory retrieval. We believe the proposed learning paradigm better exploits unlabelled data in the form of images, depth maps, albedo, shading, or segmentation maps, and complements human paired annotations well.

We are working towards updating the paper with two inversion problems in videos, visual odometry and motion object segmentation, to convey the generality of the proposed model. Videos allow for a much stronger observation module, with imagination projections from frame to frame, as well as temporal constraining of the imaginations in time. Further, we are working towards quantifying the generalization of our imaginations from training to test images, specifically measuring how well our model can do with increasing dissimilarity between the memories in the database and the input images."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

Ijaz Akhter, Yaser Ajmal Sheikh, Sohaib Khan, and Takeo Kanade. Nonrigid structure from motion in trajectory space. In Neural Information Processing Systems, December 2008.

Mathieu Aubry, Daniel Maturana, Alexei Efros, Bryan Russell, and Josef Sivic. Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of cad models. In CVPR, 2014.

Jonathan Barron and Jitendra Malik. Shape, illumination, and reflectance from shading. Technical Report UCB/EECS-2013-117, EECS Department, University of California, Berkeley, May 2013. URL http://www2.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-117.html

Joao Carreira, Pulkit Agrawal, Katerina Fragkiadaki, and Jitendra Malik. Human pose estimation with iterative error feedback. 2015.

Michael Firman, Oisin Mac Aodha, Simon Julier, and Gabriel J. Brostow. Structured Prediction of Unobserved Voxels From a Single Depth Image. In CVPR, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf

Roger Grosse, Micah K. Johnson, Edward H. Adelson, and William T. Freeman. Ground-truth dataset and baseline evaluations for intrinsic image algorithms. In International Conference on Computer Vision, pp. 2335-2342, 2009. doi: 10.1109/ICCV.2009.5459428.

Ankur Handa, Michael Blösch, Viorica Pătrăucean, Simon Stent, John McCormac, and Andrew J. Davison. gvnn: Neural network library for geometric computer vision. CoRR, abs/1607.07405,
2016. URL http://arxiv.org/abs/1607.07405

Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, and Vikash K. Mansinghka. Picture: A probabilistic programming language for scene perception. In CVPR, pp. 4390-4399. IEEE Computer Society, 2015a. ISBN 978-1-4673-6964-0.

Tejas D. Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network. CoRR, abs/1503.03167, 2015b. URL http://arxiv.org/abs/1503.03167

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In Neural Information Processing Systems, October 2016.

Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei Efros. Context encoders: Feature learning by inpainting. 2016.

Barry Smith. Gestalt Theory: An Essay in Philosophy. In Barry Smith (ed.), Foundations of Gestalt Theory, pp. 11-81. Philosophia Verlag, December 1988.

Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.

Tijmen Tieleman. Optimizing neural networks that generate images. Ph.D. Thesis, 2014.

Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. 2016.

Stella Yu. Computational models of perceptual organization. Technical report, Robotics Institute, Carnegie Mellon University, 2003."}, {"section_index": "9", "section_name": "APPENDIX: TRAINING DETAILS", "section_text": "All models are implemented in TensorFlow (Abadi et al. (2015)).

Image in-painting Details of the generator architecture are illustrated in Figure 3 (a). The discriminator consists of one convolutional layer and one fully-connected layer on top. Both the generator and the discriminator use batch normalization, with ReLU activations for the generator and leaky ReLU activations for the discriminator. We initialize all weights by sampling from a normal distribution with standard deviation 0.02. We use the Adam optimizer with a fixed learning rate of 0.0002.

Intrinsic image decomposition For each of the images in the MIT dataset, we randomly crop a region of 112 x 112. For memory retrieval, we find the top nearest neighbors from 100 crops, using the method described in Section 2.4.

Details of the generator architecture are illustrated in Figure 3 (b). Each convolutional layer passes through a batch normalization layer, leaky ReLU activation, and max pooling before being sent to the next convolutional layer. The discriminators for both albedo and shading contain four convolutional layers with batch normalization and leaky ReLU activations. The fully-convolutional adversarial loss is built on top of the fourth layer.
We initialize all weights by sampling from a normal distribution with standard deviation 0.02. We use the Adam optimizer with a fixed learning rate of 10e-6. We put a 0.1 weight on the L2 recovery loss.

Figure-ground layer extraction Details of the generator architecture are illustrated in Figure 3 (c). Each convolutional layer passes through a batch normalization layer, leaky ReLU activation, and max pooling before being sent to the next convolutional layer. The discriminators for both objects and background contain four convolutional layers with batch normalization and leaky ReLU activations. We initialize all weights by sampling from a normal distribution with standard deviation 0.02. The generator is pre-trained using 200 images annotated with the ground-truth in-painted background and foreground object mask, using an L2 pixel loss. Then, the model is finetuned using only the described adversarial imagination loss and image reconstruction loss. We use the Adam optimizer with a fixed learning rate of 10e-5. Such pretraining, though small in scale, much helped the stability of our model. We have also experimented with adding noise to the retrieved memories to make the task of the discriminator harder at the beginning of training, as in Sønderby et al. (2016). Small-scale supervised pretraining suffices for the stability of the model in this task, and it is very realistic to assume the existence of such strong sparse supervision."}]
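The shared initialization and optimizer settings of the appendix translate directly to code. A sketch, where the N(0, 0.02) initialization and the Adam learning rates are the stated values and `model` is a placeholder:

```python
import torch
import torch.nn as nn

def init_weights(module):
    # All weights are sampled from a normal distribution with std 0.02, as in the appendix.
    if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.normal_(module.weight, mean=0.0, std=0.02)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# model.apply(init_weights)
# opt = torch.optim.Adam(model.parameters(), lr=2e-4)  # e.g., the MNIST in-painting rate
```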
ry7O1ssex
[{"section_index": "0", "section_name": "GENERATIVE ADVERSARIAL NETWORKS AS VARIA TIONAL TRAINING OF ENERGY BASED MODELS", "section_text": "Shuangfei Zhai\nIBM T.J. Watson Research Center Yorktown Heights. NY 10598. USA\nIn this paper, we study deep generative models for effective unsupervised learn- ing. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from. The training of VGAN takes a two step procedure: given p(x) q(x) is updated to maximize the lower bound; p(x) is then updated one step with samples drawn from q(x) to decrease the lower bound. VGAN is inspired by the generative adversarial networks (GANs), where p(x) corresponds to the discrim inator and q(x) corresponds to the generator, but with several notable differences We hence name our model variational GANs (VGANs). VGAN provides a practi- cal solution to training deep EBMs in high dimensional space, by eliminating the need of MCMC sampling. From this view, we are also able to identify causes to the difficulty of training GANs and propose viable solutions."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning is a long standing challenge of machine learning and deep learning. One major difficulty of effective unsupervised learning lies in the lack of an accurate distance metric. Euclidean distance has been widely adopted as the default metric from shallow methods, such as K-means and Gaussian mixture models, to deep models such as autoencoder variants (e.g.,Vincent et al.(2010)). From a probabilistic point of view, the use of Euclidean distance assumes Gaussian distributions (or mixtures thereof) in the input space, which is a strong assumption and is often times inaccurate for high dimensional data (such as images). Generative adversarial networks (GANs) Goodfellow et al.2014 are a particularly interesting approach as it does not assume the data dis- tribution to take any specific form, which therefore eliminates the need of a predefined distance metric of samples. GANs work as a mini-max two player game, where a generator G(z) is trained to generate samples that can fool the best discriminator D. When both G and D are formulated as deep convolutional networks, it is shown that the generator can learn to generate surprisingly real- istic looking images Radford et al.(2015). Energy-based models (EBMs) LeCun et al.(2006) are another powerful family of unsupervised learning models. Similarly to GANs, EBMs make mini- mal assumptions about the data distribution, as it directly parameterizes the negative long density of data E(x) = log p(x) as a deterministic function of x. It is clear that by properly choosing the capacity of E(x), an EBM can be trained to approximate an arbitrary density function perfectly well.\nIn this paper, we propose VGAN, which bridges GANs and EBMs and combines the benefits from both worlds. In particular, we show that the mini-max game of GANs is approximately equivalent to\n'Experimental code is available at https://github.com/Shuangfei/vgan\nYu Cheng\nIBM T.J. Watson Research Center Yorktown Heights. NY 10598. USA\nzhongfei@cs.binghamton.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "- E (x) terzied sampler from the model distribution defined by p(x) = From this view. 
GANs provide a viable solution for the maximum likelihood estimation of EBMs, which is known to be challenging due to the difficulty of evaluating the partition function, which integrates over the input space. We discuss the important design choices of the energy functions in order to make VGAN numerically stable, and propose a novel energy formulation that is bounded and explicitly multi-modal. Moreover, from the EBM point of view, we are also able to identify the reasons that make GANs unstable to train, namely the missing entropy term of the generator distribution, which causes the generator to collapse to a single or few local minima of the energy landscape. As a solution, we propose to parameterize the generator as a transition distribution (that is, p_z(x̃|x) instead of p_z(x)), in analogy to the one used in the Gibbs sampling procedure. We show that this variant corresponds to a variational version of contrastive divergence (Hinton, 2002a), and circumvents the need to directly approximate the cumbersome entropy term. In our experiments on MNIST, CIFAR10, and SVHN, we show that we are able to learn generators that generate sharp and diversified images. Moreover, the learned transition distributions are able to effectively capture the data manifold by consecutively sampling realistic looking samples starting from testing images. Finally, as a quantitative evaluation of the learned model, we use the transition distribution for data augmentation, from which we are able to show consistent gains in classification accuracy with few training labels on MNIST and SVHN.

Generative adversarial networks (Goodfellow et al., 2014) work by solving the following mini-max game:

max_G min_D  E_{x~p_data(x)}[-log D(x)] - E_{z~p_z(z)}[log(1 - D(G(z)))]    (1)

where p_data(x) is the data distribution; D(x) is the discriminator that takes as input a sample and outputs a scalar in [0, 1]; G(z) is the generator that maps a sample z ∈ R^d drawn from a simple distribution p(z) to the input space. Typically both D and G are parameterized as deep neural networks. Equation 1 suggests a training procedure consisting of two loops: in the inner loop D is trained till convergence given G, and in the outer loop G is updated one step given D (note that in Goodfellow et al. (2014), the authors propose to maximize log(D(G(z))) instead of -log(1 - D(G(z))) in the outer loop). As the two-player mini-max game reaches the Nash equilibrium, G defines an implicit distribution p_g(x) that recovers the data distribution, i.e., p_g(x) = p_data(x).

An EBM formulates a density function as:

p(x) = e^{-E(x)} / ∫_x e^{-E(x)} dx    (2)

where E(x) is defined as the energy of input x. One particularly powerful case is deep energy based models (deep-EBMs) (Ngiam et al., 2011; Zhai et al., 2016), where E(x) is directly parameterized as the output of a deep deterministic neural network. An obvious way to train an EBM is to minimize the negative log likelihood (NLL):

J(E) = E_{x~p_data(x)}[E(x)] + log ∫_x e^{-E(x)} dx    (3)

Directly minimizing J(E) is difficult due to the integration term over the input space. As a remedy, one can rewrite Equation 3 as follows:

J(E) = E_{x~p_data(x)}[E(x)] + log ∫_x q(x) (e^{-E(x)} / q(x)) dx
     = E_{x~p_data(x)}[E(x)] + log E_{x~q(x)}[e^{-E(x)} / q(x)]
     ≥ E_{x~p_data(x)}[E(x)] + E_{x~q(x)}[log (e^{-E(x)} / q(x))]
     = E_{x~p_data(x)}[E(x)] - E_{x~q(x)}[E(x)] + H(q)    (4)

where q(x) is an arbitrary distribution, which we call the variational distribution, with H(q) denoting its entropy. Equation 4 is a natural application of Jensen's inequality, and it gives a variational lower bound of the NLL. The equality is achieved when e^{-E(x)} / q(x) is a constant independent of x, i.e., q(x) ∝ e^{-E(x)}, which implies that q(x) = p(x).
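As a concrete check of the bound in Equation 4, the following toy NumPy snippet (our illustration, not from the paper) evaluates both sides on a small discrete state space and verifies that the bound is tight exactly when q(x) ∝ e^{-E(x)}.

```python
import numpy as np

rng = np.random.RandomState(0)
K = 10                                  # size of a toy discrete input space
E = rng.randn(K)                        # an arbitrary energy assignment E(x)
p_data = rng.dirichlet(np.ones(K))      # a toy data distribution

log_Z = np.log(np.exp(-E).sum())
nll = (p_data * E).sum() + log_Z        # exact J(E), Equation 3

def lower_bound(q):
    # E_{p_data}[E(x)] - E_q[E(x)] + H(q), the right-hand side of Equation 4
    H_q = -(q * np.log(q)).sum()
    return (p_data * E).sum() - (q * E).sum() + H_q

q_rand = rng.dirichlet(np.ones(K))      # an arbitrary variational distribution
q_opt = np.exp(-E) / np.exp(-E).sum()   # q(x) proportional to e^{-E(x)}

print(nll, lower_bound(q_rand))         # bound <= nll, generally loose
print(nll, lower_bound(q_opt))          # equal up to floating point: q = p
```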
This suggests an optimization procedure as follows:

min_E max_q  E_{x~p_data(x)}[E(x)] - E_{x~q(x)}[E(x)] + H(q)    (5)

where in the inner loop, given the energy model E(x), the variational lower bound is maximized w.r.t. q; the energy model is then updated one step to decrease the NLL with the optimal q.

In practice, q(x) can be chosen as a distribution that is easy to sample from and differentiable; the inner loop can be achieved by simple stochastic gradient descent. It turns out that the generator used in GANs exactly satisfies such requirements, which directly connects GANs to EBMs. To see this, replace q(x) with the generator distribution p_g(x) (that is, the implicit distribution produced by x = G(z), z ~ p_z(z)); then Equation 5 turns into:

min_E max_G  E_{x~p_data(x)}[E(x)] - E_{x~p_g(x)}[E(x)] + H(p_g)    (6)

Further parameterizing the energy as E(x) = -log D(x) turns Equation 6 into:

min_D max_G  E_{x~p_data(x)}[-log D(x)] - E_{z~p_z(z)}[-log D(G(z))] + H(p_g)    (7)

One can now immediately recognize the resemblance of Equation 7 to Equation 1. Both of them take the form of a mini-max optimization problem, where D is trained to increase D(x) for x ~ p_data(x) and decrease D(x) for x ~ p_g(x), while G is trained to increase D(x) for x ~ p_g(x). In other words, GANs behave similarly to variational training of an EBM, where the variational distribution q(x) takes the specific form of p_g(x), which is easy to sample from. In light of this connection, we call the family of models defined by Equation 6 the variational generative adversarial networks (VGANs). The nice property of VGANs over traditional EBM training strategies is that they simplify the sampling procedure by defining a differentiable, deterministic mapping from a simple distribution (e.g., a uniform distribution) to the input space. There are, however, several notable differences between VGANs and GANs:

1. The order of minimization and maximization. GANs optimize D till convergence given G, while VGANs optimize G till convergence given D. The outcome of a GAN is then a generator that can fool the best discriminator possible, while the outcome of a VGAN is an energy model parameterized by D, and a variational distribution G that can sample from the exact distribution defined by D. Also note that with the optimization procedure of GANs, there is no guarantee that D defines a viable EBM, as the variational lower bound can be arbitrarily low due to the swapping of the min and max loops.

2. The parameterization of the energy. A GAN's discriminator corresponds to the unbounded energy -log D(x), whereas the goal of an EBM is to minimize the energy of training data; this difference is significant in practice. To see this, note that the optimum in Equation 5 is invariant w.r.t. an affine transformation of the energy; that is, let E*(x) be an optimal solution to Equation 5; then E(x) = aE*(x) + b is also an optimal solution for a ∈ R+, b ∈ R. This property makes unbounded energies inappropriate for VGANs, as it often causes the scale of the energy to explode. Even worse, an energy parameterization like that of RBMs has stronger gradients as the energy decreases, and this essentially encourages the energy of both training samples and generated samples to grow to negative infinity.

3. The optimal energy assignment. A related problem to the energy parameterization is that, when optimizing D, the term subject to expectation under p_z(z) in GANs is log(1 - D(G(z))), whereas VGANs use -log D(G(z)). While both have the same direction of gradients w.r.t. D(x) and D(G(z)) (increasing the former and decreasing the latter), and the optimal solution to both models is reached when p_data(x) = p_g(x), they differ in the optimal D. The optimal D for a GAN is fixed as D(x) = 0.5.

4. The entropy term of the generator distribution H(p_g).
Last but not least, GANs do not include the entropy term while optimizing G. In VGANs, including the entropy term guarantees that p_g(x) recovers the density encoded by D, and that the variational lower bound is tightened as such in the inner loop. Without the entropy term, G can be easily but misleadingly optimized by collapsing into one of the few local minima of the energy landscape. In fact, this accounts for most of the failures of training GANs, as pointed out in the GAN related literature (Radford et al., 2015; Salimans et al., 2016; Kim & Bengio, 2016; Zhao et al., 2016). Of course, an immediate challenge that one needs to solve is the approximation of H(p_g). This amounts to a well known problem of differentiable entropy approximation (see Hyvärinen (1999), for example). The fact that the approximation needs not only to be accurate, but also to be easily optimized w.r.t. G, makes it even more intimidating and cumbersome."}, {"section_index": "3", "section_name": "5 BOUNDED MULTI-MODAL ENERGY", "section_text": "The first strategy we attempt in order to stabilize the generator is to design a well behaved energy, such that the generator can be easily optimized. We start by noticing that an energy of the form -log D(x) is inherently uni-modal. To see this, let D(x) = σ(w^T φ(x) + b), where σ is the sigmoid function and φ(x) denotes a feature mapping of x encoded by a deep neural network. Then, in order to maximize D(x) so as to minimize the energy, all samples x are encouraged to have feature projections φ(x) proportional to the weight vector w. This is not a problem with the regularization of H(p_g), maximizing which diversifies φ(x), but without the entropy term, or with a poor approximation of it, this may cause the generator to collapse. Consequently, we propose a bounded multi-modal energy formulation as follows:

E(x) = Σ_{j=1}^{K} H(σ(W_j^T φ(x) + b_j))    (8)

where W_j ∈ R^d, b_j ∈ R, φ(x) is the feature mapping, and H(p) is slightly overloaded as the entropy of a binomial distribution defined by p, i.e., H(p) = -p log p - (1 - p) log(1 - p). This energy formulation can be viewed as an instance of the product of experts model (PoE) (Hinton, 2002b), where each set of parameters W_j, b_j defines an expert. The nice properties of this energy parameterization are that it is 1) bounded between [0, K], 2) equipped with strong gradients in the high energy area (σ(·) close to 0.5) and vanishing gradients in the low energy area (σ(·) close to 0 or 1), and 3) multi-modal by design. To see the last point, simply note that H(p) achieves its minimum at p = 0 and p = 1. Thus, for such a PoE energy with K experts, there exist 2^K equally likely local minima by design. With this energy formulation, it is also relatively easy to come up with a reasonable approximation of H(p_g), which is chosen as:

H(p_g) ≈ Σ_{j=1}^{K} H( (1/N) Σ_{i=1}^{N} σ(W_j^T φ(G(z^i))) )    (9)

where z^i denotes the i-th noise sample in a batch of size N. Although there is no theoretical guarantee that Equation 9 recovers the true entropy H(p_g) to any extent, maximizing it serves the same purpose of encouraging the generated samples to be diverse, as H(p_g) reaches its minimum if G(z) collapses to one single point.
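The energy of Equation 8 and the batch-level entropy approximation of Equation 9 are straightforward to compute. The following NumPy sketch is our own, with arbitrary shapes and random features standing in for φ(·); it shows both quantities, and illustrates how a collapsed batch typically drives the approximation down.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def binary_entropy(p, eps=1e-8):
    p = np.clip(p, eps, 1.0 - eps)
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def energy(feats, W, b):
    # Equation 8: E(x) = sum_j H(sigmoid(W_j^T phi(x) + b_j)), per example.
    return binary_entropy(sigmoid(feats @ W + b)).sum(axis=1)

def entropy_approx(feats, W, b):
    # Equation 9: binary entropy of each expert's *batch-averaged* activation.
    mean_act = sigmoid(feats @ W + b).mean(axis=0)
    return binary_entropy(mean_act).sum()

rng = np.random.RandomState(0)
d, K, N = 32, 8, 64                      # feature dim, experts, batch size
W, b = rng.randn(d, K), rng.randn(K)
feats = rng.randn(N, d)                  # stand-in for phi(G(z)) on a batch

e = energy(feats, W, b)
print(e.min(), e.max())                  # within [0, K*log(2)] for natural-log H
print(entropy_approx(feats, W, b))       # large for a diverse batch

# A collapsed generator (all samples identical) usually drives the
# approximation down, since the batch-averaged activations saturate:
collapsed = np.repeat(feats[:1], N, axis=0)
print(entropy_approx(collapsed, W, b))
```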
Moreover, in the outer loop, while minimizing the NLL w.r.t. E, we find it helpful to also maximize H(p_data) = Σ_{j=1}^{K} H( (1/N) Σ_{i=1}^{N} σ(W_j^T φ(x^i)) ), where x^i denotes the i-th training example; this acts as a regularizer on E that encourages the average activation of each expert σ_j(·) to be close to 0.5. The training algorithm of VGAN with the proposed bounded multi-modal energy is summarized in Algorithm 1.

Algorithm 1 The optimization procedure of VGAN
1: for number of training iterations do
2:   for k steps do
3:     sample N noise data {z^1, ..., z^N}; update G by one step of gradient ascent on
         -(1/N) Σ_{i=1}^{N} E(G(z^i)) + H(p_g)
4:   end for
5:   sample N training data {x^1, ..., x^N}; sample N noise data {z^1, ..., z^N};
6:   update E with one step of gradient descent on
       (1/N) Σ_{i=1}^{N} E(x^i) - (1/N) Σ_{i=1}^{N} E(G(z^i)) - H(p_data)
7: end for

"}, {"section_index": "4", "section_name": "6 VARIATIONAL CONTRASTIVE DIVERGENCE WITH TRANSITION DISTRIBUTIONS", "section_text": "Although it is possible to discourage the generator from collapsing into a single output by carefully designing the energy function as described in Section 5, there is no good way to monitor the quality of the approximation of the entropy term other than manually observing the generated samples. Also, there is no guarantee that the designed approximation is accurate enough for the variational lower bound to be tight enough to provide correct gradients for updating the energy parameters. In this section, we propose an additional approach that bypasses the cumbersome entropy approximation problem. The idea is that instead of generating samples directly from p_z(z), we define a transition operator p_z(x̃|x) conditioned on a training sample x. This corresponds to defining the variational distribution q(x̃) in Equation 4 as q(x̃) = ∫ p_data(x) p_z(x̃|x) dx. If we further restrict the transition distribution p_z(x̃|x) to be one that is closely centered around x, then the entropy term H(q) can be well approximated by the data entropy H(p_data), which is a constant. The variational lower bound is thus increased by decreasing the energy of x̃ ~ p_z(x̃|x). Of course, this parameterization limits the shape of the variational distribution, and the variational lower bound might never be tightened, especially in the early stage of training when the model distribution differs significantly from the data distribution; this can nonetheless provide meaningful gradients to update the energies. In fact, this sampling procedure is closely related to contrastive divergence (CD) (Hinton, 2010) (one-step CD, to be exact), whereas in CD the transition distribution can be easily obtained for specific types of EBMs (e.g., RBMs). Our approach, on the other hand, uses a parameterized variational distribution to approximate the true transition distribution; we thus name it variational contrastive divergence (VCD).

The implementation of p_z(x̃|x) is illustrated in Figure 1. Let h = Encode(x) be an encoder that maps an input x to a bottleneck vector h ∈ R^d, and let x̄ = Decode(h) be the output of a decoder that maps h back to the input space. A sample from p_z(x̃|x) can then be drawn as follows: 1) generate a binary mask vector m ∈ R^d, each entry drawn with probability 0.5; 2) generate a noise vector z ~ p_z(z), z ∈ R^d; 3) produce a new vector h̃ = m * z + (1 - m) * h; and 4) obtain the generated sample by passing h̃ to the same decoder, x̃ = Decode(h̃). The generator then tries to minimize the following objective:

ρ · E_{x~p_data(x), x̃~p_z(x̃|x)}[E(x̃)] + (1 - ρ) · E_{x~p_data(x)}[||x - x̄||²]    (10)

Figure 1: Illustration of VGAN with variational contrastive divergence. On the left panel, the energies of real data x and generated data x̃ are computed, with the generator shown on the right. For the generator on the right panel, each x ~ p_data(x) is passed through an encoder to obtain h, which is then passed through a decoder to achieve a reconstruction x̄. h is then mixed with a noise vector z of the same dimensionality by a randomly generated binary mask vector m to obtain h̃ following h̃ = m * z + (1 - m) * h. h̃ is then passed through the same decoder to obtain the generated sample x̃.
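The four sampling steps above can be sketched directly. In the snippet below (our illustration), the encoder and decoder are toy linear maps rather than the deep networks of Figure 1, so only the structure of the transition operator should be read literally.

```python
import numpy as np

rng = np.random.RandomState(0)
D, d = 64, 16                            # input dim, bottleneck dim

# Toy linear stand-ins for the encoder/decoder networks in Figure 1.
W_enc = rng.randn(d, D) / np.sqrt(D)
W_dec = rng.randn(D, d) / np.sqrt(d)

def encode(x):
    return np.tanh(W_enc @ x)

def decode(h):
    return W_dec @ h

def transition_sample(x):
    """One draw from the transition distribution p_z(x_tilde | x)."""
    h = encode(x)
    m = rng.binomial(1, 0.5, size=d)     # 1) binary mask, entries Bernoulli(0.5)
    z = rng.uniform(-1, 1, size=d)       # 2) noise z ~ p_z(z)
    h_tilde = m * z + (1 - m) * h        # 3) mix noise into half the dimensions
    return decode(h_tilde)               # 4) decode with the same decoder

x = rng.randn(D)
x_bar = decode(encode(x))                # reconstruction, the second term of Eq. 10
x_tilde = transition_sample(x)
print(np.linalg.norm(x - x_bar), np.linalg.norm(x - x_tilde))
```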
The design of the generation formula for h̃ is also critical. Randomly replacing half of the dimensions of h with random noise z makes sure that h̃ is sufficiently different from h. Otherwise, the autoencoder can easily denoise h̃ to make x̃ collapse back to x, regardless of z. Also, mixing noise into the bottleneck layer of an autoencoder makes the generation process easier, as it is known that with high level features the mixing rate of MCMC sampling is significantly higher than in the input space (Bengio et al.). In addition, the formulation of our transition operator does not make any Gaussian (or mixture of Gaussians) distribution assumptions, despite the use of the reconstruction error. This is due to the use of a deep decoder, such that the generated sample can be far away from the sample conditioned on when calculating the Euclidean distance. This conjecture is also supported in our experiments; see Figure 5. The training algorithm for VCD is summarized in Algorithm 2.

Algorithm 2 The optimization procedure of VCD
1: for number of training iterations do
2:   for k steps do
3:     sample N training data {x^1, ..., x^N}; sample N noise data {z^1, ..., z^N};
4:     sample N binary mask vectors {m^1, ..., m^N};
5:     update G by one step of gradient ascent on
         -(1/N) Σ_{i=1}^{N} E(G(x^i, z^i, m^i)) + H(p_g)
6:   end for
7:   sample N training data {x^1, ..., x^N}; sample N noise data {z^1, ..., z^N};
8:   sample N binary mask vectors {m^1, ..., m^N};
9:   update E with one step of gradient descent on
       (1/N) Σ_{i=1}^{N} E(x^i) - (1/N) Σ_{i=1}^{N} E(G(x^i, z^i, m^i)) - H(p_data)
10: end for

Table 1: CIFAR-10 test error rates of the linear classifiers trained on the second-to-top discriminator layer (φ(x)) of GAN and VGAN, with generator update steps k = 1 and k = 3."}, {"section_index": "5", "section_name": "7.1 VGAN SAMPLES", "section_text": "As a proof of concept, in the first set of experiments we test the efficacy of the proposed VGAN algorithm as given in Algorithm 1. To do this, we train a VGAN on the 50,000 training images of CIFAR-10 with a moderately sized energy (discriminator) and generator. The energy is encoded by a convolutional neural network (CNN) with two convolutional layers, two max pooling layers, and two fully connected layers, where the last fully connected layer is used to compute the energy as defined in Equation 8 (with K = 100). The generator is encoded by a deconvolutional neural network with two consecutive fully connected layers, the latter of which is reshaped and followed by two deconvolutional layers that perform upsampling convolution. Both the energy and the generator use ReLU as the nonlinearity, and only the generator is equipped with batch normalization (Ioffe & Szegedy, 2015)². Both the energy and the generator are updated with Adadelta (Zeiler, 2012) using learning rate 0.1. As a direct comparison, we have also trained a GAN with the exact same architecture and training protocol, except that the top layer of the discriminator is replaced with a single sigmoid unit. We train both VGAN and GAN for 100 epochs, while varying the number of generator updates per iteration from 1 to 3 (k in Algorithm 1).
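To make the control flow of Algorithm 1 and the role of k concrete, here is a self-contained toy rendering in NumPy. The quadratic "energy" and linear "generator" are stand-ins for the deep networks, the entropy terms H(p_g) and H(p_data) are dropped, and the hand-derived gradient steps apply only to this toy; none of this is the authors' implementation.

```python
import numpy as np

rng = np.random.RandomState(0)

theta = np.zeros(2)                          # energy parameters (a center)
mu = np.ones(2)                              # generator parameters (a shift)

def energy(x):                               # E(x) = ||x - theta||^2, low near theta
    return ((x - theta) ** 2).sum(axis=1)

def generator(z):                            # G(z) = z + mu
    return z + mu

data = rng.randn(1000, 2) + np.array([2.0, -1.0])
N, k, lr = 64, 3, 0.05

for it in range(200):
    for _ in range(k):                       # inner loop: update G for k steps
        z = rng.uniform(-1, 1, size=(N, 2))
        # ascend -mean E(G(z)); for this toy that pulls mu toward theta
        grad_mu = 2 * (generator(z) - theta).mean(axis=0)
        mu -= lr * grad_mu
    x = data[rng.choice(len(data), N)]
    z = rng.uniform(-1, 1, size=(N, 2))
    # descend mean E(x) - mean E(G(z)) w.r.t. theta
    grad_theta = (-2 * (x - theta) + 2 * (generator(z) - theta)).mean(axis=0)
    theta -= lr * grad_theta

print(theta, mu)    # theta drifts toward the data mean, and G tracks theta
```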
Note that the original GAN paper (Goodfellow et al., 2014) proposes to update the discriminator k steps per iteration; we did the opposite. We show the generated samples from the generator in Figure 2. Here the first row corresponds to k = 1, and the second row corresponds to k = 3. For each row, on the left are 100 generations from GAN, and on the right are 100 generations from VGAN. We see that for both step numbers, VGAN is able to generate visually appealing images that are difficult to distinguish from samples from the test set. GAN, on the other hand, clearly fails to generate diversified or realistically looking images when k = 1, but works much better when k = 3. This can be easily understood from the variational point of view, where a larger step number k for the generator makes the lower bound tighter, thus producing much more stable models.

In order to further justify these observations, we train two linear classifiers with the second-to-top fully connected layer activations from the discriminator of both models (1204 dimensional), for k = 1, 3; the results are shown in Table 1. We see that thanks to the bounded multi-modal energy, VGAN is able to benefit from more generator updates. GAN, on the other hand, fails to learn discriminative features, despite the appealing visual quality of its generations when k = 3. This also verifies our hypothesis discussed in Section 5, as the uni-modal nature of GAN discourages it from learning discriminative features at the top layer of its discriminator."}, {"section_index": "6", "section_name": "7.2 LEARNING WITH VCD", "section_text": "In the next set of experiments, we evaluate the variational contrastive divergence of our model. We train our models on three datasets: MNIST, CIFAR-10, and SVHN, with 50,000, 40,000, and 60,000 training images, respectively. For each dataset, we train a VGAN with variational contrastive divergence, while varying the weight ρ in Equation 10 over the range {0, 0.001, 0.01, 0.1, 1}. Note that in the extreme case ρ = 0, VGAN degrades to training an EBM with negative samples obtained from an autoencoder. In the other extreme case ρ = 1, the transition distribution p_z(x̃|x) is not constrained to be centered around x, and is roughly equal to a regular VGAN. We set the dimensionality of h, m, z to be 256 for MNIST, and 2048 for CIFAR-10 and SVHN, and use tanh as the bottleneck nonlinearity (ReLU is used for all other layers except for the top layer of the autoencoder, which uses sigmoid). p_z(z) is set to be a uniform distribution over [-1, 1], which matches the magnitudes of h. The training protocol is the same as that described in Section 7.1, except that we use k = 1 throughout this set of experiments for computational reasons.

²Caution should be taken when attempting to apply batch normalization to the energy (discriminator). An incorrect approach is to apply batch normalization to the real data batch and the generated batch separately, which essentially makes E(·) different for real and generated data.

Figure 2: Samples from GAN (left) and generations of VGAN (right), with the same architecture. The first row corresponds to updating the generator one step at each iteration, and the second row corresponds to updating the generator three steps at each iteration.

We first study the effect of varying ρ by looking at the MNIST examples in Figure 3. The first to third rows correspond to ρ = 0, 0.01, 1, respectively. The first to third columns correspond to
validation samples, reconstructions, and conditional generations, respectively. We see from the first row (which amounts to an unregularized autoencoder) that the generator fails to generate realistically looking images. The third row is able to generate realistic images conditioned on a sample, but there is no resemblance between the generation and the sample conditioned on. The second row, on the other hand, is able both to reconstruct the input sample and to generate realistically looking samples with the transition operator, with notable differences between the input and the generation. We have also observed similar trends in the SVHN and CIFAR-10 results in Figure 4, where only ρ = 0.01 is shown due to space concerns.

We can also simulate a Markov Chain with the learned transition distribution, and we visualize the results on MNIST and SVHN in Figure 5. We see that the learned transition distribution can smoothly vary the style, type, color, etc. of the digits. Also note that the transitions are not restricted to the Euclidean neighborhood of the samples conditioned on; for example, changing colors should result in a large distance in the input space, which our transition operator does not have difficulty exploring.

Finally, as a quantitative evaluation of the learned transition distribution, we attempt to use the generated conditional samples as data augmentation on MNIST and SVHN³. To be concrete, for each dataset we train two additional CNNs enhanced with batch normalization, dropout, and input Gaussian noising. We then minimize the following loss function:

(1/N) Σ_{i=1}^{N} L(x^i, y^i) + 0.5 · (1/N) Σ_{i=1}^{N} E_{x̃^i ~ p_z(x̃|x^i)}[L(x̃^i, y^i)]    (11)

For each dataset we train on the first 1000 training images, and use the validation set to select the best model; we then report the test error of the different configurations. The results are summarized in Table 2. We see that on both datasets, with a properly chosen ρ, the generator is able to provide good generations that improve learning. On the other hand, ρ = 0, which corresponds to sampling from an autoencoder, hurts performance. ρ = 1 completely disrupts training, as the generated samples are not guaranteed to have the same label as the samples conditioned on. This shows that our transition distribution is able to generate samples that are sufficiently different from the training images to boost performance. Although these numbers are by no means state-of-the-art results, we consider them significant as a proof of concept, because our baseline models are already heavily regularized with dropout and feature noising, which can be considered as data-agnostic data augmentation. Also note that there is much room for improvement, e.g., by adjusting the weights between the two terms in Equation 11, or by tuning the architectures of the energy model, the generator, and the classifier.

Figure 3: Visualization of x, x̄, and x̃ for ρ = 0, 0.01, 1 on MNIST. The first to third rows correspond to ρ = 0, 0.01, 1, respectively. The first to third columns correspond to samples from the validation set x, reconstructions of the samples x̄, and the generated samples x̃.

³We are not able to obtain reasonable results on CIFAR-10, as our EBM suffers from noticeable underfitting, identified by the large reconstruction errors in Figure 4.

Figure 5: Simulating a Markov Chain with p_z(x̃|x). We show 30 and 28 images from the validation sets of MNIST and SVHN in the first row of each panel, respectively, followed by 9 Gibbs sampling steps. Note the smooth transition of digit types, shapes, and/or colors.
Figure 4: Visualization of x, x̄, and x̃ for ρ = 0.01 on SVHN and CIFAR-10. The first to third columns correspond to samples from the validation set x, reconstructions of the samples x̄, and the generated samples x̃.

Table 2: Semisupervised learning error rates obtained by using the learned transition distribution for data augmentation.

model              MNIST-1000   SVHN-1000
No augmentation        2.2          19
VCD (ρ = 0)            2.9          26
VCD (ρ = 0.001)        2.0          20
VCD (ρ = 0.01)         1.7          18
VCD (ρ = 0.1)          1.9          17
VCD (ρ = 1)            21           37

There has also been a long standing interest in EBMs and deep generative models in the machine learning community, such as deep Boltzmann machines and deep belief networks (Salakhutdinov & Hinton; Hinton et al., 2006). The contribution of our framework in this respect is to propose a scalable training method that eliminates the need for MCMC sampling. Variational inference has also been well studied in the literature, but most successfully in dealing with deep directed graphical models such as DBMs (Salakhutdinov & Hinton) and the variational autoencoder (Kingma & Welling, 2013), where typically variational upper bounds on the NLL are derived, instead of the lower bound in our work. Minimizing a variational lower bound is obviously more difficult to work with, as if the bound is not tight enough, there is no guarantee that the original NLL is minimized.

Our variational contrastive divergence is also related to GSNs (Thibodeau-Laufer et al., 2014), as both model a transition probability. However, GSNs adopt a transition distribution of the form p(x|x̃), where x̃ is produced by adding simple noise to training samples. This essentially limits the space of sampling to a Gaussian neighborhood of the training examples, which our model does not assume. VCD is also related to the adversarial autoencoder (Makhzani et al., 2015), as both include an autoencoder module, but with fundamental differences: the use of the autoencoder in our work is part of, and improves, the EBM/GAN, while Makhzani et al. (2015) requires another GAN besides the autoencoder.

There has been a recent surge of work on improving GANs (Radford et al., 2015; Salimans et al., 2016; Zhao et al., 2016; Kim & Bengio, 2016). Radford et al. (2015) propose a set of techniques to stabilize GANs, including using batch normalization, dropping pooling layers, reducing the learning rate, and using strided convolutions, but there is little justification for the proposed designs. Our framework, however, directly addresses the two most important issues, the energy parameterization and the entropy approximation, and allows the freedom of using the most conventional designs such as pooling and ReLU.
Salimans et al. (2016) propose several tricks to enhance stability. For example, the proposed batch discrimination is similar in nature to our energy design, but with much higher complexity. Kim & Bengio (2016) and Zhao et al. (2016) are the two most directly related efforts that connect GANs with EBMs. However, our work is, to the best of our knowledge, the first to identify the nature of the variational training of EBMs and to provide practical solutions from this view at the same time.

We have proposed VGANs, a family of methodologies to train deep EBMs with an auxiliary variational distribution. We have drawn connections between deep EBMs and GANs, and proposed practical solutions to stabilizing training. We show that our proposed bounded multi-modal energy, combined with variational contrastive divergence, works well on generating realistically looking images and on recovering the data manifold by simulating a Markov Chain. We have also utilized the learned transition distributions to perform data augmentation in the context of semisupervised learning, and shown consistent improvements."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002a.

Geoffrey Hinton. A practical guide to training restricted Boltzmann machines. Momentum, 9(1):926, 2010.

Aapo Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626-634, 1999.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.

Yann LeCun, Sumit Chopra, and Raia Hadsell. A tutorial on energy-based learning. 2006.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Eric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep generative stochastic networks trainable by backprop. 2014.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371-3408, 2010.

Shuangfei Zhai, Yu Cheng, Weining Lu, and Zhongfei Zhang. Deep structured energy based models for anomaly detection. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 1100-1109, 2016.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016."}]
Hyvw0L9el
[{"section_index": "0", "section_name": "GENERATING INTERPRETABLE IMAGES WITH CONTROLLABLE E STRUCTURE", "section_text": "S. Reed, A. van den Oord, N. Kalchbrenner, V. Bapst, M. Botvinick, N. de Freitas Google DeepMind\n{reedscot, avdnoord, nalk, vbapst, botvinick, nandodefreitas}@google. con\nWe demonstrate improved text-to-image synthesis with controllable object loca tions using an extension of Pixel Convolutional Neural Networks (PixelCNN). Ir addition to conditioning on text, we show how the model can generate image conditioned on part keypoints and segmentation masks. The character-level tex encoder and image generation network are jointly trained end-to-end via maxi mum likelihood. We establish quantitative baselines in terms of text and structure conditional pixel log-likelihood for three data sets: Caltech-UCsD Birds (CUB) MPII Human Pose (MHP), and Common Objects in Context (MS-COCO).\nFigure 1: Examples of interpretable and controllable image synthesis. Left: MS-COCO, middle CUB, right: MHP. Bottom row shows segmentation and keypoint conditioning information..\nImage generation has improved dramatically over the last few years. The state-of-the-art images generated by neural networks in 2010, e.g. (Ranzato et al.l2010) were noted for their global structure and sharp boundaries, but were still easily distinguishable from natural images. Although we are far from generating photo-realistic images, the recently proposed image generation models using modern deep networks (van den Oord et al.2016cf Reed et al.]2016af Wang & Gupta]2016] Dinh et al.[2016f [Nguyen et al.2016) can produce higher-quality samples, at times mistakable for real.\nThree image generation approaches are dominating the field: generative adversarial networks (Goodfellow et al.]2014Radford et al. 2015] Chen et a1.[2016), variational autoencoders (Kingma & Welling2014] Rezende et al.2014 Gregor et al.|2015) and autoregressive models (Larochelle & Murray 2011 Theis & Bethge 2015 van den Oord et al.]2016b c). Each of these approaches have significant pros and cons, and each remains an important research frontier.\nResearchers have shown that it is possible to control and improve image generation by conditioning. on image properties, such as pose, zoom, hue, saturation, brightness and shape (Dosovitskiy et al. 2015] Kulkarni et al.]2015), part of the image (van den Oord et al.]2016b] Pathak et al.2016), surface normal maps (Wang & Gupta2016), and class labels (Mirza & Osindero2014) Van den Oord et al.1 2016c). It is also possible to manipulate images directly using editing tools and learned. generative adversarial network (GAN) image models (Zhu et al.2016)."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "A person on snow skis with a backpack. A white body and head with a bright orange A young girl is wearing a black ballerina outfit skiing down a mountain.. bill along with black coverts and rectrices.. and pink tights dancing. person person person beak tail beaktail beak tail head pelvishead pelvishead pelvis\nRealistic high-resolution image generation will impact media and communication profoundly. It will also likely lead to new insights and advances in artificial intelligence. Understanding how to control the process of composing new images is at the core of this endeavour.\nLanguage, because of its compositional and combinatorial power, offers an effective way of control. ling the generation process. 
Many recent works study the image-to-text problem, but only a handful have explored text-to-image synthesis. Mansimov et al. (2015) applied an extension of the DRAW model of Gregor et al. (2015), followed by a Laplacian Pyramid adversarial network post-processing step (Denton et al., 2015), to generate 32 × 32 images using the Microsoft COCO dataset (Lin et al., 2014). They demonstrated that by conditioning on captions while varying a single word in the caption, we can study the effectiveness of the model in generalizing to captions not encountered in the training set. For example, one can replace the word "yellow" with "green" in the caption "A yellow school bus parked in a parking lot" to generate blurry images of green school buses.

Reed et al. (2016a), building on earlier work (Reed et al., 2016b), showed that GANs conditioned on captions and image spatial constraints, such as human joint locations and bird part locations, enabled them to control the process of generating images. In particular, by controlling bounding boxes and key-points, they were able to demonstrate stretching, translation and shrinking of birds. Their results with images of people were less successful. Yan et al. (2016) developed a layered variational autoencoder conditioned on a variety of pre-specified attributes that could generate face images subject to those attribute constraints.

In this paper we propose a gated conditional PixelCNN model (van den Oord et al., 2016c) for generating images from captions and other structure. Pushing this research frontier is important for several reasons. First, it allows us to assess whether auto-regressive models are able to match the GAN results of Reed et al. (2016a). Indeed, this paper will show that our approach with auto-regressive models improves the image samples of people when conditioning on joint locations and captions, and can also condition on segmentation masks. Compared to GANs, training the proposed model is simpler and more stable because it does not require minimax optimization of two deep networks. Moreover, with this approach we can compute the likelihoods of the learned models. Likelihoods offer us a principled and objective measure for assessing the performance of different generative models, and for quantifying progress in the field.

Second, by conditioning on segmentations and captions from the Microsoft COCO dataset, we demonstrate how to generate more interpretable images from captions. The segmentation masks enable us to visually inspect how well the model is able to generate the parts corresponding to each segment in the image. As in (Reed et al., 2016a), we study compositional image generation on the Caltech-UCSD Birds dataset by conditioning on captions and key-points. In particular, we show that it is possible to control image generation by varying the key-points and by modifying some of the keywords in the caption, and observe the correct change in the sampled images.

Figure 2 illustrates autoregressive density modeling via masked convolutions, here simplified to the 1D case. At training time, the convolutional network is given the sequence x_{1:T} as both its input and target. The goal is to learn a density model of the form:

p(x_{1:T}) = ∏_{t=1}^{T} p(x_t | x_{1:t-1})    (1)

To ensure that the model is causal, the prediction of x_t must not depend on x_τ for τ ≥ t, while at the same time the training must be just as efficient as the training of standard convolutional networks¹. van den Oord et al. (2016c) introduce masked convolutions for this purpose. Figure 2 shows, in blue, the active weights of the convolutional filters after multiplying them by masks. The filters connecting the input to the first hidden layer are in this case multiplied by the mask m = (1, 1, 0, 0, 0). Filters in subsequent layers are multiplied by m = (1, 1, 1, 0, 0) without compromising causality. In our simple 1D example, if x_t is discrete, say x_t ∈ {0, ..., 255}, we obtain a classification problem, where the conditional density p(x_t | x_{1:t-1}) is learned by minimizing the cross-entropy loss. The depth of the network and the size of the convolutional filters determine the receptive field. For example, in Figure 2, the receptive field for x_t is x_{t-6:t-1}. In some cases, we may wish to expand the size of the receptive fields by using dilated convolutions (van den Oord et al., 2016a).

Figure 2: Auto-regressive modelling of a 1D signal with a masked convolutional network.

¹Obviously, this could be done in the 1D case by shifting the input, as in van den Oord et al. (2016a).
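A minimal NumPy sketch of the 1D masked convolutions just described (our illustration, with arbitrary random filters): the first-layer mask m = (1, 1, 0, 0, 0) sees only positions strictly to the left, subsequent layers use m = (1, 1, 1, 0, 0), and a perturbation test confirms causality.

```python
import numpy as np

def masked_conv1d(x, w, mask):
    """'Same' 1D convolution with the filter multiplied by a causal mask."""
    w = w * mask
    k = len(w)
    pad = k // 2
    xp = np.pad(x, (pad, pad))
    return np.array([xp[i:i + k] @ w for i in range(len(x))])

rng = np.random.RandomState(0)
x = rng.randn(12)
w1, w2 = rng.randn(5), rng.randn(5)

mask_first = np.array([1, 1, 0, 0, 0.])   # first layer: strictly left of x_t
mask_later = np.array([1, 1, 1, 0, 0.])   # later layers: may include position t

def net(x):
    return masked_conv1d(masked_conv1d(x, w1, mask_first), w2, mask_later)

y = net(x)

# Causality check: perturbing x_6 must not change the outputs at positions <= 6.
x2 = x.copy()
x2[6] += 100.0
print(np.allclose(net(x2)[:7], y[:7]))    # True
```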
van den Oord et al. (2016c) apply masked convolutions to generate colour images. For the input to the first hidden layer, the mask is chosen so that only pixels above and to the left of the current pixel can influence its prediction (van den Oord et al., 2016c). For colour images, the masks also ensure that the three colour channels are generated by successive conditioning: blue given red and green, green given red, and red given only the pixels above and to the left, of all channels.

The conditional PixelCNN model (Fig. 3) has several convolutional layers, with skip connections so that the outputs of each layer feed into the penultimate layer before the pixel logits. The input image is first passed through a causal convolutional layer and duplicated into two activation maps, v and h. These activation maps have the same width and height as the original image, say N × N, but a depth of f instead of 3, as the layer applies f filters to the input. van den Oord et al. (2016c) introduce two stacks of convolutions, vertical and horizontal, to ensure that the predictor of the current pixel has access to all the pixels in rows above; i.e. blind spots are eliminated.

Figure 3: PixelCNN with text and structure conditioning variables.

In the vertical stack, a masked N × N convolution is efficiently implemented with a 1 × N convolution with f filters followed by a masked N × 1 convolution with 2f filters. The output activation maps are then sent to the vertical and horizontal stacks. When sending them to the horizontal stack, we must shift the activation maps, by padding with zeros at the bottom and cropping the top row, to ensure that there is no dependency on pixels to the right of the pixel being predicted. Continuing on the vertical stack, we add the result of convolving 2f convolutional filters. Note that since the vertical stack is connected to the horizontal stack, and hence to the output, via a vertical shift operator, it can afford to look at all pixels in the current row of the pixel being predicted. Finally, the 2f activation maps are split into two activation maps of depth f each and passed through a gating tanh-sigmoid nonlinearity (van den Oord et al., 2016c).

The shifted activation maps passed to the horizontal stack are convolved with masked 1 × 1 convolutions and added to the activation maps produced by applying a masked 1 × N horizontal convolution to the current input row. As in the vertical stack, we apply gated tanh-sigmoid nonlinearities before sending the output to the pixel predictor via skip-connections. The horizontal stack also uses residual connections (He et al., 2016). Finally, outputs v' and h' become the inputs to the next layer.
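The gating nonlinearity used in both stacks can be written in a few lines; the following NumPy sketch (the shapes here are assumptions) splits 2f activation maps into two halves of depth f and applies the tanh-sigmoid gate.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gated_activation(a):
    """Split 2f feature maps into two halves of depth f and gate them:
    out = tanh(a_1) * sigmoid(a_2)."""
    f = a.shape[-1] // 2
    return np.tanh(a[..., :f]) * sigmoid(a[..., f:])

rng = np.random.RandomState(0)
a = rng.randn(32, 32, 2 * 128)        # 2f = 256 maps, e.g. from the N x 1 convolution
h = gated_activation(a)
print(h.shape, bool(h.min() >= -1 and h.max() <= 1))   # (32, 32, 128) True
```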
As shown in Figure 3, the version of the model used in this paper also integrates global conditioning information, text and segmentations in this example.

To encode location structure in images, we arrange the conditioning information into a spatial feature map. For MS-COCO this is already provided by the 80-category class segmentation. For CUB and MHP we convert the list of keypoint coordinates into a binary spatial grid. For both segmentations and keypoints in spatial format, the first processing layer is a class embedding lookup table, or equivalently a 1 × 1 convolution applied to a 1-hot encoding.

The text is first encoded by a character-CNN-GRU as in (Reed et al., 2016a). The averaged embedding (over the time dimension) of the top layer is then tiled spatially and concatenated with the location pathway. This concatenation is followed by several layers of dilated convolution. These allow information from all regions at multiple scales in the keypoint or segmentation map to be processed along with the text embedding, while keeping the spatial dimension fixed to the image size, using a much smaller number of layers and parameters compared to using non-dilated convolutions.

Figure 4: Encoding text and spatial structure for image generation. (The text pathway uses a sequential (GRU) encoding that is spatially tiled; dilated same-convolution layers merge the global text information with the local spatial information. Example caption: "a gray elephant standing next to a woman in a red dress.")

We trained the model on 32 × 32 images. The PixelCNN module used 10 layers with 128 feature maps. The text encoder reads character-level input, applying a GRU encoder and average pooling after three convolution layers. Unlike in Reed et al. (2016a), the text encoder is trained end-to-end from scratch for conditional image modeling. We used RMSprop with a learning rate schedule starting at 1e-4 and decaying to 1e-5, trained for 200k steps with a batch size of 128.

Keypoint annotations for CUB and MHP were converted to a spatial format of the same resolution as the image (e.g. 32 × 32), with a number of channels equal to the maximum number of visible keypoints. A "1" in row i, column j, channel k indicates the visibility of part k at entry (i, j) of the image, and "0" indicates that the part is not visible. Instance segmentation masks were re-sized to match the image prior to feeding into the network.
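The conversion from a keypoint list to the binary spatial grid described above, followed by the 1 × 1 embedding "convolution", might look as follows; the part ids, sizes, and embedding dimension in this sketch are hypothetical.

```python
import numpy as np

def keypoints_to_grid(keypoints, size, num_parts):
    """Convert visible keypoints [(part_id, row, col), ...] into a binary
    size x size x K tensor: a 1 at (row, col, part) marks that part visible."""
    grid = np.zeros((size, size, num_parts), dtype=np.float32)
    for part, i, j in keypoints:
        grid[i, j, part] = 1.0
    return grid

# Hypothetical bird annotation with part ids for beak (0) and tail (14).
kps = [(0, 5, 20), (14, 25, 8)]
grid = keypoints_to_grid(kps, size=32, num_parts=15)
print(grid.shape, int(grid.sum()))    # (32, 32, 15) 2

# A 1x1 "embedding" convolution then maps the K channels to f feature maps.
emb = np.random.RandomState(0).randn(15, 128)
features = grid @ emb                 # (32, 32, 128)
```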
These allow in- formation from all regions at multiple scales in the keypoint or segmentation map to be processed along with the text embedding, while keeping the spatial dimension fixed to the image size, using a much smaller number of layers and parameters compared to using non-dilated convolutions.\nThe MPII Human Pose dataset (MHP) has around 25K images of humans performing 410 different activities (Andriluka et al.]2014). Each person has up to 17 keypoints. We usec the 3 captions per image collected by (Reed et al.[2016a) along with body keypoint anno. tations. We kept only the images depicting a single person, and cropped the image centered around the person, leaving us 18K images. . The Caltech-UCSD Birds database (CUB) has 11,788 images in 200 species, with 10 cap tions per image (Wah et al.12011). Each bird has up to 15 keypoints. MS-COCO (Lin et al.|2014) contains 80K training images annotated with both 5 captions per image and segmentations. There are 80 classes in total. For this work we used class segmentations rather than instance segmentations for simplicity of the model.\nIn the following sections we demonstrate image generation results conditioned on text and both part. keypoints and segmentation masks. Note that some captions in the data contain typos, e.g. \"bird is read\"' instead of \"bird is red\", and were not introduced by the authors.\nIn this section we present results for MS-COCO, with a model conditioned on the class of object visible in each pixel. We also included a channel for background. Figure|5 shows several conditional samples and the associated annotation masks. The rows below each sample were generated by point- wise multiplying each active channel of the ground-truth segmentation mask by the sampled image Here we defined \"active' as occupying more than 1% of the image.\nEach group of four samples uses the same caption and segmentation mask, but the random seed is. allowed to vary. The samples tend to be very diverse yet still match the text and structure constraints Much larger examples are included in the appendix.\nThis laptop and monitor are The woman is riding her horse on A piece of cooked broccoli is on surrounded by many wires. the beach by the water. some cheese TV Person Broccoli Laptop Horse Pizza A person carrying their surfboard A bathroom with a vanity mirror next A young man riding a while walking along a beach to a white toilet. skateboard down a ramp. Person Toilet Person Surfboard Sink Skateboard Three men wearing black and ties A large cow walks over a fox in Two women in english riding stand and smile at something. the grass. outfits on top of horses. Person Dog Person . Tie Cow Horse\nFigure 5: Text- and segmentation-conditional general image samples\nWe observed that the model learns to adhere correctly to location constraints; i.e. the sampled images all respected the segmentation boundaries. The model can assign the right color to objects based or the class as well; e.g. green broccoli, white and red pizza, green grass. However, some foregrounc objects such as human faces appear noisy, and in general we find that object color constraints are not captured as accurately by the model as location constraints.\nWe observe that the model consistently associates keypoints to the apparent body part in the gen erated image; see \"beak\"' and \"tail' labels drawn onto the samples according to the ground-truth location. In this sense the samples are interpretable; we know what the model was meant to depici at salient locations. 
Also, we observe a large amount of diversity in the background scenes of each query, while pose remains fixed and the bird appearance consistently matches the text.\nA small sized bird with a yellow belly This is a colorful bird with a blue and Key- Key- Key- The head of the bird is read and the and black tipped head. green body and orange eyes. body is black and white. points points points beak beak beak beak tail beak tail beak tail beak tail beak beaktail beak tail beak tail beak tail .. This bird has a long, narrow, sharp A black nape contrasts the white plumage of The black bird looks like a crow has yellow beak ending in a black tip. this bird, who is stretching its wingspan. black beak, black wings and body. beak tail beak tail beak tail beak tail tail beak tail beak tail beak tailbeak beak tail beak tail beak tail beaktail A yellow bird with grey wings and a This small green bird with a thin curved A white body and head with a bright orang short and small beak. bill has a blue patch by its eyes. bill along with black coverts and rectrices. beak beak beak beak beak tail beak tail beak tail beak tail beak tail beak tail beak tail beaktail\nFigure 6: Text- and keypoint-conditiona1 bird samples (CUB data)\nChanging the random seed (within each block of four samples), the background details change. significantly. In some cases this results in unlikely situations, like a black bird sitting in the sky with wings folded. However, typically the background remains consistent with the bird's pose, e.g. including a branch for the bird to stand on..\nA man in a white shirt is holding a A man in a blue and white \"kuhl\" biker. A swimmer is at the bottom on a pool. Key- Key- Key- lacrosse racket. outfit is riding a bike up a hill. taking off his swim gear. points points points pelvishead pelvishead pelvishead pelvishead head head head head head head head head A woman in an orange shirt is standing. A man in a white button up shirt is standing. A man wearing a blue t shirt and shorts. in front of a sink behind an ironing board ironing a shirt.. is rock climbing. head pelvis head pelvis head pelvis head pelvis pelvishead pelvishead pelvishead pelvishead head pelvishead pelvishead pelvishead pelvis A man in a white shirt and black shorts holding a. A man in a blue hat is holding a shovel A man practices his ice skating, wearing tennis racket and ball about to hit it on a tennis court. in a dirt filled field. hockey boots, at an ice skating rink. pelvishead pelvishead pelvishead pelvishead head head head head pelvishead pelvishead pelvishead pelvishead\nFigure 7: Text- and keypoint-conditional samples of images with humans (MHP data)\nIn this section we show results on CUB and MHP, using bird and human part annotations. Figure|6 shows the results of six different queries, with four samples each. Within each block of four samples. the text and keypoints are held fixed. The keypoints are projected to 2D for visualization purposes. but note that they are presented to the model as a 32 32 K tensor, where K is the number of keypoints (17 for MHP and 15 for CUB).\nFigure 7|shows the same protocol applied to human images in the MHP dataset. This setting is probably the most difficult, because the training set size is much smaller than MS-COCO, but the. variety of poses and settings is much greater than in CUB. In most cases we see that the generated person matches the keypoints well, and the setting is consistent with the caption, e.g. in a pool,. outdoors or on a bike. 
However, producing the right color of specific parts, or generating objects associated to a person (e.g. bike) remain a challenge..\nWe found it useful to adjust the temperature of the softmax during sampling. The probability of drawing value k for a pixel with probabilities p is pT / , pT, where T is the inverse temperature.\nTable 1: Text- and structure-conditional negative log-likelihoods (nll) in nats/dim. Train, validatior and test splits include all of the same categories but different images and associated annotations\nHigher values for T makes the distribution more peaked. In practice we observed that larger 7 resulted in less noisy samples. We used T = 1.05 by default.\nIdeally, the model should be able to render any combination of of valid keypoints and text descriptior of a bird. This would indicate that the model has \"disentangled\"' location and appearance, and has. not just memorized the (caption, keypoint) pairs seen during training. To tease apart the influence of keypoints and text in CUB, in Figure 8|we show the results of both holding the keypoints fixed. while varying simple captions and fixing the captions while varying the keypoints..\nThis bird is bright yellow.. This bird is completely black This bird is bright red. This bird is completely green. This bird is bright blue.. This bird is all white..\nFigure 8: Columns: varying text while fixing pose. Rows (length 6): varying pose while fixing text Note that the random seed is held fixed in all samples.\nTo limit variation across captions due to background differences, we re-used the same random seed derived from each pixel's batch, row, column and color coordinates2l This causes the first few generated pixels in the upper-left of the image to be very similar across a batch (down columns in Figure[8), resulting in similar backgrounds\nIn each column, we observe that the pose of the generated birds satisfies the constraints imposed by the keypoints, and the color changes to match the text. This demonstrates that we can effec- tively control the pose of the generated birds via the input keypoints, and its color via the captions simultaneously. We also observe a significant diversity of appearance..\nHowever, some colors work better than others, e.g. the \"bright yellow\"' bird matches its caption, but. \"completely green' and \"all white\"' are less accurate. For example the birds that were supposed to be. white are shown with dark wings in several cases. This suggests the model has partially disentangled location and appearance as described in the text, but still not perfectly. One possible explanation is. that keypoints are predictive of the category of bird, which is predictive of the appearance (including. color) of birds in the dataset."}, {"section_index": "2", "section_name": "3.3 QUANTITATIVE RESULTS", "section_text": "Table[1shows quantitative results in terms of the negative log-likelihood of image pixels conditioned. on both text and structure, for all three datasets. For MS-COCO, the test negative log-likelihood is not included because the test set does not provide captions.\n2Implemented by calling np. random. 
The quantitative results show that the model does not overfit, suggesting that in future research a useful direction may be to develop higher-capacity models that are still memory- and computationally efficient to train."}, {"section_index": "3", "section_name": "3.4 COMPARISON TO PREVIOUS WORKS", "section_text": "Figure 9 compares to MHP results from Reed et al. (2016a). In comparison to the approach advanced in this paper, the samples produced by the Generative Adversarial What-Where Networks are significantly less diverse. Close inspection of the GAN image samples reveals many wavy artifacts, in spite of the conditioning on body-part keypoints. As the bottom row shows, these artifacts can be extreme in some cases.
[Figure 9: Comparison to Generative Adversarial What-Where Networks (Reed et al., 2016a). GAN samples have very low diversity, whereas our samples are all quite different.]"}, {"section_index": "4", "section_name": "4 DISCUSSION", "section_text": "In this paper, we proposed a new extension of PixelCNN that can accommodate both unstructured text and spatially-structured constraints for image synthesis. Our proposed model and the recent Generative Adversarial What-Where Networks can both condition on text and keypoints for image synthesis. However, these two approaches have complementary strengths. Given enough data, GANs can quickly learn to generate high-resolution and sharp samples, and are fast enough at inference time for use in interactive applications (Zhu et al., 2016). Our model, since it is an extension of the autoregressive PixelCNN, can directly learn via maximum likelihood. It is very simple, fast and robust to train, and provides principled and meaningful progress benchmarks in terms of likelihood.
We advanced the idea of conditioning on segmentations to improve both control and interpretability of the image samples. A possible direction for future work is to learn generative models of segmentation masks to guide subsequent image sampling. Finally, our results have demonstrated the ability of our model to perform controlled combinatorial image generation via manipulation of the input text and spatial constraints."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2D human pose estimation: New benchmark and state of the art analysis. In CVPR, pp. 3686-3693, 2014.
Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, pp. 1486-1494, 2015.
Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In CVPR, pp. 1538-1546, 2015.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In ICML, pp. 1462-1471, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, pp. 630-645, 2016.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In NIPS, 2016.
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. Preprint arXiv:1604.07379, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. Preprint arXiv:1511.06434, 2015.
Marc'Aurelio Ranzato, Volodymyr Mnih, and Geoffrey E. Hinton. Generating more realistic images using gated MRF's. In NIPS, pp. 2002-2010, 2010.
Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. In NIPS, 2016a.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text-to-image synthesis. In ICML, pp. 1060-1069, 2016b.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pp. 1278-1286, 2014.
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. Preprint arXiv:1609.03499, 2016a.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pp. 2672-2680, 2014.
Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network. In NIPS, pp. 2539-2547, 2015.
Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. In ICLR, 2015.
Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. In NIPS, 2016c.
Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, pp. 318-335, 2016.
Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2Image: Conditional image generation from visual attributes. In ECCV, pp. 776-791, 2016.
Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In ECCV, pp. 597-613, 2016.
[Appendix figures: additional qualitative samples. Text-conditional MS-COCO samples (e.g. "this laptop and monitor are surrounded by many wires", "the woman is riding her horse on the beach by the water"); text- and keypoint-conditional CUB samples (e.g. "this bird has a long, narrow, sharp yellow beak ending in a black tip"); and text- and keypoint-conditional MHP samples (e.g. "a man in a blue shirt is throwing a brown football on a field", "a man in khaki fishing gear and waders has caught a fish"). Keypoint overlays omitted.]"}]
SJ25-B5eg
[{"section_index": "0", "section_name": "THE NEURAL NOISy CHANNEL", "section_text": "ei.yu@cs.ox.ac.uk, {pblunsom, cdyer, etg, tkocisky}@google. com\nWe formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models. Unlike direct models which can suffer from explaining-away effects dur ing training, noisy channel models must produce outputs that explain their inputs and their component models can be trained with not only paired training samples but also unpaired samples from the marginal output distribution. Using a latent variable to control how much of the conditioning sequence the channel model needs to read in order to generate a subsequent symbol, we obtain a tractable and effective beam search decoder. Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Beyond their data omnivorousness, noisy channel models have other benefits. First, the two com. ponent models mean that two different aspects of the transduction problem can be addressed inde. oendently. For example, in many applications, source models are language models and innovations. in these can be leveraged to obtain improvements in any system that uses them as a component. Second, the component models can have complementary strengths, since inference is carried ou. in the product space; this simplifies design because a single model does not have to get everything. oerfectly right. Third, the noisy channel operates by selecting outputs that both are a priori likel and that explain the input well. This addresses a failure mode that can occur in conditional models. in which inputs are \"explained away\" by highly predictive output prefixes, resulting in poor training. (Klein & Manning] 2001). Since the noisy channel formulation requires its outputs to explain the. observed input, this problem is avoided..\nIn principle, the noisy channel decomposition is straightforward; however, in practice, decoding (i.e., computing arg maxy p(x y)p(y)) is a significant computational challenge, and tractability. concerns impose restrictions on the form the component models can take. To illustrate, an appealing parameterization would be to use an attentional seq2seq network (Bahdanau et al.]2015) to model\nWork completed at DeepMind"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Recurrent neural network sequence to sequence models (Kalchbrenner & Blunsom]2013fs Sutskever et al.2014]Bahdanau et al.2015) are excellent models of p(output sequence y input sequence x), provided sufficient input-output (x, y) pairs are available for estimating their parameters. However,. n many domains, vastly more unpaired output examples are available than input-output pairs (e.g.. ranscribed speech is relatively rare although non-spoken texts are abundant; Swahili-English trans ations are rare although English texts are abundant; etc.). A classic strategy for exploiting both kinds of data is to use Bayes' rule to rewrite p(y | x) as p(x | y)p(y)/p(x), a factorisation which is. called a noisy channel model (Shannon1948). A noisy channel model thus consists of two compo-. 
the channel probability p(x | y). However, seq2seq models are designed under the assumption that the complete conditioning sequence is available before any prefix probabilities of the output sequence can be computed. This assumption is problematic for channel models since it means that a complete output sequence must be constructed before the channel model can be evaluated (since the channel model conditions on the output). Therefore, to be practical, the channel probability must decompose in terms of prefixes of the conditioning variable, y. While the chain rule justifies decomposing output variable probabilities in terms of successive extensions of a partial prefix, no such convenience exists for conditioning variables, and approximations must be introduced.
In this work, we use a variant of the newly proposed online seq2seq model of Yu et al. (2016), which uses a latent alignment variable to enable its probabilities to factorize in terms of prefixes of both the input and output, making it an appropriate channel model (§2). Using this channel model, the decoding problem then becomes similar to the problem faced when decoding with direct models (§3). Experiments on abstractive summarization, machine translation, and morphological inflection show that the noisy channel can significantly improve performance and exploit unpaired output training samples, and that models that combine the direct model and a noisy channel model offer further improvements still (§4).
Our model is based on the Segment to Segment Neural Transduction model (SSNT) of Yu et al. (2016). At a high level, the model alternates between encoding more of the input sequence and decoding output tokens from the encoded representation. This presentation deviates from Yu et al.'s presentation so as to emphasize the incremental construction of the conditioning context that is enabled by the latent variable."}, {"section_index": "3", "section_name": "2.1 MODEL DESCRIPTION", "section_text": "Similar to other neural sequence to sequence models, SSNT models the conditional probability p(y | x) of an output sequence y given an input sequence x:

p(y | x) = \sum_z p(y, z | x)

p(y, z | x) = \prod_{j=1}^{|y|} \underbrace{p(z_j | z_{j-1}, x_1^{z_j}, y_1^{j-1})}_{\text{alignment probability}} \; \underbrace{p(y_j | x_1^{z_j}, y_1^{j-1})}_{\text{word probability}}    (1)

We explain the model in terms of its two components, starting with the word generation term. In the SSNT, the input and output sequences x, y are encoded with two separate LSTMs (Hochreiter & Schmidhuber, 1997), resulting in sequences of hidden states representing prefixes of these sequences. In Yu et al.'s formulation, the input sequence encoder (i.e., the conditioning context encoder) can either be a unidirectional or bidirectional LSTM, but here we assume that it is a unidirectional LSTM, which ensures that it will function well as a channel model that can compute probabilities with incomplete conditioning contexts (this is necessary since, at decoding time, we will be constructing the conditioning context incrementally). Let h_i represent the input sequence encoding for the prefix x_1^i. Since the final action at timestep j will be to predict y_j, it is convenient to let s_j denote the hidden state that excludes y_j, i.e., the encoding of the prefix y_1^{j-1}.
The probability of the next token y_j is calculated by concatenating the aligned hidden state vectors s_j and h_{z_j}, followed by a softmax layer:

p(y_j | x_1^{z_j}, y_1^{j-1}) \propto \exp(W_w [h_{z_j}; s_j] + b_w)
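As a concrete toy illustration of the word probability above, the following numpy sketch concatenates the input-prefix encoding h_{z_j} and output-prefix encoding s_j and applies an affine softmax layer; the dimensions and function name are our own stand-ins, not the paper's code:

import numpy as np

def word_distribution(h_zj, s_j, W_w, b_w):
    # p(y_j | x_1^{z_j}, y_1^{j-1}) ∝ exp(W_w [h_{z_j}; s_j] + b_w):
    # concatenate the two prefix encodings, map to vocabulary logits,
    # and normalise with a softmax.
    logits = W_w @ np.concatenate([h_zj, s_j]) + b_w
    logits -= logits.max()                     # numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
h_zj, s_j = rng.normal(size=4), rng.normal(size=4)   # toy hidden size 4
W_w, b_w = rng.normal(size=(6, 8)), np.zeros(6)      # toy vocabulary size 6
p = word_distribution(h_zj, s_j, W_w, b_w)           # sums to 1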
To avoid having to observe the complete input sequence x before making a prediction of the beginning of the output sequence, we introduce a latent alignment variable z which indicates when each token of the output sequence is to be generated as the input sequence is being read. Since we assume that the input is read just once from left to right, we restrict z to be a monotonically increasing alignment (i.e., z_{j+1} >= z_j holds with probability 1), where z_j = i denotes that the output token at position j (y_j) is generated when the input sequence up through position i has been read; the SSNT model then factorises as in Eq. 1.
We now discuss how the sequence of z_j's is generated. First, we remark that modelling this distribution requires some care so as to avoid conditioning on the entire input sequence. To illustrate why one might induce a dependency on the entire input sequence in this model, it is useful to compare to a standard attention model. Attention models operate by computing a score using a representation of each alignment candidate (in our case, the candidates would be every unread token remaining in the input). If we followed this strategy, it would be necessary to observe the full input sequence when making the first alignment decision.
We instead model the alignment transition from timestep j to j+1 by decomposing it into a sequence of conditionally independent SHIFT and EMIT operations that progressively decide whether to read another token or stop reading. That is, at input position i, the model decides to EMIT, i.e., to set z_j = i and predict the next output token y_j from the word model, or it decides to SHIFT, i.e., to read one more input token and increment the input position i <- i + 1. The probability of EMIT is calculated using the encoder and decoder states defined above as:

p(a_{i,j} = \mathrm{EMIT} | x_1^i, y_1^{j-1}) = \sigma(\mathrm{MLP}(W_t [h_i; s_j] + b_t))

The probability of SHIFT is simply 1 - p(a_{i,j} = EMIT). In this formulation, the probabilities of aligning z_j to each alignment candidate i can be computed by reading just x_1^i (rather than the entire sequence). Using the probabilities of the auxiliary a_{i,j} variables, the alignment probabilities needed in Eq. 1 are computed as:

p(z_j = i | z_{j-1}, y_1^{j-1}, x_1^i) = \begin{cases} 0 & \text{if } i < z_{j-1} \\ p(a_{i,j} = \mathrm{EMIT}) & \text{if } i = z_{j-1} \\ \big( \prod_{i'=z_{j-1}}^{i-1} p(a_{i',j} = \mathrm{SHIFT}) \big)\, p(a_{i,j} = \mathrm{EMIT}) & \text{if } i > z_{j-1} \end{cases}

In SSNT, the probability of generating each y_j depends only on the current output position's alignment (z_j), the current output prefix (y_1^{j-1}), and the input prefix up to the current alignment (x_1^{z_j}). It does not depend on the history of the alignment decisions. Likewise, the alignment decisions at each position are also conditionally independent of the history of alignment decisions. Because of these independence assumptions, z can be marginalised using an O(|x|^2 |y|) time dynamic programming algorithm in which we fill in a chart with the following marginal probabilities:

\alpha(i, j) = p(z_j = i, y_1^j | x) = \Big( \sum_{i'=1}^{i} \alpha(i', j-1)\, \underbrace{p(z_j = i | z_{j-1} = i', x_1^i, y_1^{j-1})}_{\text{alignment probability}} \Big) \underbrace{p(y_j | x_1^i, y_1^{j-1})}_{\text{word probability}}
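The chart computation can be sketched as follows, assuming numpy and toy stand-in arrays for the EMIT and word probabilities (in a real model these would come from the trained networks); this is the naive form of the recursion above, and the convention for the first output token is our assumption:

import numpy as np

def forward_chart(emit, word):
    # Fill alpha(i, j) = p(z_j = i, y_1^j | x). `emit[i, j]` stands in
    # for p(a_{i,j} = EMIT) and `word[i, j]` for p(y_j | x_1^i, y_1^{j-1});
    # the SHIFT probability is 1 - emit. Both arrays are |x| x |y|.
    I, J = emit.shape
    shift = 1.0 - emit
    alpha = np.zeros((I, J))
    for i in range(I):
        # First token: SHIFT through positions before i, then EMIT at i.
        alpha[i, 0] = np.prod(shift[:i, 0]) * emit[i, 0] * word[i, 0]
    for j in range(1, J):
        for i in range(I):
            total = 0.0
            for ip in range(i + 1):   # monotone alignments: i' <= i
                trans = np.prod(shift[ip:i, j]) * emit[i, j]
                total += alpha[ip, j - 1] * trans
            alpha[i, j] = total * word[i, j]
    return alpha   # log p(y | x) = log alpha[I-1, J-1] under this objective

rng = np.random.default_rng(1)
emit = rng.uniform(0.1, 0.9, size=(5, 3))
word = rng.uniform(size=(5, 3))
alpha = forward_chart(emit, word)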
The model is trained to minimize the negative log likelihood of the parallel corpus S:

\mathcal{L}(\theta) = -\sum_{(x,y) \in S} \log p(y | x; \theta) = -\sum_{(x,y) \in S} \log \alpha(|x|, |y|)    (2)

In this paper, we use a slightly different objective from the one described in Yu et al. (2016). Rather than marginalizing over the paths that end in any possible input position, \sum_{i=1}^{|x|} \alpha(i, |y|), we require that the full input be consumed when the final output symbol is generated. This constraint biases away from predicting outputs without explaining them using the input sequence.
The gradients of this objective with respect to the component probability models can be computed using automatic differentiation, or using a secondary dynamic program that computes 'backward' probabilities. We refer the reader to Section 3.1 of Yu et al. (2016) for details."}, {"section_index": "4", "section_name": "3 DECODING", "section_text": "We now turn to the problem of decoding, that is, of computing

\hat{y} = \arg\max_y p(x | y)\, p(y)

where we are using the SSNT model described in the previous section as the channel model, and a language model that delivers prior probabilities of the output sequence in left-to-right order, i.e. p(y_i | y_1^{i-1}).
Marginalizing the latent variable during search is computationally hard (Sima'an, 1996), and so we approximate the search problem as

\hat{y} = \arg\max_y \max_z p(x, z | y)\, p(y)

However, even with this simplification, the search problem remains nontrivial. On one hand, we must search over the space of all possible outputs with a model that makes no Markovian assumptions. This is similar to the decoding problem faced in standard seq2seq transducers. On the other hand, our model computes the probability of the given input conditional on the predicted output hypothesis. Therefore, instead of just relying on a single softmax to provide a probability for every output word type (as we conveniently can in the direct model), we must loop over each output word type, and run a softmax over the input vocabulary -- a computational expense that is quadratic in the size of the vocabulary!
To reduce this computational effort, we make use of an auxiliary direct model q(y, z | x) to explore probable extensions of partial hypotheses, rather than trying to perform an exhaustive search over the vocabulary each time we extend an item on the beam.
Algorithm 1 in Appendix A describes the decoding algorithm, based on a formulation by Tillmann et al. (1997). The idea is to create a matrix Q of partial hypotheses. Each hypothesis in cell (i, j) covers the first i words of the input (x_1^i) and corresponds to an output hypothesis prefix of length j (y_1^j). The hypothesis is associated with a model score. For each cell (i, j), the direct proposal model first calculates the scores of possible extensions of previous cells that could then reach (i, j), by considering every token in the output vocabulary, from all previous candidate cells (i', j-1). That gives the top K_1 partial output sequences. These partial output sequences are subsequently rescored by the noisy channel model, and the K_2 best candidates are kept in the beam and used for further extension. The beam sizes K_1 and K_2 are hyperparameters to be tuned in the experiments."}, {"section_index": "5", "section_name": "3.1 MODEL COMBINATION", "section_text": "The decoder we have just described makes use of an auxiliary decoding model. This means that, as a generalisation, it is capable of decoding under an objective that is a linear combination of the direct model, channel model, language model and a bias for the output length:²

O = \lambda_1 \log p(y | x) + \lambda_2 \log p(x | y) + \lambda_3 \log p(y) + \lambda_4 |y|

The bias is used to penalize the noisy channel model for generating too-short (or long) sequences. The \lambda's are hyperparameters to be tuned using a small amount of held-out development data.
²In the experiments, we did not marginalize the probability of the direct model when calculating the general search objective. We found that marginalizing the probability does not give better performance and makes decoding extremely slow.
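A toy sketch of this combined objective and the rescoring step (Python; the scoring functions are stand-ins for the trained direct, channel and language models, and the default lambda values are illustrative only, not the tuned ones):

def combined_score(y, x, direct_lp, channel_lp, lm_lp,
                   lams=(1.0, 1.0, 1.0, -0.1)):
    # lam1*log p(y|x) + lam2*log p(x|y) + lam3*log p(y) + lam4*|y|.
    # The *_lp arguments return log probabilities from the component
    # models; lams would be tuned on held-out development data.
    l1, l2, l3, l4 = lams
    return (l1 * direct_lp(y, x) + l2 * channel_lp(x, y)
            + l3 * lm_lp(y) + l4 * len(y))

def rerank(candidates, x, direct_lp, channel_lp, lm_lp, k2=10):
    # Keep the K_2 best candidates under the combined score, mirroring
    # the rescoring step of the beam-search decoder.
    scored = sorted(candidates, key=lambda y: combined_score(
        y, x, direct_lp, channel_lp, lm_lp), reverse=True)
    return scored[:k2]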
"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate our model on three natural language processing tasks: abstractive sentence summarisation, machine translation and morphological inflection generation. For each task, we compare the performance of the direct model, the noisy channel model, and the interpolation of the two models."}, {"section_index": "7", "section_name": "4.1 ABSTRACTIVE SENTENCE SUMMARISATION", "section_text": "Sentence summarisation is the problem of constructing a shortened version of a sentence while preserving the majority of its meaning. In contrast to extractive summarisation, which can only copy words from the original sentence, abstractive summarisation permits arbitrary rewording of the sentence. The dataset (Rush et al., 2015) that we use is constructed by pairing the first sentence and the headline of each article from the annotated Gigaword corpus (Graff et al., 2003; Napoles et al., 2012). There are 3.8m, 190k and 381k sentence pairs in the training, validation and test sets, respectively. Yu et al. (2016) filtered the dataset by restricting the lengths of the input and output sentences to be no greater than 50 and 25 tokens, respectively. From the filtered data, they further sampled 1 million sentence pairs for training. We experimented on training the direct model and channel model with both the sampled 1 million and the full 3.8 million parallel data. The language model is trained on the target side of the parallel data, i.e. the headlines. We evaluated the generated summaries of 2000 randomly sampled sentence pairs using full-length ROUGE F1. This setup is in line with the previous work on this task (Rush et al., 2015; Chopra et al., 2016; Gulcehre et al., 2016; Yu et al., 2016).
The same configuration is used to train the direct model and the channel model. The loss (Equation 2) is optimized by Adam (Kingma & Ba, 2015), with an initial learning rate of 0.001. We use LSTMs with 1 layer for both the encoder and decoders, with 256 hidden units. The mini-batch size is 32, and dropout of 0.2 is applied to the input and output of the LSTMs. For the language model, we use a 2-layer LSTM with 1024 hidden units and 0.5 dropout. The learning rate is 0.0001. All the hyperparameters are optimised via grid search on the perplexity of the validation set. During decoding, beam search is employed with the number of proposals generated by the direct model K_1 = 20, and the number of best candidates selected by the noisy channel model K_2 = 10.
Table 1 presents the ROUGE F1 scores on the test set from the direct model, the noisy channel model (channel + LM + bias), the interpolation of the direct model and the noisy channel model (direct + channel + LM + bias), and the interpolation of the direct model and language model (direct + LM + bias), trained on different sizes of data. The noisy channel model with the language model trained on the target side of the 1 million parallel data outperforms the direct model by approximately 1 point. Such improvement indicates that the language model helps improve the quality of the output sequence when no extra unlabelled data is available. Training the language model with all the headlines in the dataset, i.e. 3.8 million sentences, gives a further boost to the ROUGE score. This is in line with our expectation that the model benefits from adding large amounts of unlabelled data. The interpolation of the direct model, channel model, language model and bias of the output length achieves the best results - the ROUGE score is close to the direct model trained on all the parallel data. Although there is still improvement, when the direct model is trained with more data, the gap between the direct model and the noisy channel model is smaller. No gains are observed if the language model is combined with the direct model; we find that as we increase the weight of the language model, the results get worse.

Model                              | # Parallel data | # Data for LM | RG-1  | RG-2  | RG-L
direct (uni)*                      | 1.0m            | -             | 30.94 | 14.20 | 28.72
direct (bi)                        | 1.0m            | -             | 31.25 | 14.52 | 29.03
direct (bi)                        | 3.8m            | -             | 33.82 | 16.66 | 31.50
channel + LM + bias (uni)*         | 1.0m            | 1.0m          | 31.92 | 14.75 | 29.58
channel + LM + bias (bi)           | 1.0m            | 1.0m          | 31.96 | 14.89 | 29.51
direct + channel + LM + bias (uni) | 1.0m            | 1.0m          | 33.07 | 15.21 | 30.29
direct + channel + LM + bias (bi)  | 1.0m            | 1.0m          | 33.18 | 15.65 | 30.45
channel + LM + bias (uni)*         | 1.0m            | 3.8m          | 32.59 | 15.05 | 30.06
channel + LM + bias (bi)           | 1.0m            | 3.8m          | 32.65 | 14.95 | 30.23
direct + LM + bias (bi)            | 1.0m            | 3.8m          | 31.25 | 14.52 | 29.03
direct + channel + LM + bias (uni) | 1.0m            | 3.8m          | 33.16 | 15.63 | 30.53
direct + channel + LM + bias (bi)  | 1.0m            | 3.8m          | 33.21 | 15.65 | 30.60
channel + LM + bias (bi)           | 3.8m            | 3.8m          | 34.12 | 16.41 | 31.38
direct + LM + bias (bi)            | 3.8m            | 3.8m          | 33.82 | 16.66 | 31.50
direct + channel + LM + bias (bi)  | 3.8m            | 3.8m          | 34.41 | 16.86 | 31.83

Table 1: ROUGE F1 scores on the sentence summarisation test set. The 'uni' and 'bi' in the parentheses denote whether the encoder of the model proposing candidates is a unidirectional or a bidirectional LSTM. Rows marked with an * denote models that process their input online.
Table 2 surveys published results on this task, and places our best models in the context of the current state-of-the-art results. ABS+ (Rush et al., 2015), RAS-LSTM and RAS-Elman (Chopra et al., 2016) are different variations of the attentive models. Pointing the unknown words uses pointer networks (Vinyals et al., 2015) to select the output token from the input sequence in order to avoid generating unknown tokens. ASC + FSC (Miao & Blunsom, 2016) is a semi-supervised model based on a variational autoencoder.

Model                              | # Parallel data | # Unpaired data | RG-1  | RG-2  | RG-L
ABS+                               | 3.8m            | -               | 29.55 | 11.32 | 26.42
RAS-LSTM                           | 3.8m            | -               | 32.55 | 14.70 | 30.03
RAS-Elman                          | 3.8m            | -               | 33.78 | 15.97 | 31.15
Pointing unknown words             | 3.8m            | -               | 35.19 | 16.66 | 32.51
ASC + FSC                          | 1.0m            | 3.8m            | 31.09 | 12.79 | 28.97
ASC + FSC                          | 3.8m            | 3.8m            | 34.17 | 15.94 | 31.92
direct + channel + LM + bias (bi)  | 1.0m            | 3.8m            | 33.21 | 15.65 | 30.60
direct + channel + LM + bias (bi)  | 3.8m            | 3.8m            | 34.41 | 16.86 | 31.83

Table 2: Overview of results on the abstractive sentence summarisation task. ABS+ (Rush et al., 2015) is the attentive model with bag-of-words as the encoder. RAS-LSTM and RAS-Elman (Chopra et al., 2016) are the sequence to sequence models with attention, with the RNN cell implemented as LSTMs and an Elman architecture (Elman, 1990), respectively. Pointing the unknown words (Gulcehre et al., 2016) uses pointer networks (Vinyals et al., 2015) to select the output token from the input sequence in order to avoid generating unknown tokens. ASC + FSC (Miao & Blunsom, 2016) is the semi-supervised model based on a variational autoencoder.
Trained on 1m paired samples and 3.8m unpaired samples, the noisy channel achieves comparable or better results than (direct) models trained with 3.8m paired samples. Compared to Miao & Blunsom (2016), whose ASC + FSC model is an alternative strategy for using unpaired data, the noisy channel is significantly more effective - 33.21 versus 31.09 in ROUGE-1.
Finally, motivated by the qualitative observation that noisy channel model outputs were quite fluent and often used reformulations of the input rather than a strict compression (which would be poorly scored by ROUGE), we carried out a human preference evaluation whose results are summarised in Table 3. This confirms that noisy channel summaries are strongly preferred over those of the direct model.

Model                   | count
both bad                | 188
both good               | 106
direct > noisy channel  | 135
noisy channel > direct  | 212

Table 3: Preference ratings for 641 segments from the test set (each segment had ratings from at least 2 raters with > 50% agreement on the label and where one label had a plurality of the votes)."}, {"section_index": "8", "section_name": "4.2 MACHINE TRANSLATION", "section_text": "We next evaluate our models on a Chinese-English machine translation task. We used parallel data with 184k sentence pairs (from the FBIS corpus, LDC2003E14) and monolingual data with 4.3 million English sentences (selected from the English Gigaword). The training data is preprocessed by lowercasing the English sentences, replacing digits with the '#' token, and replacing tokens appearing less than 5 times with an UNK token. This results in vocabulary sizes of 30k and 20k for Chinese sentences and English sentences, respectively.
The models are trained using Adam (Kingma & Ba, 2015) with an initial learning rate of 0.001 for the direct model and the channel model, and 0.0001 for the language model. The LSTMs for the direct and channel models have 512 hidden units and 1 layer, and the language model uses 2 layers with 1024 hidden units per layer. Dropout of 0.5 on the input and output of the LSTMs is set for all model training. The noisy channel decoding uses K_1 = 20 and K_2 = 10 as the beam sizes.

Model                              | BLEU
seq2seq w/o attention              | 11.19
seq2seq w/ attention               | 25.27
direct (bi)                        | 23.33
direct + LM + bias (bi)            | 23.33
channel + LM + bias (bi)           | 26.28
direct + channel + LM + bias (bi)  | 26.44

Table 4: BLEU scores from different models for the Chinese to English machine translation task.
Table 4 lists the translation performance of different models in BLEU scores. To set benchmarks, we train the vanilla and attentional sequence to sequence models (Sutskever et al., 2014; Bahdanau et al., 2015) using the same parallel data. For direct models, we leverage bidirectional LSTMs as the encoder for this task. We can see that the vanilla sequence to sequence model behaves poorly due to the small amount of parallel data. By contrast, the direct model (SSNT) and the attentional model work relatively well, with the attentional model outperforming the SSNT direct model. Although these models both directly model p(y | x), this result is unsurprising because the SSNT direct model is most effective when the alignment between sequences is largely monotonic, and Chinese-English translation word orders diverge considerably. However, despite this limitation, the noisy channel model is approximately 3 points higher in BLEU than the direct model, and the combination of the noisy channel and direct model gives an extra boost.
Confirming the empirical findings of prior work (and in line with theoretical predictions), the interpolation of the direct model and language model is not effective."}, {"section_index": "9", "section_name": "4.3 MORPHOLOGICAL INFLECTION GENERATION", "section_text": "Morphological inflection is the task of generating a target (inflected form) word from a source word (base form), given a morphological attribute, e.g. number, tense, person, etc. It is useful for reducing data sparsity issues in translating morphologically rich languages. The transformation from the base form to the inflected form usually adds a prefix or suffix, or performs character replacement. The dataset (Durrett & DeNero, 2013) that we use in the experiments is created from Wiktionary, including inflections for German nouns, German verbs, Spanish verbs, Finnish nouns and adjectives, and Finnish verbs. We only experimented on German nouns and German verbs, as German nouns is the most difficult task, and the direct model does not perform as well as other state-of-the-art systems on German verbs. The train/dev/test split for German nouns is 2364/200/200, and for German verbs is 1617/200/200. There are 8 and 27 inflection types in German nouns and German verbs, respectively. Following previous work, we learn a separate model for each type of inflection, independent of the other inflections. We report results on the average accuracy across different inflections. Our language models were trained on word types extracted by running a morphological analysis tool on the WMT 2016 monolingual data and extracting examples of appropriately inflected word forms. After annotation, the number of instances for training the language model ranged from 300k to 3.8m for different inflection types in German nouns, and from 200 to 54k in German verbs.
The experimental setup that we use on this task is K_1 = 60, K_2 = 30.

(a) German nouns
Model                              | Acc.
NCK15                              | 88.60
FTND16                             | 88.12
NCK15+                             | 89.90
FTND16+                            | 89.31
direct (uni)                       | 82.25
direct (bi)                        | 87.68
channel + LM + bias (uni)          | 78.38
channel + LM + bias (bi)           | 78.13
direct + LM + bias (bi)            | 90.31
direct + channel + LM + bias (uni) | 88.44
direct + channel + LM + bias (bi)  | 90.94

Figure 1: Accuracy on morphological inflection of German nouns (a), and German verbs (b). NCK15 (Nicolai et al., 2015) and FTND16 (Faruqui et al., 2016) are the previous state of the art on this task, with NCK15 based on feature engineering, and FTND16 based on neural networks. NCK15+ and FTND16+ are the semi-supervised setups of these models."}, {"section_index": "10", "section_name": "5 ANALYSIS", "section_text": "By observing the output generated by the direct model and the noisy channel model, we find (in line with theoretical critiques of conditional models) that the direct model may leave out key information. By contrast, the noisy channel model does seem to avoid this issue. To illustrate, in Example 1 (see Appendix B, Table 5), the direct model ignores the key phrase 'coping with', resulting in incomplete meaning, but the noisy channel model covers it. Similarly, in Example 6, the direct model does not translate the Chinese word corresponding to 'investigation'. We also observe that while the direct model mostly copies words from the source sentence, the noisy channel model prefers generating paraphrases. For instance, in Example 2, while the direct model copies the word 'accelerate' in the generated output, the noisy channel model generates 'speed up' instead.
While one might argue that copying is a preferable compression technique to paraphrasing (as long as it produces grammatical outputs), it does show the power of these models.

(b) German verbs
Model                              | Acc.
NCK15                              | 97.50
FTND16                             | 97.92
NCK15+                             | 97.90
FTND16+                            | 97.11
direct (uni)                       | 87.85
direct (bi)                        | 94.83
channel + LM + bias (uni)          | 84.42
channel + LM + bias (bi)           | 92.13
direct + LM + bias (bi)            | 94.83
direct + channel + LM + bias (uni) | 92.20
direct + channel + LM + bias (bi)  | 97.15

Figure 1 summarises the results from our models. On both datasets, the noisy channel model (channel + LM + bias) does not perform as well as the direct model, but the interpolation of the direct model and noisy channel model (direct + channel + LM + bias) significantly outperforms the direct model. The interpolation of the direct model and language model (direct + LM + bias) achieves better results than the direct model and the noisy channel model on German nouns, but not on German verbs. For further comparison, we also included the state-of-the-art results as benchmarks. NCK15 (Nicolai et al., 2015) tackles the task with a three-stage approach: (1) align the source and target word, (2) extract inflection rules, (3) apply the rules to new examples. FTND16 (Faruqui et al., 2016) is based on neural sequence to sequence models. Both models (NCK15+ and FTND16+) rerank the candidate outputs by the scores predicted from n-gram language models, together with other features.
Noisy channel decompositions have been successfully used in a variety of problems, including speech recognition (Jelinek, 1998), machine translation (Brown et al., 1993), spelling correction (Brill & Moore, 2000), and question answering (Echihabi & Marcu, 2003). The idea of adding language models and monolingual data in machine translation has been explored in earlier work. Gulcehre et al. (2015) propose two strategies for combining a language model with a neural sequence to sequence model. In shallow fusion, during decoding the sequence to sequence model (direct model) proposes candidate outputs and these candidates are reranked based on the scores calculated by a weighted sum of the probability of the translation model and that of the language model. In deep fusion, the language model is integrated into the decoder of the sequence to sequence model by concatenating their hidden states at each time step. Sennrich et al. (2016) incorporate target-language unpaired training data by doing back-translation to create synthetic parallel training data. While this technique is quite effective, its practicality seems limited to problems where the inputs and outputs contain roughly the same information (such as translation). Cheng et al. (2016) leverage the abundant monolingual data by doing multitask learning with an autoencoding objective.
Our direct model (and channel model) shares the idea of introducing stochastic latent variables into neural networks with several papers, and of marginalising these during training. Examples include connectionist temporal classification (CTC) (Graves et al., 2006) and the more recent segmental recurrent neural networks (SRNN) (Kong et al., 2016). Compared to these models, our direct model has the advantage of capturing unbounded dependencies of output words. The direct model is closely related to the sequence transduction model (Graves, 2012) in the way of modeling the probability of predicting output tokens and marginalizing latent variables using dynamic programming. However,
rather than modeling the joint distribution over outputs and alignments by inserting null symbols into the output sequence, our direct model defines a separate latent alignment variable, with the alignment distribution defined with neural networks. Similar to our work, the model in Alkhouli et al. (2016) is decomposed into an alignment model and a model of word predictions. The two models are trained separately and combined during decoding, with subsequent refinements using a Viterbi-EM approximation. By contrast, in our direct and channel models, the latent and observed components of the models are trained jointly using a dynamic program to exactly marginalise the unobserved variables.
A number of papers have remarked on the tendency for content to get dropped (or repeated) in translation. Liu et al. (2016) propose translating in both a left-to-right and a right-to-left direction and seeking a consensus. Tu et al. (2016) propose augmenting a direct model's decoding objective with a reverse translation model (similar to our channel model except it conditions on the direct model's output RNN's hidden states rather than the words); however, that work just reranks complete translation hypotheses rather than developing a model that permits an incremental search."}, {"section_index": "11", "section_name": "7 CONCLUSION", "section_text": "We have presented and empirically validated a noisy channel transduction model that uses component models based on recurrent neural networks. This formulation lets us use unpaired outputs to estimate the parameters of the source model and input-output pairs to train the channel model. Despite the channel model's ability to condition on long sequences, we are able to maintain tractable decoding by using a latent segmentation variable that breaks the conditioning context up into a series of monotonically growing segments. Our experiments show that this model makes excellent use of unpaired training data."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Tamer Alkhouli, Gabriel Bretschner, Jan-Thorsten Peter, Mohammed Hethnawi, Andreas Guta, and Hermann Ney. Alignment-based neural machine translation. In Proc. Machine Translation, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proc. ICLR, 2015.
Eric Brill and Robert C. Moore. An improved error model for noisy channel spelling correction. In Proc. ACL, 2000.
Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263-311, 1993.
Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Semi-supervised learning for neural machine translation. In Proc. ACL, 2016.
Sumit Chopra, Michael Auli, and Alexander M. Rush. Abstractive sentence summarization with attentive recurrent neural networks. In Proc. NAACL, 2016.
Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. Morphological inflection generation using character sequence to sequence learning. In Proc. HLT-NAACL, 2016.
Alvin Grissom II, Jordan Boyd-Graber, He He, John Morgan, and Hal Daume III. Don't until the final verb wait: Reinforcement learning for simultaneous machine translation. In Proc. EMNLP, 2014.
Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. Learning to translate in real-time with neural machine translation. CoRR, abs/1610.00388, 2016.
Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Frederick Jelinek. Statistical Methods for Speech Recognition. MIT Press, 1998.
Dan Klein and Christopher D. Manning. Conditional structure versus conditional estimation in NLP models. In Proc. EMNLP, 2001.
Lingpeng Kong, Chris Dyer, and Noah A. Smith. Segmental recurrent neural networks. In Proc. ICLR, 2016.
Navdeep Jaitly, David Sussillo, Quoc V. Le, Oriol Vinyals, Ilya Sutskever, and Samy Bengio. A neural transducer. In Proc. NIPS, 2016.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proc. ICLR, 2015.
Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. Agreement on target-bidirectional neural machine translation. In Proc. NAACL, 2016.
Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. Inflection generation as discriminative string transduction. In Proc. HLT-NAACL, 2015.
Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proc. EMNLP, 2015.
Baskaran Sankaran, Ajeet Grewal, and Anoop Sarkar. Incremental decoding for phrase-based statistical machine translation. In Proc. WMT, 2010.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. In Proc. ACL, 2016.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Proc. NIPS, 2014.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proc. NIPS, 2015.
Lei Yu, Jan Buys, and Phil Blunsom. Online segment to segment neural transduction. In Proc. EMNLP, 2016.
Khalil Sima'an. Computational complexity of probabilistic disambiguation by means of tree grammars. In Proc. COLING, 1996.
[Algorithm 1 (Appendix A): beam search decoding. For each cell (i, j), the direct model proposes the top K_1 extensions, scored by Q[k, j-1] * q(z_j = i | z_{j-1} = k) * q(y | y_1^{j-1}, z_j, x_1^i) over output tokens y and previous cells (k, j-1); the resulting partial outputs are rescored by the noisy channel model, keeping the K_2 best in Q[i, j].]
Table 5: Example outputs on the test set from the direct model and noisy channel model for the summarisation task and machine translation."}]
Sk36NgFeg
[{"section_index": "0", "section_name": "FILLING IN THE DETAILS: PERCEIVING FROM LOW FIDELITY VISUAL INPUT", "section_text": "Farahnaz A. Wick\nUniversity of Massachusetts Boston. Boston, MA 02125\nHumans perceive their surroundings in great detail even though most of our vi. sual field is reduced to low-fidelity color-deprived (e.g., dichromatic) input by. the retina. In contrast, most deep learning architectures deploy computational. resources homogeneously to every part of the visual input. Is such a prodigal deployment of resources necessary? In this paper, we present a framework for in-. vestigating the extent to which connectionist architectures can perceive an image. in full detail even when presented with low acuity, distorted input. Our goal is. to initiate investigations that will be fruitful both for engineering better networks. and also for eventually testing hypotheses on the neural mechanisms responsible for our own visual system's ability to perceive missing information. We find that networks can compensate for low acuity input by learning global feature functions that allow the network to fill in some of the missing details. For example, the net works accurately perceive shape and color in the periphery, even when 75% of the. input is achromatic and low resolution. On the other hand, the network is prone to similar mistakes as humans; for example, when presented with a fully grayscale landscape image, it perceives the sky as blue when the sky is actually a red sunset."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Yet, even without resorting to overt shifts of attention, we still perceive the world in high detail. This is somewhat remarkable if you consider that the human retina receives an estimated 10 million. bits per second which far exceeds the computational resources available to our visual system to assimilate at any given time (Koch et al.]2006). Our own fovea takes up only 4% of the entire. retina (Michels & CP Rice[1990) and is solely responsible for sharp central full color vision with. maximum acuity; acuity which diminishes rapidly with eccentricity from the fovea. (Cowey &\nMichael L. Wick\nUniversity of Massachusetts Amherst. Amherst. MA 01003"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Most deep learning architectures process every visual input component when performing a task; for example, the input layer of many ImageNet architectures considers all pixels in every region of a pre. processed image when learning an image classifier or making classification decisions (Krizhevsky et al.]2012] Simonyan & Zisserman2014] Szegedy et al.]2014).In contrast, the human visual system has just a small fovea of high resolution chromatic input allowing it to more judiciously budget computational resources (Lenniel2003). In order to receive additional information in the. field of view, we make either covert or overt shifts of attention. Overt shifts of attention or eye- movements allow us to bring the fovea over particular locations in the environment that are relevant to current behavior. To avoid the serial nature of processing as demanded from overt shifts of attention, our visual system can also engage in covert shifts of attention in which the eyes remain fixated on one location but attention is deployed to a different location..\nRolls 1974). As a result, visual performance is best at the fovea and progressively worse towards the periphery (Low[1946). 
Indeed, our visual cortex is receiving distorted color-deprived visual input except for the central two degrees of the visual field (Hansen et al.2009). Additionally, we have blind spots in the retina that receive no visual input. Yet, we are mostly unaware of these distortions. Even when confronted with actual blurry or distorted visual input, our visual system is good at extracting the scene contents and context. For instance, our system can recognize faces and emotions expressed by those faces in resolutions as low as 16 x 16 pixels (Sinha et al.| 2006) We can reliably extract contents of a scene from the gist of an image (Oliva2005) even at low resolutions(Potter & Levy1969; Judd et al.]2011). Recently, Ullman et al.(Ullman et al.]2016) has shown that our visual system is capable of recognizing contents of images from critical feature configurations (called minimal recognizable images or MIRCs) that current deep learning systems cannot utilize for similar tasks. These MIRCS resemble foveations on an image and their results reveal that the human visual system employs features and processes that are not used by current deep networks. Similarly, little attention has been given to how these networks deal with distorted or noisy inputs. We draw inspiration from the abilities of the human visual system and propose a framework to study questions related to whether an artificial neural network can learn to perceive an image from low fidelity input.\nIn this paper, we want to understand what kind of information can be gleaned from low-fidelity inputs. What can be gleaned from a single foveal glimpse? What is the most predictive region of an image? We present a framework for studying such questions based on autoencoders. In contrast to traditional or denoising autoencoders (Vincent et al.] 2008), which learn to reconstruct the original image in the presence of random salt and pepper noise, our autoencoders attempt to reconstruct original high-detail inputs from systematically corrupted lower-detail foveated versions of those images (that is, images that are entirely low detail except perhaps a small \"fovea\"' of high detail). Thus, we have taken to calling them defoveating autoencoders (DFAE). We find that even relatively simple DFAE architectures are able to perceive color, shape and contrast information, but fail to recover high-frequency information (e.g., textures) when confronted with extremely impoverished input. Interestingly, as the amount of detail present in the input diminishes, the structure of the learnt features becomes increasingly global.\nCorrupting the input or hidden layers via noise to improve neural networks is an area of active. 2012 study(Bishop 1995 eCun et al. 1989 Vincent et al., 2010 Rifai et al. 2011 X1e et al. Schuler et al.| 2013). A highly related framework is denoising autoencoders in which the input. (or sometimes hidden layer) is corrupted and the network must reproduce the non-corrupted output. (Vincent et al.|[2010). However, the form of our input corruption is systematic, not random. Further,. the emphasis of our work is to understand to what extent a given architecture can perform perceptual. filling-in (Komatsu2006), our brain's ability to perceive content that is not explicitly present in our. visual input, from retina-like distorted inputs. We study this capability by varying the type of input. 
Note that one can consider a low-resolution image as yet another type of noisy input. Thus, another area of related work is image super-resolution, in which the goal is to learn a transform from a low-resolution image to a high-resolution image (Behnke, 2001; Cui et al., 2014; Dong et al., 2014), and also image denoising (Jain & Seung, 2009). These are exciting applications for deep learning, but again, our emphasis is on studying a specific set of scientific questions rather than engineering a solution to a specific image-cleaning problem.
There has been tremendous interest in applying attention to deep learning architectures (Larochelle & Hinton, 2010; Mnih et al., 2014; Bahdanau et al., 2014; Xu et al., 2015). Such work has led to improvements in tasks ranging from machine translation (Bahdanau et al., 2014) to image captioning (Xu et al., 2015). In many of these frameworks, attention interacts with the limited resources in various ways (either by sequentially directing attention (Mnih et al., 2014) or by allowing the network to recall relevant information from the past history (Xu et al., 2015)). Our work is complementary in that we aim to study the limited resource itself: in this work, a single foveal glimpse. We further suggest a way of extending our framework to incorporate attention, but we save investigation for future work."}, {"section_index": "3", "section_name": "FRAMEWORK: DEFOVEATING AUTOENCODERS (DFAE)", "section_text": "We now present a framework for studying the extent to which neural networks can "perceive" an image given various types of low-detail (or foveated) inputs. We begin by specifying a space of neural network architectures and by precisely defining a notion of perceives that we can measure. It is important that the framework is general and not dependent on a specific task such as image classification, in which, for example, the ability to learn domain-specific discriminating features might make it easy to solve the classification problem without fully modeling the structure of the input. This is undesirable because then we are unable to trust classification accuracy as a reliable surrogate for perceiving.
With this in mind, we focus instead on generative models of the raw input data itself, specifically autoencoders (AE). The AE's hidden units h are analogous to the intermediate neurons in our visual system that capture features and structure of the visual input. Similarly, the AE's weights W forge visual memories of the training set and are thus analogous to long-term memory. When these weights are properly trained, the activations of the hidden units reflect how the network is perceiving a novel input. However, since these units are not directly interpretable, we indirectly measure how well the network perceives by evaluating the similarity between the original and generated (high-detail) images: the more similar the images are, the better the network is able to perceive.
More formally, let x be the original input image and \tilde{x} = \phi(x) be a lower-detail foveated version of that image. That is, a version of the image which is mostly low-detail (e.g., downsampled, black-and-white, or both) except for possibly a small portion which is high-detail (mimicking our own fovea). For example, if we encode images as vectors of floats between 0 and 1 (reflecting pixel intensities in RGB or grayscale), then we might define a class of foveation functions as \phi : [0,1]^n -> [0,1]^m s.t. m < n, and the foveation function might downsample the original image according to the eccentricity from the image center while also removing most of the vector components corresponding to color. We then employ the autoencoder to defoveate \tilde{x} by generating a high-quality output image y = f(\tilde{x}; W) in which, for example, y \in [0,1]^n. Finally, we can then measure the similarity between y and x as (1) a surrogate for how well the network perceives from the foveated input and (2) part of a loss function to train the network.
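As one concrete instance of such a \phi, the numpy sketch below keeps a small high-resolution fovea at the image centre and replaces the periphery with a nearest-neighbour downsampled copy; for simplicity it returns an image of the original size rather than a shorter vector, and all names and defaults are our assumptions rather than the paper's code:

import numpy as np

def foveate(img, fovea=8, factor=4):
    # A toy phi: keep a (fovea x fovea) high-resolution patch at the
    # image centre; elsewhere, use a nearest-neighbour downsampled copy
    # (repeated back up to the original size). `img` is H x W in [0, 1].
    h, w = img.shape
    small = img[::factor, ::factor]                       # downsample
    out = np.repeat(np.repeat(small, factor, 0), factor, 1)[:h, :w]
    r0, c0 = (h - fovea) // 2, (w - fovea) // 2
    out[r0:r0 + fovea, c0:c0 + fovea] = img[r0:r0 + fovea, c0:c0 + fovea]
    return out

img = np.random.rand(28, 28)
x_tilde = foveate(img)   # detail survives only inside the fovea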
In summary, DFAEs simply comprise:
1. A foveation function that filters an original image by removing detail (color, downsampling, blurring, etc.). We will later make this the independent variable in our experiments so we can study the effect of different types of input distortion.
2. An autoencoder network that inputs the low-detail foveated input, but is trained to output the high-detail original image.
3. A loss function for measuring the quality of the reconstruction against the original image and for training the network.
Note that much like denoising autoencoders, these autoencoders reconstruct an original image from a corrupted version. However, the form of corruption is a systematic foveation instead of random noise (Vincent et al., 2008). Thus, we have termed these models defoveating autoencoders, or DFAEs.
In our experiments, we study DFAEs with fully connected layers. That is, DFAEs of the form

h^{(0)} = \tanh(W^{(0)} \tilde{x})    where \tilde{x} = \phi(x)
h^{(i)} = \tanh(W^{(i)} h^{(i-1)})    for i = 1, ..., k-1
y = \sigma(W^{(k)} h^{(k-1)})

where \sigma is the logistic function. The sigmoid in the final layer conveniently allows us to compare the pixel intensities between the generated image y and the original image x directly, without having to post-process the output values. We experiment with the number of hidden units per layer as well as the number of layers. For training, we could employ the traditional mean-squared error (MSE) or cross-entropy loss, but we found that the domain-specific loss function of peak signal-to-noise ratio (PSNR) yielded much better training behavior. The PSNR between a generated image y = f(\phi(x)) and its original input x is defined as follows:

L_H(x, y) = \log_{10} \frac{1}{\mathrm{MSE}(x, y)}    where \mathrm{MSE}(x, y) = \frac{1}{n} \sum_{i=1}^{n} (x_i - y_i)^2

Network parameters were initialized at random in the range [-0.1, 0.1] and the loss was minimized by stochastic gradient descent with AdaGrad updates (Duchi et al., 2011).
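A minimal numpy sketch of the fully connected DFAE forward pass and the PSNR-style score defined above; the weight shapes and the toy foveation are illustrative assumptions:

import numpy as np

def dfae_forward(x_tilde, weights):
    # Forward pass of the fully connected DFAE defined above:
    # tanh hidden layers followed by a logistic output layer.
    h = x_tilde
    for W in weights[:-1]:
        h = np.tanh(W @ h)
    return 1.0 / (1.0 + np.exp(-(weights[-1] @ h)))   # sigmoid output

def psnr_loss(x, y):
    # L_H(x, y) = log10(1 / MSE(x, y)); higher is better.
    mse = np.mean((x - y) ** 2)
    return np.log10(1.0 / mse)

rng = np.random.default_rng(0)
x = rng.random(784)                      # original 28x28 image, flattened
x_tilde = x[::4]                         # stand-in foveated input (m < n)
weights = [rng.normal(0, 0.1, (800, x_tilde.size)),   # 800 hidden units
           rng.normal(0, 0.1, (784, 800))]
y = dfae_forward(x_tilde, weights)
score = psnr_loss(x, y)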
The above architecture is useful for studying single foveations, which is the primary focus of this work. However, we remark that it is straightforward to augment DFAEs with recurrent connections to handle a sequence of foveations, similar to what has been done for solving classification tasks with attention (Mnih et al. 2014). First, augment the foveation function to include a locus l on which the fovea is centered. Second, a saccade function s(h_t; W_s) predicts such a locus from the DFAE's current hidden states h_t, and finally we make the hidden state recurrent via a function g(h_{t−1}; W_g). Putting this all together yields the following architecture:

x̃_t = φ(x_t, l_t)                     foveate the image at location l_t
h_t = f_e(g(h_{t−1}; W_g), x̃_t; W)     encode: compute new hidden states
l_t = s(h_t; W_s)                      compute new locus for the next foveation
y_t = f_d(h_t)                         decode: reconstruct the high-detail image

Now the DFAE can handle a sequence of foveations, making it possible to study the effects of overt attention and also explore the ability to learn from a sequence of foveations (like a language model) rather than a ground-truth image (the analog of which is not available to a human). However, interactions between attention and input acuity are beyond the scope of this work.

"}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "We are interested in the question of whether an artificial neural network can perceive an image from foveated input images, and we evaluate this ability by measuring how well the autoencoders can reconstruct a full-resolution image from the low-detail foveated input. In these experiments, we fix the architecture of our network to the family described in the previous section, vary the type of foveation, the number of hidden units and the number of layers, and study the learnt features and reconstruction accuracy. We address the following questions: Can the network perceive aspects of the image that are not present in the input? What can it perceive and under what conditions? Can the network perceive color in the periphery? How much capacity is required to perceive missing details? What is the nature of the network's learnt solution to the problem of filling-in?

"}, {"section_index": "5", "section_name": "4.1 FOVEATION FUNCTIONS", "section_text": "In our experiments, we study several different foveation functions. In many cases, downsampling is employed as part of the foveation function, for which we employ the nearest-neighbor interpolation algorithm. We chose nearest neighbor because it is especially brutal in that it does not smooth over neighboring pixels when interpolating. Foveation functions include the following (a code sketch of these transforms appears after the figure captions below):

• downsampled factor d (DS-D): no fovea is present; the entire image is uniformly downsampled by a factor of d using the nearest-neighbor interpolation method. For example, a factor of 4 transforms a 28x28 image to a 7x7 image, and approximately 94% of the pixels are removed. For color images we downsample each channel (RGB) separately, resulting in color distortion. We test factors of 2, 4, 7 (MNIST) and 2, 4, 8 (CIFAR).
• scotoma r (SCT-R): entire regions (r = 25%, 50% and 75%) of the image are removed (by setting the intensities to 0) to create a blind spot/region, but the rest of the image remains at the original resolution.
We experiment with the location of the scotoma (centered or not).
• fovea r (FOV-R): only a small fovea r of high resolution (r = 25% or 6%) is present; the rest of the image is downsampled by a factor of 4.
• achromatic r (ACH-R): only a region of size r has color; color is removed from the periphery by averaging the RGB channels into a single grayscale channel.¹
• fovea-achromatic r (FOVA-R): combines the fovea r with the achromatic region: only the foveated region is in color; the rest of the image is in grayscale and downsampled by a factor of 4.

¹Note that the achromatic periphery we study here is a more severe distortion than the human periphery, which has dichromatic color reception; though this varies from one individual to another.

[Figure 1 plots: reconstruction error of nearest, bilinear, bicubic interpolation and the DFAE at several downsampling factors, for MNIST, CIFAR100 color, and CIFAR100 grayscale.]

Figure 1: Baseline comparison for downsampled input (DS-D): DFAE on MNIST (resp. CIFAR100 grayscale, CIFAR100 color) with 800 hidden units (resp. 1100, 3100 hidden units).

[Figure 2 image grids: originals, downsampled inputs (factors 2, 4, and 7 or 8), bilinear upsampling, and DFAE reconstructions; panels (a) MNIST reconstruction, (b) CIFAR100 reconstruction.]

Figure 2: Downsampled (DS-D) MNIST and CIFAR100 image reconstruction by DFAE. The top row shows the original image. Each row shows the downsampled input that was used during training, followed by reconstruction images by the bilinear algorithm and DFAEs.

[Figure 3 visualizations of learned features at downsampling factors 2, 4, and 7.]

Figure 3: DFAE with 800 units on downsampled (DS-D) MNIST: features become increasingly global as the downsampling factor increases.

[Figure 4 plots: error rate versus number of hidden units for one- and two-layer DFAEs, in color and grayscale.]

Figure 4: Error rates of the MLP DFAE are not affected much by network depth or input color.
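As a rough illustration of the foveation functions listed in Section 4.1, the following NumPy sketch implements DS-D, SCT-R, ACH-R, and a FOV-R variant. Representing the mixed-resolution FOV-R input on the original pixel grid (downsample then nearest-neighbor upsample outside the fovea) is our own simplification; function names and the rectangular fovea parameterization are likewise illustrative assumptions.

```python
import numpy as np

def ds(img, d):
    # DS-D: uniform nearest-neighbor downsampling by factor d (no fovea).
    return img[::d, ::d]

def sct(img, top, left, h, w):
    # SCT-R: zero out a rectangular region to create a blind spot.
    out = img.copy()
    out[top:top + h, left:left + w] = 0.0
    return out

def ach(rgb, top, left, h, w):
    # ACH-R: grayscale periphery (channel average); color kept only in the fovea.
    gray = rgb.mean(axis=2, keepdims=True).repeat(3, axis=2)
    gray[top:top + h, left:left + w] = rgb[top:top + h, left:left + w]
    return gray

def fov(img, top, left, h, w, d=4):
    # FOV-R (simulated on the original grid): nearest-neighbor down/upsampling
    # outside the fovea, full resolution inside it.
    low = np.repeat(np.repeat(img[::d, ::d], d, axis=0), d, axis=1)
    low = low[:img.shape[0], :img.shape[1]]
    low[top:top + h, left:left + w] = img[top:top + h, left:left + w]
    return low
```

FOVA-R can be obtained by composing the last two transforms, applying ach() before fov() with the same fovea rectangle.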
"}, {"section_index": "6", "section_name": "4.2 DATASETS AND PRE-PROCESSING", "section_text": "We used two datasets in our experiments: MNIST and CIFAR100. The MNIST database consists of 28x28 handwritten digits and has a training set of 60,000 examples and a test set of 10,000 examples; each class therefore has 6,000 examples. The CIFAR100 dataset consists of 32x32 color images of 100 classes. Some examples of classes are: flowers, large natural outdoor scenes, insects, people, vehicles, etc. Each class has 600 examples. The training set consists of 50,000 images and the test set consists of 10,000 images. We trained DFAEs on the MNIST and CIFAR100 datasets (in grayscale and color). We normalized the datasets so that the pixel values are between 0 and 1 and, additionally, zero-centered them. This step corresponds to local brightness and contrast normalization. Aside from this, we do no other preprocessing such as patch extraction or whitening.

"}, {"section_index": "7", "section_name": "4.3 THE EFFECT OF INPUT ACUITY ON THE NETWORK", "section_text": "The purpose of this experiment is to study how various levels of input acuity affect the performance of the network for the case in which no fovea is available. In addition to input acuity (downsampling factors of 1, 2, 4, 7, 8), the other variables to consider are the number of hidden units per layer and the number of layers. In pilot experiments we determined that when the number of hidden units was less than the downsampled input size, DFAEs performed poorly; further, that, unlike CNNs for image classification, additional hidden layers (beyond two) did not improve performance much. Therefore, we report results with overcomplete representations of just one or two hidden layers. First, to establish baselines, we compare one of the networks to various interpolation-based upsampling methods available in image editing software (nearest-neighbor, bilinear, bicubic). The single-layer DFAE outperforms these standard algorithms on both datasets (see Figure 1).

For qualitative evaluation, Figure 2 contains examples of the reconstructed images by a single-layer DFAE. The images produced by the DFAE are compared to upsampled reconstructions via bilinear interpolation. Especially on MNIST, DFAEs can correctly extract the contents of a downsampled input even when 94% of pixels are removed. A compelling example is that even when faced with a blank input (due to aggressive nearest-neighbor downsampling), as seen in Figure 2a, the DFAE can correctly perceive the digit 1. Though in general, the performance of DFAEs suffered when the input was downsampled beyond a factor of 4. When the DFAE made predictions based on the extremely impoverished input, most of the reconstructions were incorrect. Yet, interestingly, the incorrect reconstructions still resembled digits. We hypothesize that this is due to the global feature functions² learnt by the network. Indeed, we observe in Figure 3 that for MNIST, as the downsampling factor increases, the global structure in the features also increases: when the input was downsampled by a factor of two (resp. four, seven), it was forced to learn stroke-like features (resp. full/partial digits, superimposed digits). A curiously similar result was observed by Vincent et al. (2010), where their denoising autoencoder learnt global structures when it was trained on randomly noisy inputs.

On the other hand, many filters learnt on CIFAR100 images were not directly interpretable, though in some cases the network learnt global features such as specific color gradients or locally circular blobs, which probably enabled it to reconstruct low-frequency shape information, color gist, and landscapes particularly well. The reconstructed natural images, as seen in Figure 2b, show that the DFAE learnt a smoothing and centering function. The DFAEs employed here could predict the shape of objects in the natural images but not the high-frequency details (e.g., texture).

²Each feature is the final output-layer weights corresponding to a particular hidden unit.
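As a rough illustration of the baseline comparison in Section 4.3, the sketch below scores a nearest-neighbor upsampling baseline by mean PSNR over a test set, mirroring the comparison in Figure 1. It reuses the psnr and dfae_forward helpers sketched earlier; the evaluation scaffolding and the divisibility assumption on the image side are our own.

```python
import numpy as np

def nn_upsample(low, d):
    # Nearest-neighbor interpolation baseline: repeat each pixel d times per axis.
    # Assumes the original image side is divisible by d (true for 28x28 with d=4).
    return np.repeat(np.repeat(low, d, axis=0), d, axis=1)

def evaluate(images, d, reconstruct):
    # Mean PSNR of `reconstruct` over a set of images under DS-D foveation.
    scores = []
    for img in images:
        low = img[::d, ::d]
        scores.append(psnr(img, reconstruct(low, d)))
    return float(np.mean(scores))

# e.g. evaluate(test_images, d=4, reconstruct=nn_upsample) versus a DFAE wrapper
# such as: lambda low, d: dfae_forward(params, low.ravel()).reshape(28, 28)
```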
"}, {"section_index": "8", "section_name": "4.4 RECONSTRUCTING FOVEATED INPUTS", "section_text": "Until now, we evaluated DFAEs on uniformly downsampled images, but we are especially interested in the case for which some high-resolution input is available, more closely resembling the retina. In this section, we evaluate DFAEs on foveated inputs, SCT-R and FOV-R, as described in Section 4.1.

[Figure 5 image grids and plots: (a) SCT-R input reconstruction, (b) FOV-R input reconstruction, (c) error rates for SCT-R & FOV-R, at r = 25%, 50%, 75% and 75% centered.]

Figure 5: Reconstruction examples and error rates of the 1-layer DFAE with foveated input types.

The scotoma allows us to isolate the contribution of the fovea alone (without help from the periphery). Variable-sized regions (r = 25%, 50%, 75%, and 75% centered) were removed from the original input. The location of removal was chosen randomly from the four quadrants of the input image, except for the condition where 75% of the image around the center was removed. Since a majority of the input images have a subject of interest, we tested if the central region contained enough information to reconstruct the rest of the image.

The reconstructions in Figure 5a show the DFAE does not perform well when r > 25%. When r = 50%, the DFAE can only reconstruct landscapes, shape and symmetry, demonstrating its ability to extract low-frequency information. When r = 75% and 75% centered, the reconstruction process breaks down and the DFAE cannot predict the input beyond the given region of information. The filters learnt under these conditions look similar to the downsampled condition, but with larger blobs.

In FOV-R inputs, r is the same as in SCT-R inputs, and we chose to use downsampling factor 4 for regions outside the fovea since previous experiments revealed that DFAEs cannot reconstruct inputs downsampled beyond this factor. Figure 5b shows the reconstructed images from FOV-R inputs and Figure 5c shows the error rate of reconstruction. The cluster of red lines with lower error rates shows that the DFAE performed considerably better with FOV-R than with SCT-R inputs. The performance was better (about 1% error for r = 75% centered) than a DFAE trained with uniformly downsampled inputs (1.5% error). This result is not surprising, given that FOV-R contains additional information from regions outside the fovea. These results suggest that a small number of foveations containing rich details might be all these neural networks need to extract the contents of the input in higher detail.

"}, {"section_index": "9", "section_name": "4.5 RECONSTRUCTING COLOR FROM FOVEATED INPUTS", "section_text": "It is well known that the human visual system loses chromatic sensitivity towards the periphery of the retina. Recently, there has been interest in how deep networks, specifically convolutional neural networks (CNNs), can learn to color grayscale images (Dahl) and learn artistic style (Gatys et al. 2015). In Dahl's reconstructions from grayscale images, numerous cases of the colorized images produced were muted or sepia colored. The problem of colorization, which is inherently ill-posed, was treated as a classification task in these studies. Can DFAEs perceive color if it is absent in the input? We investigated this question using ACH-R and FOVA-R inputs described in Section 4.1. The regions of color tested were r = 0% (no color), 6%, 25% and 100% (full color). Figures 6a and 6b show examples of color reconstructions of these input types.

[Figure 6 image grids and plots: (a) color reconstruction of ACH-R inputs, (b) color reconstruction of FOVA-R inputs, (c) color reconstruction of foveated inputs, (d) color reconstruction of grayscale inputs, at full color, 25% color at center, 6% color at center, and no color.]

Figure 6: Color reconstruction examples and error rates of the 1-layer DFAE.
When the DFAE is trained with full-color ACH-R inputs, it can make mistakes in reconstructing the right color, as seen in Figure 6a. For example, it colors the yellow flower as pink and the purplish-red landscape as blue. When the input is grayscale (no color, r = 0%), the colorizations are gray, muted, sepia-toned, or simply incorrect in the case of landscapes. But if there is a fovea of color, the single-layer DFAE can reconstruct the colorizations correctly. Of course, if the fovea of color is reduced, i.e., to 6%, the color reconstruction accuracy falls off, but not too drastically. For example, it predicts a yellowish tone for the sunflower among a bed of brown leaves. The critical result is that the performance difference between 100% (full-colored) inputs and foveated color inputs is small, as seen in Figures 6c and 6d. These results suggest that color reconstructions can be just as accurate if these networks can figure out the color of one region of the image accurately, as opposed to every region in the image. Similar to the human visual system, these networks are capable of determining accurate colors in the periphery if color information is available at foveation.

"}, {"section_index": "10", "section_name": "5 DISCUSSION AND CONCLUSIONS", "section_text": "We presented a framework for studying whether a given neural network architecture can perceive visual details not explicitly present in the input. Using the framework, we studied a simple fully connected network and found it could fill in missing details such as shape, color and contrast, but not texture. We found that the network compensates for missing details by learning global features which, when appropriately activated by the surround, effectively infer the missing information. Thus, the network performs perceptual filling-in even though the layers lack the lateral connections and grid-like structure that have been postulated in the isomorphic filling-in theory (von der Heydt et al. 2003). Further, the network appears to learn correlations between color and shape: when presented a sunflower input in which only the dark brown center and the inner yellow petals were in color, the network correctly perceived that the surrounding leaves are green; in the case of landscapes, the network incorrectly perceived the sky as blue, even when a small fovea of red sunset is present in the input. Upon inspection, the network indeed learns global color features, and these features may be activated by input patterns corresponding to certain shapes, causing the network to occasionally over-generalize, but further investigation is needed to fully confirm this hypothesis.
A potentially interesting line of future research would be to employ the DFAE framework to test the existing hypotheses for how our own visual system performs perceptual filling-in, i.e., the process by which our brain infers information that is not explicitly present in our sensory input from the surround (Komatsu 2006). The specific architectures we studied here are a far cry from the true mechanisms, but in future work we could implement several architectures that capture the essence of competing theories: Which neural architectures exhibit similar behaviors and abilities as humans? Will the architecture be fooled in the same way as humans by optical illusions such as the Cornsweet illusion or the Troxler effect? When does it perceive "The Dress" as blue and gold or white and black? By answering questions such as these, the DFAE framework could provide evidence for one theory (or neural mechanism) over another, and these theories can be further investigated with behavioral and neurological experiments.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473.

Sven Behnke. Learning iterative image reconstruction in the neural abstraction pyramid. International Journal of Computational Intelligence and Applications, 1(4):427-438, 2001.

Zhen Cui, Hong Chang, Shiguang Shan, Bineng Zhong, and Xilin Chen. Deep network cascade for image super-resolution. In Computer Vision-ECCV 2014, pp. 49-64. Springer, 2014.

Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision-ECCV 2014, pp. 184-199. Springer, 2014.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159, 2011.

Thorsten Hansen, Lars Pracejus, and Karl R Gegenfurtner. Color perception in the intermediate periphery of the visual field. Journal of Vision, 9(4):26-26, 2009.

Viren Jain and Sebastian Seung. Natural image denoising with convolutional networks. In Advances in Neural Information Processing Systems, pp. 769-776, 2009.

Kristin Koch, Judith McLean, Ronen Segev, Michael A Freed, Michael J Berry, Vijay Balasubramanian, and Peter Sterling. How much the eye tells the brain. Current Biology, 16(14):1428-1434, 2006.

Hidehiko Komatsu. The neural mechanisms of perceptual filling-in. Nature Reviews Neuroscience, 7(3):220-231, 2006.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Hugo Larochelle and Geoffrey E Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In Advances in Neural Information Processing Systems, pp. 1243-1251, 2010.

Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.

Peter Lennie. The cost of cortical computation. Current Biology, 13(6):493-497, 2003.

Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent models of visual attention. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2204-2212. 2014.
Aude Oliva. Gist of the scene. Neurobiology of Attention, 696(64):251-258, 2005.

Mary C Potter and Ellen I Levy. Recognition memory for a rapid sequence of pictures. Journal of Experimental Psychology, 81(1):10, 1969.

Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 833-840, 2011.

Christian J Schuler, Harold Christopher Burger, Stefan Harmeling, and Bernhard Scholkopf. A machine learning approach for non-blind image deconvolution. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 1067-1074. IEEE, 2013.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, arXiv:1409.1556, 2014.

Pawan Sinha, Benjamin Balas, Yuri Ostrovsky, and Richard Russell. Face recognition by humans: Nineteen results all computer vision researchers should know about. Proceedings of the IEEE, 94(11):1948-1962, 2006.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.

Shimon Ullman, Liav Assif, Ethan Fetaya, and Daniel Harari. Atoms of recognition in human and computer vision. Proceedings of the National Academy of Sciences, in press, 2016.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371-3408, 2010.

Rudiger von der Heydt, Howard S Friedman, and Hong Zhou. Searching for the neural mechanisms of color filling-in. In Filling-in: From Perceptual Completion to Cortical Reorganization, pp. 106-127, 2003.

Junyuan Xie, Linli Xu, and Enhong Chen. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems, pp. 341-349, 2012.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. 2015."}]
ryh9pmcee
[{"section_index": "0", "section_name": "ENERGY-BASED GENERATIVE ADVERSARIAL NET WORKS", "section_text": "Junbo Zhao, Michael Mathieu and Yann LeCun\nDepartment of Computer Science, New York University Facebook Artificial Intelligence Research.\n{jakezhao, mathieu, yann}@cs.nyu.edu\nWe introduce the \"Energy-based Generative Adversarial Network' model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discrimi- nator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the dis- criminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images."}, {"section_index": "1", "section_name": "1.1 ENERGY-BASED MODEL", "section_text": "The essence of the energy-based model (LeCun et al.2006) is to build a function that maps eac point of an input space to a single scalar, which is called \"energy\". The learning phase is a data driven process that shapes the energy surface in such a way that the desired configurations get as signed low energies, while the incorrect ones are given high energies. Supervised learning falls int this framework: for each X in the training set, the energy of the pair (X, Y) takes low values whe Y is the correct label and higher values for incorrect Y's. Similarly, when modeling X alone withi an unsupervised learning setting, lower energy is attributed to the data manifold. The term con trastive sample is often used to refer to a data point causing an energy pull-up, such as the incorrec Y's in supervised learning and points from low data density regions in unsupervised learning."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Generative Adversarial Networks (GAN) (Goodfellow et al.||2014) have led to significant improve- ments in image generation (Denton et al.2015Radford et al.2015} Im et al.]2016]Salimans et al.[[2016), video prediction (Mathieu et al.1|2015) and a number of other domains. The basic idea of GAN is to simultaneously train a discriminator and a generator. The discriminator is trained to distinguish real samples of a dataset from fake samples produced by the generator. The generator uses input from an easy-to-sample random source, and is trained to produce fake samples that the discriminator cannot distinguish from real data samples. During training, the generator receives the gradient of the output of the discriminator with respect to the fake sample. In the original formula tion of GAN in|Goodfellow et al.(2014), the discriminator produces a probability and, under certain conditions, convergence occurs when the distribution produced by the generator matches the data distribution. 
From a game theory point of view, the convergence of a GAN is reached when the generator and the discriminator reach a Nash equilibrium."}, {"section_index": "3", "section_name": "1.3 ENERGY-BASED GENERATIVE ADVERSARIAL NETWORKS", "section_text": "In this work, we propose to view the discriminator as an energy function (or a contrast function) without explicit probabilistic interpretation. The energy function computed by the discriminator can be viewed as a trainable cost function for the generator. The discriminator is trained to assign low energy values to the regions of high data density, and higher energy values outside these regions. Conversely, the generator can be viewed as a trainable parameterized function that produces samples in regions of the space to which the discriminator assigns low energy. While it is often possible to convert energies into probabilities through a Gibbs distribution (LeCun et al. 2006), the absence of normalization in this energy-based form of GAN provides greater flexibility in the choice of architecture of the discriminator and the training procedure.

The probabilistic binary discriminator in the original formulation of GAN can be seen as one way among many to define the contrast function and loss functional, as described in LeCun et al. (2006) for the supervised and weakly supervised settings, and Ranzato et al. (2007) for unsupervised learning. We experimentally demonstrate this concept, in the setting where the discriminator is an auto-encoder architecture, and the energy is the reconstruction error. More details of the interpretation of EBGAN are provided in appendix B.

Our main contributions are summarized as follows:

• An energy-based formulation for generative adversarial training.
• A proof that under a simple hinge loss, when the system reaches convergence, the generator of EBGAN produces points that follow the underlying data distribution.
• An EBGAN framework with the discriminator using an auto-encoder architecture in which the energy is the reconstruction error.
• A set of systematic experiments to explore hyper-parameters and architectural choices that produce good results for both EBGANs and probabilistic GANs.
• A demonstration that the EBGAN framework can be used to generate reasonable-looking high-resolution images from the ImageNet dataset at 256x256 pixel resolution, without a multi-scale approach.

Let pdata be the underlying probability density of the distribution that produces the dataset. The generator G is trained to produce a sample G(z), for instance an image, from a random vector z, which is sampled from a known distribution pz, for instance N(0, 1). The discriminator D takes either real or generated images, and estimates the energy value E ∈ R accordingly, as explained later. For simplicity, we assume that D produces non-negative values, but the analysis would hold as long as the values are bounded below."}, {"section_index": "4", "section_name": "2.1 OBJECTIVE FUNCTIONAL", "section_text": "The output of the discriminator goes through an objective functional in order to shape the energy function, attributing low energy to the real data samples and higher energy to the generated ("fake") ones. In this work, we use a margin loss, but many other choices are possible as explained in LeCun et al. (2006). Similarly to what has been done with the probabilistic GAN (Goodfellow et al. 2014), we use two different losses, one to train D and the other to train G, in order to get better quality gradients when the generator is far from convergence. Given a positive margin m, a data sample x and a generated sample G(z), the discriminator loss L_D and the generator loss L_G are formally defined by:

L_D(x, z) = D(x) + [m − D(G(z))]⁺        (1)
L_G(z) = D(G(z))        (2)

where [·]⁺ = max(0, ·). Minimizing L_G with respect to the parameters of G is similar to maximizing the second term of L_D. It has the same minimum but non-zero gradients when D(G(z)) ≥ m.

In this section, we present a theoretical analysis of the system presented in section 2.1. We show that if the system reaches a Nash equilibrium, then the generator G produces samples that are indistinguishable from the distribution of the dataset. This section is done in a non-parametric setting, i.e. we assume that D and G have infinite capacity.

Given a generator G, let pG be the density distribution of G(z) where z ∼ pz. In other words, pG is the density distribution of the samples generated by G. We define V(G, D) = ∫_{x,z} L_D(x, z) pdata(x) pz(z) dx dz and U(G, D) = ∫_z L_G(z) pz(z) dz. We train the discriminator D to minimize the quantity V and the generator G to minimize the quantity U. A Nash equilibrium of the system is a pair (G*, D*) that satisfies:

V(G*, D*) ≤ V(G*, D)        ∀D        (3)
U(G*, D*) ≤ U(G, D*)        ∀G        (4)

Theorem 1. If (D*, G*) is a Nash equilibrium of the system, then pG* = pdata almost everywhere, and V(G*, D*) = m.

Proof. First we observe that

V(G*, D) = ∫_x pdata(x) D(x) dx + ∫_z pz(z) [m − D(G*(z))]⁺ dz        (5)
= ∫_x ( pdata(x) D(x) + pG*(x) [m − D(x)]⁺ ) dx.        (6)

By (3), D* minimizes (6); the analysis of y ↦ ay + b[m − y]⁺ (lemma 1 in appendix A) shows that the integrand is minimized by taking D*(x) = m where pdata(x) < pG*(x) and D*(x) = 0 where pdata(x) > pG*(x), so that

V(G*, D*) = m ∫_x 1_{pdata(x)<pG*(x)} pdata(x) dx + m ∫_x 1_{pdata(x)≥pG*(x)} pG*(x) dx        (7)
= m ∫_x 1_{pdata(x)<pG*(x)} pdata(x) dx + m ∫_x ( 1 − 1_{pdata(x)<pG*(x)} ) pG*(x) dx        (8)
= m + m ∫_x 1_{pdata(x)<pG*(x)} pdata(x) dx − m ∫_x 1_{pdata(x)<pG*(x)} pG*(x) dx        (9)
= m + m ∫_x 1_{pdata(x)<pG*(x)} ( pdata(x) − pG*(x) ) dx.        (10)

Since the integrand in (10) is non-positive, V(G*, D*) ≤ m. On the other hand, using (4) with a generator that produces the distribution pdata gives

∫_x pG*(x) D*(x) dx ≤ ∫_x pdata(x) D*(x) dx.        (11)

Thus, by (6),

V(G*, D*) = ∫_x pdata(x) D*(x) dx + ∫_x pG*(x) [m − D*(x)]⁺ dx
≥ ∫_x pdata(x) D*(x) dx + ∫_x pG*(x) ( m − D*(x) ) dx ≥ m,        (12)

where the last inequality follows from (11) and ∫_x pG*(x) dx = 1. Thus, m ≤ V(G*, D*) ≤ m, i.e. V(G*, D*) = m. Using equation (10), we see that this can only happen if ∫_x 1_{pdata(x)<pG*(x)} (pdata(x) − pG*(x)) dx = 0, which, because pdata and pG* are probability densities, implies pG* = pdata almost everywhere (see lemma 2 in appendix A for details).

Theorem 2. A Nash equilibrium of this system exists and is characterized by (a) pG* = pdata (almost everywhere) and (b) there exists a constant γ ∈ [0, m] such that D*(x) = γ (almost everywhere).¹

¹This is assuming there is no region where pdata(x) = 0. If such a region exists, D*(x) may have any value in [0, m] for x in this region.

In our experiments, the discriminator D is structured as an auto-encoder:

D(x) = ||Dec(Enc(x)) − x||.        (13)

[Figure 1 diagram: the generator G and real samples feed the discriminator's encoder-decoder pair (Enc, Dec); the MSE between input and reconstruction is the energy E.]

Figure 1: EBGAN architecture with an auto-encoder discriminator.

The diagram of the EBGAN model with an auto-encoder discriminator is depicted in figure 1. The choice of auto-encoders for D may seem arbitrary at first glance, yet we postulate that it is conceptually more attractive than a binary logistic network:

• Rather than using a single bit of target information to train the model, the reconstruction-based output offers diverse targets for the discriminator. With the binary logistic loss, only two targets are possible, so within a minibatch, the gradients corresponding to different samples are most likely far from orthogonal. This leads to inefficient training, and reducing the minibatch sizes is often not an option on current hardware. On the other hand, the reconstruction loss will likely produce very different gradient directions within the minibatch, allowing for a larger minibatch size without loss of efficiency.
• Auto-encoders have traditionally been used to represent energy-based models and arise naturally. When trained with some regularization terms (see section 2.3.1), auto-encoders have the ability to learn an energy manifold without supervision or negative examples. This means that even when an EBGAN auto-encoding model is trained to reconstruct a real sample, the discriminator contributes to discovering the data manifold by itself. To the contrary, without the presence of negative examples from the generator, a discriminator trained with binary logistic loss becomes pointless.
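A minimal PyTorch sketch of the margin losses (1)-(2) with the auto-encoder energy (13) follows. The module names enc and dec, the per-sample MSE reduction, and the helper names are illustrative assumptions; in practice x_fake would be detached from the generator graph when updating D, and only D's (resp. G's) parameters would be stepped by the corresponding optimizer.

```python
import torch
import torch.nn.functional as F

def energy(enc, dec, x):
    # Eq. (13): per-sample reconstruction error of the auto-encoder discriminator.
    rec = dec(enc(x))
    return F.mse_loss(rec, x, reduction="none").flatten(1).mean(dim=1)

def d_loss(enc, dec, x_real, x_fake, m):
    # Eq. (1): push real energy down, and fake energy up toward the margin m.
    return (energy(enc, dec, x_real)
            + F.relu(m - energy(enc, dec, x_fake))).mean()

def g_loss(enc, dec, x_fake):
    # Eq. (2): the generator minimizes the energy of its own samples.
    return energy(enc, dec, x_fake).mean()
```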
We argue that the energy function (the discriminator) in the EBGAN framework is also seen as being regularized by having a generator producing the contrastive samples, to which the discriminator ought to give high reconstruction energies. We further argue that the EBGAN framework allows more flexibility from this perspective, because: (i) the regularizer (generator) is fully trainable instead of being handcrafted; (ii) the adversarial training paradigm enables a direct interaction between the duality of producing contrastive samples and learning the energy function.

One common issue in training auto-encoders is that the model may learn little more than an identity function, meaning that it attributes zero energy to the whole space. In order to avoid this problem, the model must be pushed to give higher energy to points outside the data manifold. Theoretical and experimental results have addressed this issue by regularizing the latent representations (Vincent et al. 2010; Rifai et al. 2011; MarcAurelio Ranzato & Chopra 2007; Kavukcuoglu et al. 2010). Such regularizers aim at restricting the reconstructing power of the auto-encoder so that it can only attribute low energy to a smaller portion of the input points.

We propose a "repelling regularizer" which fits well into the EBGAN auto-encoder model, purposely keeping the model from producing samples that are clustered in one or only a few modes of pdata. Another technique, "minibatch discrimination", was developed by Salimans et al. (2016) from the same philosophy.

Implementing the repelling regularizer involves a Pulling-away Term (PT) that runs at a representation level. Formally, let S ∈ R^{s×N} denote a batch of sample representations taken from the encoder output layer. Let us define PT as:

f_PT(S) = ( 1 / (N(N−1)) ) Σ_i Σ_{j≠i} ( S_i⊤ S_j / (‖S_i‖ ‖S_j‖) )²

"}, {"section_index": "5", "section_name": "3 RELATED WORK", "section_text": "Our work primarily casts GANs into an energy-based model scope. In this direction, the approaches studying contrastive samples are relevant to EBGAN, such as the use of noisy samples (Vincent et al. 2010) and noisy gradient descent methods like contrastive divergence (Carreira-Perpinan & Hinton 2005). From the perspective of GANs, several papers were presented to improve the stability of GAN training (Salimans et al. 2016; Denton et al. 2015; Radford et al. 2015; Im et al. 2016; Mathieu et al. 2015).

Kim & Bengio (2016) propose a probabilistic GAN and cast it into an energy-based density estimator by using the Gibbs distribution. Quite unlike EBGAN, this proposed framework doesn't get rid of the computationally challenging partition function, so the choice of the energy function is required to be integrable."}, {"section_index": "6", "section_name": "4.1 EXHAUSTIVE GRID SEARCH ON MNIST", "section_text": "In this section, we study the training stability of EBGANs over GANs on a simple task of MNIST digit generation with fully-connected networks. We run an exhaustive grid search over a set of architectural choices and hyper-parameters for both frameworks.

Formally, we specify the search grid in table 1. We impose the following restrictions on EBGAN models: (i) using learning rate 0.001 and Adam (Kingma & Ba 2014) for both G and D; (ii) nLayerD represents the total number of layers combining Enc and Dec; for simplicity, we fix Dec to be one layer and only tune the Enc #layers; (iii) the margin is set to 10 and is not tuned. To analyze the results, we use the inception score (Salimans et al. 2016) as a numerical means reflecting the generation quality. Some slight modifications of the formulation were made to make figure 2 visually more approachable while maintaining the score's original meaning, I′ = E_x KL(p(y)||p(y|x))² (more details in appendix C). Briefly, a higher I′ score implies better generation quality.

Histograms We plot the histogram of I′ scores in figure 2. We further separated out the optimization-related settings from GAN's grid (optimD, optimG and lr) and plot the histogram of each sub-grid individually, together with the EBGAN I′ scores as a reference, in figure 3. The number of
experiments for GANs and EBGANs are both 512 in every subplot. The histograms evidently show. that EBGANs are more reliably trained..\n2This form of the \"inception score' is only used to better analyze the grid search in the scope of this work but not to compare with any other published work.\nPT operates on a mini-batch and attempts to orthogonalize the pairwise sample representation. It is inspired by the prior work showing the representational power of the encoder in the auto-encoder alike model such as Rasmus et al.(2015) and Zhao et al.(2015). The rationale for choosing the cosine similarity instead of Euclidean distance is to make the term bounded below and invariant to scale. We use the notation \"EBGAN-PT' to refer to the EBGAN auto-encoder model trained with this term. Note the PT is used in the generator loss but not in the discriminator loss.\nIn this section, we study the training stability of EBGANs over GANs on a simple task of MNIST. digit generation with fully-connected networks. We run an exhaustive grid search over a set of architectural choices and hyper-parameters for both frameworks..\nFormally, we specify the search grid in table[1] We impose the following restrictions on EBGAN models: (i)-using learning rate O.001 and Adam (Kingma & Ba 2014) for both G and D; (ii)- nLayerD represents the total number of layers combining Enc and Dec. For simplicity, we fix Dec to be one layer and only tune the Enc #layers; (iii)-the margin is set to 10 and not being tuned. To analyze the results, we use the inception score (Salimans et al.||2016) as a numerical means reflecting the generation quality. Some slight modification of the formulation were made to make figure|2|visu- ally more approachable while maintaining the score's original meaning, I' = Ex K L(p(y)||p(y|x))2 (more details in appendix|C). Briefly, higher I' score implies better generation quality.\nDigits generated from the configurations presenting the best inception score are shown in figure\n60.0% 60.0% 50.0% GAN GAN-layers<=4 GAN-layers<=3 EBGAN EBGAN-layers<=4 EBGAN-layers<=3 50.0% 50.0% 40.0% 40.0% 40.0% fage 30.0% eneenn 30.0% -20.0% 20.0% 20.0% 10.0% 10.0% 10.0% 0.0%.0 0.0%.0 0.0%8.0 1.5 inception_score inception_score inception_score\n60.0% 60.0% 50.0% GAN GAN-layers<=4 GAN-layers<=3 EBGAN EBGAN-layers<= EBGAN-layers<=3 50.0% 50.0% 40.0% 40.0% 40.0% fage 80.0 30.0% 30.0% 20.09 20.0% 20.0% 10.0% 10.0% 10.0% 0.0%8.0 0.0%.0 0.0%.0 inception_score 1.5 inception_score inception score\nFigure 2: (Zooming in on pdf file is recommended.) Histogram of the inception scores from the grid search. The x-axis carries the inception score I and y-axis informs the portion of the models (in percentage) falling into certain bins. 
Left (a): general comparison of EBGANs against GANs Middle (b): EBGANs and GANs both constrained by nLayer [GD] <=4; Right (c): EBGANs anc GANs both constrained by nLayer [GD] <=3.\n80.0% 70.0% 60.0% 50.0% GAN-Ir1.00e-02_optimD-adam_optimGadan GAN-Ir1.00e-02_optimD-sgd_optimGadam GAN-Ir1.00e-02_optimD-adam_optimGsgd GAN-Ir1.00e-02_optimD-sgd_optimGsgd 70.0% EBGAN 60.0% EBGAN EBGAN EBGAN 50.0% 40.0% 60.09 40.09 50.0% 20.0 10.0% h inception_score GAN-Ir1.00e-03_optimD-adam_optimGada GAN-Ir1.00e-03_optimD-sgd_optimGada GAN-Ir1.00e-03_optimD-sgd_optimGsgd EBGAN EBGAN EBGAN 60.0% EBGAN 40.0% 50.0% 40.0% 20.0% 10.0% 10.0% 0.0 0.0% inception_score 45.0 50.0% 50.0% GAN-Ir1.00e-04_optimD-adam_optimGadan GAN-lr1.00e-04_optimD-sgd_optimGadan GAN-Ir1.00e-04_optimD-adam_optimGsgd GAN-Ir1.00e-04_optimD-sgd_optimGsgd 40.0% EBGAN EBGAN EBGAN 35.0% 40.0% 40.0% 30.0% 25.0% 0.0 15.0% 10.0% 10.0% 10.0% 10.0% 0.0 0.0 inception_score 0.0% 0.0% inception_score inception_score inception_score\nFigure 3: (Zooming in on pdf file is recommended.) Histogram of the inception scores groupec by different optimization combinations, drawn from opt imD, opt imG and 1r (See text).\nWe explore the potential of using the EBGAN framework for semi-supervised learning on permutation-invariant MNIST, collectively on using 100, 200 and 1000 labels. We utilized a bottom-\nTable 1: Grid search specs\nSettings Description EBGANs GANs nLayerG number of layers in G [2, 3, 4, 5] [2, 3, 4, 5] nLayerD number of layers in D [2, 3, 4, 5] [2, 3, 4, 5] sizeG number of neurons in G [400, 800, 1600, 3200] [400, 800, 1600, 3200] sizeD number of neurons in D [128, 256, 512, 1024] [128, 256, 512, 1024] dropoutD if to use dropout in D [true, false] [true, false] optimD to use Adam or SGD for D adam [adam, sgd] optimG to use Adam or SGD for G adam [adam, sgd] 1r learning rate 0.001 [0.01, 0.001, 0.0001] #experiments: 512 6144\nFigure 4: Generation from the grid search on MNIST. Left(a): Best GAN model; Middle(b): Bes EBGAN model. Right(c): Best EBGAN-PT model.\nlayer-cost Ladder Network (LN) (Rasmus et al.l|2015) with the EGBAN framework (EBGAN-LN) Ladder Network can be categorized as an energy-based model that is built with both feedforward. and feedback hierarchies powered by stage-wise lateral connections coupling two pathways\nOne technique we found crucial in enabling EBGAN framework for semi-supervised learning is to. gradually decay the margin value m of the equation[1 The rationale behind is to let discriminator punish generator less when pg gets closer to the data manifold. One can think of the extreme case. where the contrastive samples are exactly pinned on the data manifold, such that they are \"not con- trastive anymore\". This ultimate status happens when m = 0 and the EBGAN-LN model falls back. to a normal Ladder Network. The undesirability of a non-decay dynamics for using the discriminator in the GAN or EBGAN framework is also indicated by Theorem2 on convergence, the discrimi-. nator reflects a flat energy surface. However, we posit that the trajectory of learning a EBGAN-LN. model does provide the LN (discriminator) more information by letting it see contrastive samples Yet the optimal way to avoid the mentioned undesirability is to make sure m has been decayed to 0 when the Nash Equilibrium is reached. The margin decaying schedule is found by hyper-parameter. search in our experiments (technical details in appendixD)..\nFrom table 2] it shows that positioning a bottom-layer-cost LN into an EBGAN framework prof-. 
itably improves the performance of the LN itself. We postulate that within the scope of the EBGAN. framework, iteratively feeding the adversarial contrastive samples produced by the generator to the energy function acts as an effective regularizer; the contrastive samples can be thought as an exten-. sion to the dataset that provides more information to the classifier. We notice there was a discrepancy. between the reported results between[Rasmus et al.(2015) and Pezeshki et al.(2015), so we report both results along with our own implementation of the Ladder Network running the same setting.. The specific experimental setting and analysis are available in appendix D\nSOCuS. model 100 200 1000 LN bottom-layer-cost, reported in|Pezeshki et al.(2015 1.690.18 1.050.02 LN bottom-layer-cost, reported in Rasmus et al.(2015 1.090.32 0.900.05 LN bottom-layer-cost, reproduced in this work (see appendix|D 1.360.21 1.240.09 1.040.06 LN bottom-layer-cost within EBGAN framework 1.040.12 0.990.12 0.890.04 Relative percentage improvement 23.5% 20.2% 14.4%\nWe apply the EBGAN framework with deep convolutional architecture to generate 64 64 RGB images, a more realistic task, using the LSUN bedroom dataset (Yu et al.|2015) and the large-scale face dataset CelebA under alignment (Liu et al.]2015). To compare EBGANs with DCGANs (Rad- ford et al.|2015), we train a DCGAN model under the same configuration and show its generation side-by-side with the EBGAN model, in figures5|and 6] The specific settings are listed in appendix\nC 7 8 E 3 5 0 ? 5 7 2 3 4/ 3 0 S 6 9 D 8 X 9 S 5 6 3 3 3 M 5 8 3 7 S 2 . 4 * 8 3 S 3 9 5 1 f 3\nTable 2: The comparison of LN bottom-layer-cost model and its EBGAN extension on PI-MNIST semi-supervised task. Note the results are error rate (in %) and averaged over 15 different random. seeds.\nFigure 5: Generation from the LSUN bedroom dataset. Left(a): DCGAN generation. Right(b) EBGAN-PT generation.\nFigure 6: Generation from the CelebA dataset. Left(a): DCGAN generation. Right(b): EBGAN-PT generation.\nFinally, we trained EBGANs to generate high-resolution images on ImageNet (Russakovsky et al.. 2015). Compared with the datasets we have experimented so far, ImageNet presents an extensively. larger and wilder space, so modeling the data distribution by a generative model becomes very chal-. lenging. We devised an experiment to generate 128 128 images, trained on the full ImageNet-1k dataset, which contains roughly 1.3 million images from 1000 different categories. We also trained a network to generate images of size 256 256, on a dog-breed subset of ImageNet, using the wordNet IDs provided byVinyals et al.](2016). The results are shown in figures[7|and[8] Despite the difficulty of generating images on a high-resolution level, we observe that EBGANs are able to learn about the fact that objects appear in the foreground, together with various background com-. ponents resembling grass texture, sea under the horizon, mirrored mountain in the water, buildings etc. In addition, our 256 256 dog-breed generations, although far from realistic, do reflect some knowledge about the appearances of dogs such as their body, furs and eye..\nFigure 7: ImageNet 128 128 generations using an EBGAN-PT\nFigure 8: ImageNet 256 256 generations using an EBGAN-PT"}, {"section_index": "7", "section_name": "ACKNOWLEDGMENT", "section_text": "We thank Emily Denton, Soumith Chitala, Arthur Szlam, Marc'Aurelio Ranzato, Pablo Sprech mann, Ross Goroshin and Ruoyu Sun for fruitful discussions. 
We also thank Emily Denton and Tian Jiang for their help with the manuscript.

We bridge two classes of unsupervised learning methods, GANs and auto-encoders, and revisit the GAN framework from an alternative energy-based perspective. EBGANs show better convergence patterns and scalability to generate high-resolution images. A family of energy-based loss functionals presented in LeCun et al. (2006) can easily be incorporated into the EBGAN framework. For future work, the conditional setting (Denton et al. 2015; Mathieu et al. 2015) is a promising setup to explore. We hope future research will pay more attention to a broader, energy-based view of GANs.

Carreira-Perpinan, Miguel A and Hinton, Geoffrey. On contrastive divergence learning. In AISTATS, volume 10, pp. 33-40. Citeseer, 2005.

Im, Daniel Jiwoong, Kim, Chris Dongjoo, Jiang, Hui, and Memisevic, Roland. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Kavukcuoglu, Koray, Sermanet, Pierre, Boureau, Y-Lan, Gregor, Karol, Mathieu, Michael, and Cun, Yann L. Learning convolutional feature hierarchies for visual recognition. In Advances in Neural Information Processing Systems, pp. 1090-1098, 2010.

Kim, Taesup and Bengio, Yoshua. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.

LeCun, Yann, Chopra, Sumit, and Hadsell, Raia. A tutorial on energy-based learning. 2006.

Liu, Ziwei, Luo, Ping, Wang, Xiaogang, and Tang, Xiaoou. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015.

MarcAurelio Ranzato, Christopher Poultney and Chopra, Sumit. Efficient learning of sparse representations with an energy-based model. 2007.

Mathieu, Michael, Couprie, Camille, and LeCun, Yann. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.

Pezeshki, Mohammad, Fan, Linxi, Brakel, Philemon, Courville, Aaron, and Bengio, Yoshua. Deconstructing the ladder network architecture. arXiv preprint arXiv:1511.06430, 2015.

Radford, Alec, Metz, Luke, and Chintala, Soumith. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Ranzato, Marc'Aurelio, Boureau, Y-Lan, Chopra, Sumit, and LeCun, Yann. A unified energy-based framework for unsupervised learning. In Proc. Conference on AI and Statistics (AI-Stats), 2007.

Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Vincent, Pascal, Larochelle, Hugo, Lajoie, Isabelle, Bengio, Yoshua, and Manzagol, Pierre-Antoine. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371-3408, 2010.

Vinyals, Oriol, Blundell, Charles, Lillicrap, Timothy, Kavukcuoglu, Koray, and Wierstra, Daan. Matching networks for one shot learning. arXiv preprint arXiv:1606.04080, 2016.

Zhao, Junbo, Mathieu, Michael, Goroshin, Ross, and Lecun, Yann. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.
reached in m if a < b, and it is reached in O otherwise (the minimum may not be unique).\nProof. Let's assume that dx = 0. Then\nc)(p(x)-q(x))dx (x)<q(x))(p(x) -q(x))dx lp(x)<q(x)(p(x)- q(x))dx )<q(x) +1p(x)= )(p(x) - q(x))dx p(x)-q(x))dx+ p(x)=q(x)px)-q(x))d 0+0=0\nSo fx 1p(x)>q(x)(p(x) - q(x))dx = 0 and since the term in the integral is always non-negative, 1p(x)>q(x)(p(x) - q(x)) = 0 for almost all x. And p(x) - q(x) = 0 implies 1p(x)>q(x) = 0, so 1p(x)>q(x) = 0 almost everywhere. Therefore f 1p(x)>q(x)dx = 0 which completes the proof, given the hypothesis.\nProof of theorem2 The sufficient conditions are obvious. The necessary condition on G* comes from theorem[1] and the necessary condition on D* (x) m is from the proof of theorem|1 Let us now assume that D*(x) is not constant almost everywhere and find a contradiction. If it is not, then there exists a constant C and a set S of non-zero measure such that Vx E S, D*(x) < C and Vx S, D*(X) > C. In addition we can choose S such that there exists a subset S' C S of non-zero measure such that pdata(x) > 0 on S' (because of the assumption in the footnote). We can build a generator Go such that pG,(x) Pdata(x) over S and pG, (x) < Pdata(x) over S'. We compute\nU(G*, D*) - U(Go, I Pdata -PGo)D*(x)dx (21 Pdata PGo)(D*(x) - C)dx (22 (Pdata - PGo)(D*(x) - C)dx + (Pdata - PGo)(D*(x) R (23"}, {"section_index": "8", "section_name": "TWO INTERPRETATIONS OF GANS", "section_text": "GANs can be interpreted in two complementary ways. In the first interpretation, the key componen. is the generator, and the discriminator plays the role of a trainable objective function. Let us imagine that the data lies on a manifold. Until the generator produces samples that are recognized as being or the manifold, it gets a gradient indicating how to modify its output so it could approach the manifold. In such scenario, the discriminator acts to punish the generator when it produces samples that are. outside the manifold. This can be understood as a way to train the generator with a set of possible. desired outputs (e.g. the manifold) instead of a single desired output as in traditional supervised. learning.\nFor the second interpretation, the key component is the discriminator, and the generator is merely. trained to produce contrastive samples. We show that by iteratively and interactively feeding con trastive samples, the generator enhances the semi-supervised learning performance of the discrimi. nator (e.g. Ladder Network), in section|4.2.\nFor training both EBGANs and GANs for the grid search, we use the following setting.\nWe evaluate the models from the grid search by calculating a modified version of the inception score I' = EK L(p(y)||p(y|x)), where x denotes a generated sample and y is the label predicted by a MNIST classifier that is trained off-line using the entire MNIST training set. Two main changes were made upon its original form: (i)-we swap the order of the distribution pair; (ii)-we omit the e() operation. The modified score condenses the histogram in figure 2|and figure 3 It is also worth noting that although we inherit the name \"inception score\" from Salimans et al.(2016), the evaluation isn't related to the \"inception\" model trained on ImageNet dataset. The classifier is a regular 3-layer ConvNet trained on MNIST.\nThe generations showed in figure4are the best GAN or EBGAN (obtaining the best I' score) fron the grid search. 
Their configurations are:.\nfigure4(a): nLayerG=5, nLayerD=2, sizeG=1600, sizeD=1024, dropoutD=0 optimD=SGD, optimG=SGD,1r=0.01. figure(b): nLayerG=5, nLayerD=2, sizeG=800, sizeD=1024. dropoutD=0 optimD=ADAM, optimG=ADAM, 1r=0.001, margin=10. figure4(c): same as (b), with ApT = 0.1..\nWe use a deep convolutional generator analogous to DCGAN's and a deep convolutional auto encoder for the discriminator. The auto-encoder is composed of strided convolution modules ir the feedforward pathway and fractional-strided convolution modules in the feedback pathway. We leave the usage of upsampling or switches-unpooling (Zhao et al.]2015) to future research. We also followed the guidance suggested byRadford et al.(2015) for training EBGANs. The configuration of the deep auto-encoder is:\nBatch normalization (Ioffe & Szegedy2015) is applied after each weight layer, except for the. generator output layer and the discriminator input layer (Radford et al.]2015). Training images are scaled into range [-1,1]. Correspondingly the generator output layer is fol- lowed by a Tanh function.. ReLU is used as the non-linearity function.. Initialization: the weights in D from (0, 0.002) and in G from V(0, 0.02). The bias are initial-. ized to be 0.\nEncoder: (64) 4c2s-(128) 4c2s-(256) 4c2s Decoder: (128) 4c2s-(64) 4c2s-(3) 4c2s\nwhere \"(64) 4c2s\" denotes a convolution/deconvolution layer with 64 output feature maps and kernel size 4 with stride 2. The margin m is set to 80 for LSUN and 20 for CelebA."}, {"section_index": "9", "section_name": "IMAGENET", "section_text": "128 x 128 model: - Generator: (1024) 4c-(512) 4c2s-(256) 4c2s-(128) 4c2s- (64) 4c2s-(64) 4c2s-(3) 3c - Noise #planes: 100-64-32-16-8-4 - Encoder: (64) 4c2s-(128) 4c2s-(256) 4c2s-(512) 4c2s - Decoder: (256) 4c2s-(128) 4c2s-(64) 4c2s-(3) 4c2s -Margin: 40 256 x 256 model: - Generator: (2048) 4c-(1024) 4c2s-(512) 4c2s-(256) 4c2s-(128) 4c (64) 4c2s-(64) 4c2s-(3) 3c - Noise #planes: 100-64-32-16-8-4-2 - Encoder: (64) 4c2s-(128) 4c2s-(256) 4c2s-(512) 4c2s - Decoder: (256) 4c2s-(128) 4c2s-(64) 4c2s-(3) 4c2s Margin: 80\nNote that we feed noise into every layer of the generator where each noise component is initialized into a 4D tensor and concatenated with current feature maps in the feature space. Such strategy is also employed bySalimans et al.(2016)\nAs statedinsection 4.2 we chose a bottom-layer-cost Ladder Network as our base-. line model. Specifically, we utilize an identical architecture as reported in both papers. (Rasmus et al.] 2015, Pezeshki et al.]2015); namely a fully-connected network of size. 784-1000-500-250-250-250, with batch normalization and ReLU following each linear layer. To obtain a strong baseline, we tuned the weight of the reconstruction cost with values the meantime, we also tuned the learning rate with values {0.002, 0.001, 0.0005, 0.0002, 0.0001}. We adopted Adam as the optimizer with 1 being set to 0.5. The minibatch size was set to 100. All the experiments are finished by 120,000 steps. We use the same learning rate decay mechanism as in the published papers - starting from the two-thirds of total steps (i.e., from step #80,oo0) to linearly decay the learning rate to 0. The result reported in section|4.2|was done by the best tuned setting. 1000 , lr = 0.0002. AL2 7841"}, {"section_index": "10", "section_name": "EBGAN-LN MODEL", "section_text": "We place the same Ladder Network architecture into our EBGAN framework and train this EBGAN LN model the same way as we train the EBGAN auto-encoder model. For technical details, we. 
started training the EBGAN-LN model from the margin value 16 and gradually decay it to O within. the first 60,o00 steps. By the time, we found that the reconstruction error of the real image had. already been low and reached the limitation of the architecture (Ladder Network itself); besides the. generated images exhibit good quality (shown in figure|10). Thereafter we turned off training the. generator but kept training the discriminator for another 120,000 steps. We set the initial learning. rates to be 0.0005 for discriminator and 0.00025 for generator. The other setting is kept consistent. with the best baseline LN model. The learning rate decay started at step #120,000 (also two-thirds. of the total steps).\nNotice that we used the 2828 version (unpadded) of the MNIST dataset in the EBGAN-LN experiment. For the EBGAN auto-encoder grid search experiments, we used the zero-padded version, i.e., size 32 32. No phenomenal difference has been found due to the zero-padding. . We generally took the l2 norm of the discrepancy between input and reconstruction for the loss term in the EBGAN auto-encoder model as formally written in section 2.1 However, for the EBGAN-LN experiment, we followed the original implementation of Ladder Network using a vanilla form of l2 loss. Borrowed from Salimans et al.(2016), the batch normalization is adopted without the learned parameter y but merely with a bias term . It still remains unknown whether such trick could affect learning in some non-ignorable way, so this might have made our baseline model not a strict reproduction of the published models byRasmus et al.(2015) and Pezeshki et al.(2015).\nIt is crucial to set a proper energy margin value m in the framework of EBGAN, from both theoretical and experimental perspective. Hereby we provide a few tips.\nAbstracting away from the practical experimental tips, the theoretical understanding of EBGAN ir section|2.2 also provides some insight for setting a feasible m. For instance, as implied by Theore. 2 setting a large m results in a broader range of to which D*(x) may converge. Instability may. come after an overly large y because it generates two strong gradients pointing to opposite directions. from loss[1] which would demand more finicky optimization setting.."}, {"section_index": "11", "section_name": "LSUN AUGMENTED VERSION TRAINING", "section_text": "For LSUN bedroom dataset, aside from the experiment on the whole images, we also train an EBGAN auto-encoder model based on dataset augmentation by cropping patches. All the patches are of size 64 64 and cropped from 96 96 original images. The generation is shown in figure|11"}, {"section_index": "12", "section_name": "COMPARISON OF EBGANs AND EBGAN-PTs", "section_text": "To further demonstrate how the pull-away term (PT) may influence EBGAN auto-encoder model training, we chose both the whole-image and augmented-patch version of the LSUN bedroom dataset, together with the CelebA dataset to make some further experimentation. The compari son of EBGAN and EBGAN-PT generation are showed in figure|12] figure|13|and figure[14] Note\nDelving into the formulation of the discriminator loss made by equation 1] we suggest a numerical balance between its two terms which concern real and fake sample respectively. The second term is apparently bounded by [0, m] (assuming the energy function D(x) is. non-negative). It is desirable to make the first term bounded in a similar range. 
Figure 9: Generation from the EBGAN auto-encoder model trained with different m settings. From top to bottom, m is set to 1, 2, 4, 6, 8, 12, 16, 32 respectively. The rest of the setting is nLayerG=5, nLayerD=2, sizeG=1600, sizeD=1024, dropoutD=0, optimD=ADAM, optimG=ADAM, lr=0.001.

Figure 10: Generation from the EBGAN-LN model. The displayed generations are obtained by the identical experimental setting described in appendix D, with different random seeds. As we mentioned before, we used the unpadded version of the MNIST dataset (size 28 x 28) in the EBGAN-LN experiments."}, {"section_index": "11", "section_name": "LSUN AUGMENTED VERSION TRAINING", "section_text": "For the LSUN bedroom dataset, aside from the experiment on the whole images, we also train an EBGAN auto-encoder model based on dataset augmentation by cropping patches. All the patches are of size 64 x 64 and cropped from 96 x 96 original images. The generation is shown in figure 11."}, {"section_index": "12", "section_name": "COMPARISON OF EBGANs AND EBGAN-PTs", "section_text": "To further demonstrate how the pull-away term (PT) may influence EBGAN auto-encoder model training, we chose both the whole-image and augmented-patch versions of the LSUN bedroom dataset, together with the CelebA dataset, for further experimentation. The comparisons of EBGAN and EBGAN-PT generation are shown in figure 12, figure 13 and figure 14. Note that all comparison pairs adopt identical architectural and hyper-parameter settings as in section 4. The cost weight on the PT is set to 0.1.

Figure 11: Generation from the augmented-patch version of the LSUN bedroom dataset. Left (a): DCGAN generation. Right (b): EBGAN-PT generation.

Figure 12: Generation from the whole-image version of the LSUN bedroom dataset. Left (a): EBGAN. Right (b): EBGAN-PT.

Figure 13: Generation from the augmented-patch version of the LSUN bedroom dataset. Left (a): EBGAN. Right (b): EBGAN-PT.

Figure 14: Generation from the CelebA dataset. Left (a): EBGAN. Right (b): EBGAN-PT."}]
SJQNqLFgl
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Many recent articles discuss new architectures for neural networking, especially regarding Residual. Networks (He et al.(2015] 2016); Larsson et al.(2016);Zhang et al.(2016);Huang et al.(2016b)) Although the literature covers a wide range of network architectures, we take a high-level view of the architectures as the basis for discovering universal principles of the design of network architecture. We discuss 14 original design patterns that could benefit inexperienced practitioners who seek to. incorporate deep learning in various new applications. This paper addresses the current lack of guidance on design, a deficiency that may prompt the novice to rely on standard architecture, e.g.,. Alexnet, regardless of the architecture's suitability to the application at hand..\nThis abundance of research is also an opportunity to determine which elements provide benefits in. what specific contexts. We ask: Do universal principles of deep network design exist? Can these principles be mined from the collective knowledge on deep learning? which architectural choices. work best in any given context? Which architectures or parts of architectures seem elegant?.\nDesign patterns were first described by Christopher Alexander (Alexander(1979)) in regards tc the architectures of buildings and towns. Alexander wrote of a timeless quality in architecture that \"lives\"' and this quality is enabled by building based on universal principles. The basis of design patterns is that they resolve a conflict of forces in a given context and lead to an equilibriun analogous to the ecological balance in nature. Design patterns are both highly specific, making them clear to follow, and flexible so they can be adapted to different environments and situations Inspired by Alexander's work, the \"gang of four\" (Gamma et al.(1995)) applied the concept o1 design patterns to the architecture of object-oriented software. This classic computer science bool describes 23 patterns that resolve issues prevalent in software design, such as \"requirements always change\"'. We were inspired by these previous works on architectures to articulate possible desigr patterns for convolutional neural network (CNN) architectures.\nDesign patterns provide universal guiding principles, and here we take the first steps to defining network design patterns. Overall, it is an enormous task to define design principles for all neural networks and all applications, so we limit this paper to CNNs and their canonical application of image classification. However, we recognize that architectures must depend upon the application by defining our first design pattern: Design Pattern 1: Architectural Structure follows the Application"}, {"section_index": "1", "section_name": "DEEP CONVOLUTIONAL NEURAL NETWORK DESIGN PATTERNS", "section_text": "Nicholay Topin\nUniversity of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore, MD 21250"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "(we leave the details of this pattern to future work). In addition, these principles allowed us to dis cover some gaps in the existing research and to articulate novel networks (i.e, Fractal of FractalNets Stagewise Boosting and Taylor Series networks) and units (i.e., freeze-drop-path). 
We hope the rules of thumb articulated here are valuable for both experienced and novice practitioners, and that our preliminary work serves as a stepping stone for others to discover and share additional deep learning design patterns."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "To the best of our knowledge, there has been little written recently to provide guidance and understanding on appropriate architectural choices.¹ The book "Neural Networks: Tricks of the Trade" (Orr & Müller, 2003) contains recommendations for network models but without reference to the vast amount of research in the past few years. Perhaps the closest to our work is Szegedy et al. (2015b), where those authors describe a few design principles based on their experiences.

Much research has studied neural network architectures, but we are unaware of a recent survey of the field. Unfortunately, we cannot do justice to this entire body of work, so we focus on recent innovations in convolutional neural network architectures and, in particular, on Residual Networks (He et al., 2015) and its recent family of variants. We start with Network In Networks (Lin et al., 2013), which describes a hierarchical network with a small network design repeatedly embedded in the overall architecture. Szegedy et al. (2015a) incorporated this idea into their Inception architecture. Later, these authors proposed modifications to the original Inception design (Szegedy et al., 2015b). A similar concept was contained in the multi-scale convolution architecture (Liao & Carneiro, 2015). In the meantime, Batch Normalization (Ioffe & Szegedy, 2015) was presented as a unit within the network that makes training faster and easier.

Before the introduction of Residual Networks, a few papers suggested skip connections. Skip connections were proposed by Raiko et al. (2012). Highway Networks (Srivastava et al., 2015) use a gating mechanism to decide whether to combine the input with the layer's output, and showed how these networks allowed the training of very deep networks. The DropIn technique (Smith et al., 2015; 2016) also trains very deep networks by allowing a layer's input to skip the layer. The concept of stochastic depth via a drop-path method was introduced by Huang et al. (2016b).

Residual Networks were introduced by He et al. (2015), where the authors describe their network that won the 2015 ImageNet Challenge. They were able to extend the depth of a network from tens to hundreds of layers and, in doing so, improve the network's performance. The authors followed up with another paper (He et al., 2016) where they investigate why identity mappings help and report results for a network with more than a thousand layers. The research community took notice of this architecture and many modifications to the original design were soon proposed.

The Inception-v4 paper (Szegedy et al., 2016) describes the impact of residual connections on their Inception architecture and compared these results with the results from an updated Inception design. The Resnet in Resnet paper (Targ et al., 2016) suggests a dual stream architecture. Veit et al. (2016) provided an understanding of Residual Networks as an ensemble of relatively shallow networks. These authors illustrated how these residual connections allow the input to follow an exponential number of paths through the architecture. At the same time, the FractalNet paper (Larsson et al., 2016) demonstrated training deep networks with a symmetrically repeating architectural pattern.
As described later, we found the symmetry introduced in their paper intriguing. In a similar vein, Convolutional Neural Fabrics (Saxena & Verbeek, 2016) introduces a three-dimensional network, where the usual depth through the network is the first dimension.

¹ After submission we became aware of an online book being written on deep learning design patterns at http://www.deeplearningpatterns.com

Wide Residual Networks (Zagoruyko & Komodakis, 2016) demonstrate that simultaneously increasing both depth and width leads to improved performance. In Swapout (Singh et al., 2016), each layer can be dropped, skipped, used normally, or combined with a residual. Deeply Fused Nets (Wang et al., 2016) proposes networks with multiple paths. In the Weighted Residual Networks paper (Shen & Zeng, 2016), the authors recommend a weighting factor for the output from the convolutional layers, which gradually introduces the trainable layers. Convolutional Residual Memory Networks (Moniz & Pal, 2016) proposes an architecture that combines a convolutional Residual Network with an LSTM memory mechanism. For Residual of Residual Networks (Zhang et al., 2016), the authors propose adding a hierarchy of skip connections where the input can skip a layer, a module, or any number of modules. DenseNets (Huang et al., 2016a) introduces a network where each module is densely connected; that is, the output from a layer is input to all of the other layers in the module. In the Multi-Residual paper (Abdi & Nahavandi, 2016), the authors propose expanding a residual block width-wise to contain multiple convolutional paths. Our Appendix A describes the close relationship between many of these Residual Network variants.

We reviewed the literature specifically to extract commonalities and reduce their designs down to fundamental elements that might be considered design patterns. It seemed clear to us that in reviewing the literature some design choices are elegant while others are less so. In particular, the patterns described in this paper are the following:

1. Architectural Structure follows the Application
2. Proliferate Paths
3. Strive for Simplicity
4. Increase Symmetry
5. Pyramid Shape
6. Over-train
7. Cover the Problem Space
8. Incremental Feature Construction
9. Normalize Layer Inputs
10. Input Transition
11. Available Resources Guide Layer Widths
12. Summation Joining
13. Down-sampling Transition
14. Maxout for Competition

Several researchers have pointed out that the winners of the ImageNet Challenge (Russakovsky et al., 2015) have successively used deeper networks (as seen in Krizhevsky et al. (2012), Szegedy et al. (2015a), Simonyan & Zisserman (2014), He et al. (2015)). It is also apparent to us from the ImageNet Challenge that multiplying the number of paths through the network is a recent trend that is illustrated in the progression from Alexnet to Inception to ResNets. For example, Veit et al. (2016) show that ResNets can be considered to be an exponential ensemble of networks with different lengths. Design Pattern 2: Proliferate Paths is based on the idea that ResNets can be an exponential ensemble of networks with different lengths. One proliferates paths by including a multiplicity of branches in the architecture. Recent examples include FractalNet (Larsson et al., 2016), Xception (Chollet, 2016), and Decision Forest Convolutional Networks (Ioannou et al., 2016).

Scientists have embraced simplicity/parsimony for centuries. Simplicity was exemplified in the
paper '\"Striving for Simplicity\" (Springenberg et al.2014) by achieving state-of-the-art results with. fewer types of units. Design Pattern 3: Strive for Simplicity suggests using fewer types of units. and keeping the network as simple as possible. We also noted a special degree of elegance in the FractalNet (Larsson et al.2016) design, which we attributed to the symmetry of its structure.. Design Pattern 4: Increase Symmetry is derived from the fact that architectural symmetry is typically considered a sign of beauty and quality. In addition to its symmetry, FractalNets also adheres to the. Proliferate Paths design pattern so we used it as the baseline of our experiments in Section|4.\nAn essential element of design patterns is the examination of trade-offs in an effort to understand the relevant forces. One fundamental trade-off is the maximization of representational power versus\nAnother trade-off in deep learning is training accuracy versus the ability of the network to generalize. to non-seen cases. The ability to generalize is an important virtue of deep neural networks. Reg. ularization is commonly used to improve generalization, which includes methods such as dropout (Srivastava et al.2014a) and drop-path (Huang et al.2016b). As noted bySrivastava et al.2014b dropout improves generalization by injecting noise in the architecture. We believe regularization. techniques and prudent noise injection during training improves generalization (Srivastava et al. 2014b] Gulcehre et al.2016). Design Pattern 6: Over-train includes any training method where. the network is trained on a harder problem than necessary to improve generalization performance of inference. Design Pattern 7: Cover the Problem Space with the training data is another way to improve generalization (e.g., Ratner et al.2016||Hu et al. 2016||Wong et al.2016||Johnson-Roberson et al. 2016). Related to regularization methods, cover the problem space includes the use of noise. (Rasmus et al.2015]|Krause et al.2015 |Pezeshki et al.2015), synthetic data, and data augmentation such as random cropping, flipping, and varying brightness, contrast, and the like.."}, {"section_index": "4", "section_name": "3.2 DETAILED ARCHITECTURE DESIGN", "section_text": "A common thread throughout many of the more successful architectures is to make each layer \"job\" easier. Use of very deep networks is an example because any single layer only needs t. incrementally modify the input, and this partially explains the success of Residual Networks, since. in very deep networks, a layer's output is likely similar to the input; hence adding the input to th. layer's output makes the layer's job incremental. Also, this concept is part of the motivation behin. design pattern 2 but it extends beyond that. Design Pattern 8: Incremental Feature Constructioi. recommends using short skip lengths in ResNets. A recent paper (Alain & Bengio(2016)) showec. in an experiment that using an identity skip length of 64 in a network of depth 128 led to the firs. portion of the network not being trained.\nDesign Pattern 9: Normalize Layer Inputs is another way to make a layer's job easier. Normalization of layer inputs has been shown to improve training and accuracy but the underlying reasons are not clear (Ioffe & Szegedy 2015]Ba et al. 2016] Salimans & Kingma 2016). The Batch Normalization paper (Ioffe & Szegedy 2015) attributes the benefits to handling internal covariate shift, while the. authors of streaming normalization (Liao et al. 
2016) express that it might be otherwise. We feel that normalization puts all the layer's input samples on more equal footing (analogous to a units conversion scaling), which allows back-propagation to train more effectively.

Some research, such as Wide ResNets (Zagoruyko & Komodakis, 2016), has shown that increasing the number of channels improves performance, but there are additional costs with extra channels. The input data for many of the benchmark datasets have 3 channels (i.e., RGB). Design Pattern 10: Input Transition is based on the common occurrence that the output from the first layer of a CNN significantly increases the number of channels from 3. A few examples of this increase in channels/outputs at the first layer for ImageNet are AlexNet (96 channels), Inception (32), VGG (224), and ResNets (64). Intuitively it makes sense to increase the number of channels from 3 in the first layer, as it allows the input data to be examined many ways, but it is not clear how many outputs are best. Here, the trade-off is that of cost versus accuracy. Costs include the number of parameters in the network, which directly affects the computational and storage costs of training and inference. Design Pattern 11: Available Resources Guide Layer Widths is based on balancing costs against an application's requirements. Choose the number of outputs of the first layer based on memory and computational resources and desired accuracy."}, {"section_index": "5", "section_name": "3.2.1 JOINING BRANCHES: CONCATENATION, SUMMATION/MEAN, MAXOUT", "section_text": "When there are multiple branches, three methods have been used to combine the outputs: concatenation, summation (or mean), or Maxout. It seems that different researchers have their favorites, and there hasn't been any motivation for using one in preference to another. In this Section, we propose some rules for deciding how to combine branches.

Summation is one of the most common ways of combining branches. Design Pattern 12: Summation Joining is where the joining is performed by summation/mean. Summation is the preferred joining mechanism for Residual Networks because it allows each branch to compute corrective terms (i.e., residuals) rather than the entire approximation. The difference between summation and mean (i.e., fractal-join) is best understood by considering drop-path (Huang et al., 2016b). In a Residual Network where the input skip connection is always present, summation causes the layers to learn the residual (the difference from the input). On the other hand, in networks with several branches, where any branch can be dropped (e.g., FractalNet (Larsson et al., 2016)), using the mean is preferable, as it keeps the output smooth as branches are randomly dropped.

Some researchers seem to prefer concatenation (e.g., Szegedy et al. (2015a)). Design Pattern 13: Down-sampling Transition recommends using concatenation joining for increasing the number of outputs when pooling. That is, when down-sampling by pooling or using a stride greater than 1, a good way to combine branches is to concatenate the output channels, hence smoothly accomplishing both joining and the increase in the number of channels that typically accompanies down-sampling.
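As a concrete illustration of Design Patterns 12 and 13, the following sketch (ours, in PyTorch; names are illustrative) contrasts summation/mean joining of same-shape branches with concatenation joining at a down-sampling transition:

import torch

def summation_join(branches):
    # Design Pattern 12: same-shape branches; each contributes a corrective
    # (residual-style) term. Using the mean instead gives the fractal-join,
    # which stays smooth when branches are randomly dropped.
    return torch.stack(branches, dim=0).sum(dim=0)

def downsampling_join(branches):
    # Design Pattern 13: at a pooling/stride transition, concatenate along
    # the channel axis, joining branches and widening the network at once.
    return torch.cat(branches, dim=1)

x = torch.randn(8, 64, 32, 32)                   # NCHW activations
joined = summation_join([x, x, x])               # -> (8, 64, 32, 32)
pooled = torch.nn.functional.max_pool2d(x, 2)    # -> (8, 64, 16, 16)
wider = downsampling_join([pooled, pooled])      # -> (8, 128, 16, 16)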
Maxout has been used for competition, as in locally competitive networks (Srivastava et al., 2014b) and competitive multi-scale networks (Liao & Carneiro, 2015). Design Pattern 14: Maxout for Competition is based on Maxout choosing only one of the activations, which is in contrast to summation or mean, where the activations are "cooperating"; here, there is a "competition" with only one "winner". For example, when each branch is composed of different sized kernels, Maxout is useful for incorporating scale invariance in an analogous way to how max pooling enables translation invariance."}, {"section_index": "6", "section_name": "4.1 ARCHITECTURAL INNOVATIONS", "section_text": "In elucidating these fundamental design principles, we also discovered a few architectural innovations. In this section we will describe these innovations.

First, we recommended combining summation/mean, concatenation, and maxout joining mechanisms with differing roles within a single architecture, rather than the typical situation where only one is used throughout. Next, Design Pattern 2: Proliferate Paths led us to modify the overall sequential pattern of modules in the FractalNet architecture. Instead of lining up the modules for maximum depth, we arranged the modules in a fractal pattern as shown in Figure 1b, which we named a Fractal of FractalNet (FoF) network, where we exchange depth for a greater number of paths."}, {"section_index": "7", "section_name": "4.1.1 FREEZE-DROP-PATH AND STAGEWISE BOOSTING NETWORKS (SBN)", "section_text": "Drop-path was introduced by Huang et al. (2016b); it works by randomly removing branches during an iteration of training, as though that path doesn't exist in the network. Symmetry considerations led us to an opposite method that we named freeze-path. Instead of removing a branch from the network during training, we freeze the weights, as though the learning rate was set to zero. A similar idea has been proposed for recurrent neural networks (Krueger et al., 2016).

The potential usefulness of combining drop-path and freeze-path, which we named freeze-drop-path, is best explained in the non-stochastic case. Figure 1 shows an example of a fractal of FractalNet architecture. Let's say we start training only the leftmost branch in Figure 1b and use drop-path on all of the other branches. This branch should train quickly since it has only relatively few parameters compared to the entire network. Next we freeze the weights in that branch and allow the next branch to the right to be active. If the leftmost branch is providing a good function approximation, the next branch works to produce a "small" corrective term. Since the next branch contains more layers than the previous branch and the corrective term should be easier to approximate than the original function, the network should attain greater accuracy. One can continue this process from left to right to train the entire network. We used freeze-drop-path as the final/bottom join in the FoF architecture in Figure 1b and named this the Stagewise Boosting Networks (SBN) because they are analogous to stagewise boosting (Friedman et al., 2001). The idea of boosting neural networks is not new (Schwenk & Bengio, 2000), but this architecture is new. In Appendix B we discuss the implementation we tested.
Figure 1: (a) The FractalNet module and (b) the FoF architecture. (The figure's legend distinguishes convolutional, pooling, prediction, and joining layers, and marks the FractalNet module.)

Since neural networks are also function approximators, it is a short leap from FoFs and SBNs to consider the branches of that network as terms in a Taylor series expansion. Hence, the Taylor series implies squaring the second branch before the summation joining unit, analogous to the second order term in the expansion. Similarly, the third branch would be cubed. We call this "Taylor Series Networks" (TSN), and there is precedence for this idea in the literature with polynomial networks (Livni et al., 2014) and multiplication in networks (e.g., Lin et al., 2015). The implementation details of this TSN are discussed in the Appendix.

Table 1: Comparison of test accuracy results at the end of the training, averaged over three runs.

Architecture               CIFAR-10 (%)    CIFAR-100 (%)
FractalNet                 93.28 ± 0.22    72.39 ± 0.10
FractalNet + Concat        93.05 ± 0.05    72.57 ± 0.31
FractalNet + Maxout        91.99 ± 0.33    69.98 ± 0.24
FractalNet + Avg pooling   94.25 ± 0.09    72.81 ± 0.52
FoF                        92.31 ± 0.25    72.81 ± 0.28
SBN                        90.69 ± 0.62    68.70 ± 0.86
TSN                        90.70 ± 0.04    68.36 ± 0.10

Figure 2: Test accuracy of the original FractalNet compared with replacing some of the fractal-joins with Concatenation or Maxout, and when replacing max pooling with average pooling. Left (a): CIFAR-10. Right (b): CIFAR-100.

Figure 3: Test accuracy of the original FractalNet compared with the FoF, SBN, and TSN networks. Left (a): CIFAR-10. Right (b): CIFAR-100.

The experiments in this section are primarily to empirically validate the architectural innovations described above, but not to fully test them. We leave a more complete evaluation to future work.

Table 1 and Figures 2 and 3 compare the final test accuracy results for CIFAR-10 and CIFAR-100 in a number of experiments. An accuracy value in Table 1 is computed as the mean of the last 6 test
The results in Figure2lare a. bit worse, indicating that the more cooperative fractal-join (i.e., mean/summation) with 3x3 kernels. has better performance than the competitive Maxout with multiple scales. Figure2|also illustrates. how an experiment replacing max pooling with average pooling throughout the architecture changes. the training profile. For CIFAR-10, the training accuracy rises quickly, plateaus, then lags behind. the original FractalNet but ends with a better final performance, which implies that average pool. ing might significantly reduce the length of the training (we plan to examine this in future work). This behavior provides some evidence that \"cooperative\"' average pooling might be preferable to. \"competitive\" max pooling\nTable[1and Figure3|compare the test accuracy results for the architectural innovations described ir Section 4.1 The FoF architecture ends with a similar final test accuracy as FractalNet but the SBN and TSN architectures (which use freeze-drop-path) lag behind when the learning rate is dropped However, it is clear from both Figures 3a|and 3b|that the new architectures train more quickly thar. FractalNet. The FoF network is best as it trains more quickly than FractalNet and achieves simila accuracy. The use of freeze-drop-path in SBN and TSN is questionable since the final performance. lags behind FractalNet, but we are leaving the exploration for more suitable applications of these new architectures for future work.."}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "In this paper we describe convolutional neural network architectural design patterns that we discov. ered by studying the plethora of new architectures in recent deep learning papers. We hope these. design patterns will be useful to both experienced practitioners looking to push the state-of-the-ari and novice practitioners looking to apply deep learning to new applications. There exists a large. expanse of potential follow up work (some of which we have indicated here as future work). Oui. effort here is primarily focused on Residual Networks for classification but we hope this preliminary work will inspire others to follow up with new architectural design patterns for Recurrent Neural Networks, Deep Reinforcement Learning architectures, and beyond.."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors want to thank the numerous researchers upon whose work these design patterns are based and especially Larsson et al. 2016 for making their code publicly available. This work was supported by the US Naval Research Laboratory base program.."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifie probes. arXiv preprint arXiv:1610.01644, 2016\nJimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprini arXiv:1607.06450, 2016.\nJerome Friedman, Trevor Hastie, and Robert Tibshirani. The elements of statistical learning, vo ume 1. Springer series in statistics Springer, Berlin, 2001.\nErich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design patterns: elements oj reusable obiect-oriented software. Pearson Education India. 1995\nCaglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions arXiv preprint arXiv:1603.00391, 2016\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. 
arXiv preprint arXiv:1512.03385, 2015.

Dongyoon Han, Jiwhan Kim, and Junmo Kim. Deep pyramidal residual networks. arXiv preprint arXiv:1610.02915, 2016.

Yani Ioannou, Duncan Robertson, Darko Zikic, Peter Kontschieder, Jamie Shotton, Matthew Brown, and Antonio Criminisi. Decision forests, convolutional networks and the models in-between. arXiv preprint arXiv:1603.01250, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, and Li Fei-Fei. The unreasonable effectiveness of noisy data for fine-grained recognition. arXiv preprint arXiv:1511.06789, 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pp. 855-863, 2014.

Genevieve B Orr and Klaus-Robert Müller. Neural networks: tricks of the trade. Springer, 2003.

Mohammad Pezeshki, Linxi Fan, Philemon Brakel, Aaron Courville, and Yoshua Bengio. Deconstructing the ladder network architecture. arXiv preprint arXiv:1511.06430, 2015.

Tapani Raiko, Harri Valpola, and Yann LeCun. Deep learning made easier by linear transformations in perceptrons. In AISTATS, volume 22, pp. 924-932, 2012.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546-3554, 2015.

Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, and Christopher Ré. Data programming: Creating large training sets, quickly. arXiv preprint arXiv:1605.07723, 2016.

Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Leslie N Smith, Emily M Hand, and Timothy Doster. Gradual dropin of layers to train very deep neural networks. arXiv preprint arXiv:1511.06951, 2015.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014a.

Rupesh Kumar Srivastava, Jonathan Masci, Faustino Gomez, and Jürgen Schmidhuber. Understanding locally competitive networks. arXiv preprint arXiv:1410.1165, 2014b.

Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In Advances in Neural Information Processing Systems, pp. 2377-2385, 2015.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015a.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015b.

Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke.
Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.

Andreas Veit, Michael Wilber, and Serge Belongie. Residual networks are exponential ensembles of relatively shallow networks. arXiv preprint arXiv:1605.06431, 2016.

Jingdong Wang, Zhen Wei, Ting Zhang, and Wenjun Zeng. Deeply-fused nets. arXiv preprint arXiv:1605.07716, 2016.

Sebastien C Wong, Adam Gatt, Victor Stamatescu, and Mark D McDonnell. Understanding data augmentation for classification: when to warp? arXiv preprint arXiv:1609.08764, 2016.

Ke Zhang, Miao Sun, Tony X Han, Xingfang Yuan, Liru Guo, and Tao Liu. Residual networks of residual networks: Multilevel residual networks. arXiv preprint arXiv:1608.02908, 2016."}, {"section_index": "11", "section_name": "RELATIONSHIPS BETWEEN RESIDUAL ARCHITECTURES", "section_text": "The architectures mentioned in Section 2 commonly combine outputs from two or more layers using concatenation along the depth axis, element-wise summation, and element-wise average. We show here that the latter two are special cases of the former with weight-sharing enforced. Likewise, we show that skip connections can be considered as introducing additional layers into a network that share parameters with existing layers. In this way, any of the Residual Network variants can be reformulated into a standard form where many of the variants are equivalent.

A filter has three dimensions: two spatial dimensions, along which convolution occurs, and a third dimension, depth. Each input channel corresponds to a different depth for each filter of a layer. As a result, a filter can be considered to consist of "slices", each of which is convolved over one input channel. The results of these convolutions are then added together, along with a bias, to produce a single output channel. The output channels of multiple filters are concatenated to produce the output of a single layer. When the outputs of several layers are concatenated, the behavior is similar to that of a single layer. However, instead of each filter having the same spatial dimensions, stride, and padding, each filter may have a different structure. As far as the function within a network, though, the two cases are the same. In fact, a standard layer, one where all filters have the same shape, can be considered a special case of concatenating outputs of multiple layer types.

If summation is used instead of concatenation, the network can be considered to perform concatenation but enforce weight-sharing in the following layer. The result of first summing several channels element-wise and then convolving a filter slice over the output is the same as convolving the slice over each of the channels and then performing an element-wise summation afterwards. Therefore, enforcing weight-sharing such that the filter slices applied to the nth channel of all inputs share weights results in behavior identical to summation, but in a form similar to concatenation, which highlights the relationship between the two. When Batch Normalization (BN) (Ioffe & Szegedy, 2015) is used, as is the current standard practice, performing an average is essentially identical to performing a summation, since BN scales the output. Therefore, scaling the input by a constant (i.e., averaging instead of summing) is rendered irrelevant. The details of architecture-specific manipulations of summations and averages are described further in Section 3.2.1.

Due to the ability to express depth-wise concatenation, element-wise sum, and element-wise mean as variants of each other, architectural features of recent works can be combined within a single network, regardless of the choice of combining operation. However, this is not to say that concatenation has the most expressivity and is therefore strictly better than the others. Summation allows networks to divide up the network's task.
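A quick numerical check (ours, in NumPy) of the equivalence argued above: summing channels element-wise and then convolving one filter slice gives the same result as convolving the shared slice over each channel and summing afterwards:

import numpy as np

def conv2d(img, k):
    # plain 'valid' cross-correlation of one channel with one filter slice
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

rng = np.random.default_rng(0)
x, y = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
k = rng.normal(size=(3, 3))
lhs = conv2d(x + y, k)               # summation join, then one shared slice
rhs = conv2d(x, k) + conv2d(y, k)    # shared slice per branch, then sum
assert np.allclose(lhs, rhs)         # identical by linearity of convolution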
Also, there is a trade-off between the number of parameters and the expressivity of a layer; summation uses weight-sharing to significantly reduce the number of parameters within a layer at the expense of some amount of expressivity.

Different architectures can further be expressed in a similar fashion through changes in the connections themselves. A densely connected series of layers can be "pruned" to resemble any desired architecture with skip connections through zeroing specific filter slices. This operation removes the dependency of the output on a specific input channel; if this is done for all channels from a given layer, the connection between the two layers is severed. Likewise, densely connected layers can be turned into linearly connected layers while preserving the layer dependencies; a skip connection can be passed through the intermediate layers. A new filter can be introduced for each input channel passing through, where the filter performs the identity operation for the given input channel. All existing filters in the intermediate layers can have zeroed slices for this input so as to not introduce new dependencies. In this way, arbitrarily connected layers can be turned into a standard form.

We certainly do not recommend this representation for actual experimentation, as it introduces fixed parameters. We merely describe it to illustrate the relationship between different architectures. This representation illustrates how skip connections effectively enforce specific weights in intermediate layers. Though this restriction reduces expressivity, the number of stored weights is reduced, the number of computations performed is decreased, and the network might be more easily trainable."}, {"section_index": "12", "section_name": "B.1 ARCHITECTURES", "section_text": "We started with the FractalNet implementation² as our baseline; it is described in Larsson et al. (2016). We used the three column module as shown in Figure 1a. In some of our experiments, we replaced the fractal-join with concatenation at the downsampling locations. In other experiments, we modified the kernel sizes in module one and combined the branches with Maxout. A FractalNet module is shown in Figure 1a and the architecture consists of five sequential modules.

Our fractal of FractalNet (FoF) architecture uses the same module but has an overall fractal design, as in Figure 1b, rather than the original sequential one. We limited our investigation to this one realization and left the study of other (possibly more complex) designs for future work. We followed the FractalNet implementation in regards to dropout, where the dropout rates for a module were 0%, 10%, 20%, or 30%, depending on the depth of the module in the architecture. This choice of dropout rates was not found by experimentation and better values are possible. The local drop-path rate in the fractal-joins was fixed at 15%, which is identical to the FractalNet implementation.

Freeze-drop-path introduces four new parameters. The first is whether the active branch is chosen stochastically or deterministically. If it is chosen stochastically, a random number is generated and the active branch is assigned based on which interval it falls in (intervals will be described shortly). If it is chosen deterministically, a parameter is set by the user as to the number of iterations in one cycle through all the branches (we called this parameter num_iter_per_cycle). In our Caffe implementation of the freeze-drop-path unit, the bottom input specified first is assigned as branch 1, the next is branch 2, then branch 3, etc. The next parameter indicates the proportion of iterations each branch should be active relative to all the other branches. The first type of interval uses the square of the branch number (i.e., 1, 4, 9, 16, ...) to assign the interval length for that branch to be active, which gives more update iterations to the higher-numbered branches. The next type gives the same number of iterations to each branch. Our experiments showed that the first interval type works better (as we expected) and was used to obtain the results in Section 4.2. In addition, our experiments showed that the stochastic option works better than the deterministic option (unexpected) and was used for the Section 4.2 results.
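As a sketch of the stochastic interval scheme just described (ours, in plain Python — our Caffe layer is not reproduced here), branch k is chosen with probability proportional to k², and only the chosen branch's weights are updated in that iteration:

import random

def pick_active_branch(num_branches):
    # Interval lengths 1, 4, 9, ... (square of the branch number), so the
    # higher-numbered branches receive more update iterations.
    weights = [(k + 1) ** 2 for k in range(num_branches)]
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for k, w in enumerate(weights):
        acc += w
        if r <= acc:
            return k
    return num_branches - 1

# Per training iteration: all branches except the active one are "frozen"
# (their weights receive no update), e.g. by zeroing their learning rate.
active = pick_active_branch(3)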
² https://github.com/gustavla/fractalnet/tree/master/caffe

Our implementations are in Caffe (Jia et al., 2014; downloaded October 9, 2016). These experiments were run on a 64 node cluster with 8 Nvidia Titan Black GPUs, 128 GB memory, and dual Intel Xeon E5-2620 v2 CPUs per node. We used the CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton, 2009) for our classification tests. These datasets consist of 60,000 32x32 colour images (50,000 for training and 10,000 for testing) in 10 or 100 classes, respectively. Our Caffe code and prototxt files are publicly available at https://github.com/iPhysicist/CNNDesignPatterns.

The Stagewise Boosting Network's (SBN) architecture is the same as the FoF architecture except that branches 2 and 3 are combined with a fractal-join and then combined with branch 1 in a freeze-drop-path join. The reason for combining branches 2 and 3 came out of our first experiments: if branches 2 and 3 were separate, the performance deteriorated when branch 2 was frozen and branch 3 was active. In hindsight, this is due to the weights in the branch 2 path that are also in branch 3's path being modified by the training of branch 3. The Taylor series network has the same architecture as SBN with the addition of squaring the branch 2 and 3 combined activations before the freeze-drop-path join.

For all of our experiments, we trained for 400 epochs. Since the training used 8 GPUs and each GPU had a batchsize of 25, 400 epochs amounted to 100,000 iterations. We adopted the same learning rate as the FractalNet implementation, which started at 0.002 and dropped the learning rate by a factor of 10 at epochs 200, 300, and 350."}]
B1kJ6H9ex
[{"section_index": "0", "section_name": "COMBINING POLICY GRADIENT AND O-LEARNING", "section_text": "Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu & Volodymyr Mnih Deenmind\nbodonoghue, munos, korayk, vmnih}@google.com\nPolicy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL' for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regular- ized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate im- proved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In reinforcement learning an agent explores an environment and through the use of a reward signa learns to optimize its behavior to maximize the expected long-term return. Reinforcement learning has seen success in several areas including robotics (Lin|1993, Levine et al.2015), computer games (Mnih et al.]2013} 2015), online advertising (Pednault et al. 2002), board games (Tesauro1995 Silver et al.[2016), and many others. For an introduction to reinforcement learning we refer to the classic text bySutton & Barto(1998). In this paper we consider model-free reinforcement learning where the state-transition function is not known or learned. There are many different algorithms fo model-free reinforcement learning, but most fall into one of two families: action-value fitting anc policy gradient techniques.\nAction-value techniques involve fitting a function, called the Q-values, that captures the expecte return for taking a particular action at a particular state, and then following a particular policy there. after. Two alternatives we discuss in this paper are SARSA (Rummery & Niranjan1994) an. Q-learning (Watkins1989), although there are many others. SARSA is an on-policy algorithn. whereby the action-value function is fit to the current policy, which is then refined by being mostl. greedy with respect to those action-values. On the other hand, Q-learning attempts to find the Q. values associated with the optimal policy directly and does not fit to the policy that was used t generate the data. Q-learning is an off-policy algorithm that can use data generated by another agen or from a replay buffer of old experience. Under certain conditions both SARSA and Q-learning ca. be shown to converge to the optimal Q-values, from which we can derive the optimal policy (Sutton. 
1988; Bertsekas & Tsitsiklis, 1996).

In policy gradient techniques the policy is represented explicitly and we improve the policy by updating the parameters in the direction of the gradient of the performance (Sutton et al., 1999; Silver et al., 2014; Kakade, 2001). Online policy gradient typically requires an estimate of the action-value function of the current policy. For this reason these methods are often referred to as actor-critic methods, where the actor refers to the policy and the critic to the estimate of the action-value function (Konda & Tsitsiklis, 2003). Vanilla actor-critic methods are on-policy only, although some attempts have been made to extend them to off-policy data (Degris et al., 2012; Levine & Koltun, 2013).

In this paper we derive a link between the Q-values induced by a policy and the policy itself when the policy is the fixed point of a regularized policy gradient algorithm (where the gradient vanishes). This connection allows us to derive an estimate of the Q-values from the current policy, which we can refine using off-policy data and Q-learning. We show in the tabular setting that when the regularization penalty is small (the usual case) the resulting policy is close to the policy that would be found without the addition of the Q-learning update. Separately, we show that regularized actor-critic methods can be interpreted as action-value fitting methods, where the Q-values have been parameterized in a particular way. We conclude with some numerical examples that provide empirical evidence of improved data efficiency and stability of PGQL."}, {"section_index": "3", "section_name": "1.1 PRIOR WORK", "section_text": "Here we highlight various axes along which our work can be compared to others. In this paper we use entropy regularization to ensure exploration in the policy, which is a common practice in policy gradient (Williams & Peng, 1991; Mnih et al., 2016).
An alternative is to use KL-divergence instead of entropy as a regularizer, or as a constraint on how much deviation is permitted from a prior policy (Bagnell & Schneider, 2003; Peters et al., 2010; Schulman et al., 2015; Fox et al., 2015). Natural policy gradient can also be interpreted as putting a constraint on the KL-divergence at each step of the policy improvement (Amari, 1998; Kakade, 2001; Pascanu & Bengio, 2013). In Sallans & Hinton (2004) the authors use a Boltzmann exploration policy over estimated Q-values, which they update using TD-learning. In Heess et al. (2012) this was extended to use an actor-critic algorithm instead of TD-learning; however, the two updates were not combined as we have done in this paper. In Azar et al. (2012) the authors develop an algorithm called dynamic policy programming, whereby they apply a Bellman-like update to the action-preferences of a policy, which is similar in spirit to the update we describe here. In Norouzi et al. (2016) the authors augment a maximum likelihood objective with a reward in a supervised learning setting, and develop a connection that resembles the one we develop here between the policy and the Q-values. Other works have attempted to combine on and off-policy learning, primarily using action-value fitting methods (Wang et al., 2013; Hausknecht & Stone, 2016; Lehnert & Precup, 2015), with varying degrees of success. In this paper we establish a connection between actor-critic algorithms and action-value learning algorithms. In particular we show that TD-actor-critic (Konda & Tsitsiklis, 2003) is equivalent to expected-SARSA (Sutton & Barto, 1998, Exercise 6.10) with Boltzmann exploration where the Q-values are decomposed into advantage function and value function. The algorithm we develop extends actor-critic with a Q-learning style update that, due to the decomposition of the Q-values, resembles the update of the dueling architecture (Wang et al., 2016). Recently, the field of deep reinforcement learning, i.e., the use of deep neural networks to represent action-values or a policy, has seen a lot of success (Mnih et al., 2015; 2016; Silver et al., 2016; Riedmiller, 2005; Lillicrap et al., 2015; Van Hasselt et al., 2016). In the examples section we use a neural network with PGQL to play the full suite of Atari games.

We consider the infinite horizon, discounted, finite state and action space Markov decision process, with state space S, action space A and rewards at each time period denoted by r_t ∈ R. A policy π : S × A → R_+ is a mapping from state-action pair to the probability of taking that action at that state, so it must satisfy ∑_{a∈A} π(s, a) = 1 for all states s ∈ S. Any policy π induces a probability distribution over visited states, d^π : S → R_+ (which may depend on the initial state), so the probability of seeing state-action pair (s, a) ∈ S × A is d^π(s)π(s, a).

In reinforcement learning an 'agent' interacts with an environment over a number of time steps. At each time step t the agent receives a state s_t and a reward r_t and selects an action a_t from the policy π_t, at which point the agent moves to the next state s_{t+1} ~ P(·, s_t, a_t), where P(s', s, a) is the probability of transitioning from state s to state s' after taking action a. This continues until the agent encounters a terminal state (after which the process is typically restarted). The goal of the agent is to find a policy π that maximizes the expected total discounted return J(π) = E(∑_{t≥0} γ^t r_t | π), where the expectation is with respect to the initial state distribution, the state-transition probabilities and the policy, and where γ ∈ (0, 1) is the discount factor that, loosely speaking, controls how much the agent prioritizes long-term versus short-term rewards. Since the agent starts with no knowledge of the environment, it must explore to discover which actions lead to high returns.

Action-values. The action-value, or Q-value, of a particular state under policy π is the expected total discounted return from taking that action at that state and following π thereafter, i.e., Q^π(s, a) = E(∑_{t≥0} γ^t r_t | s_0 = s, a_0 = a, π). The value of state s under policy π is denoted by V^π(s) = E(∑_{t≥0} γ^t r_t | s_0 = s, π), which is the expected total discounted return of policy π from state s. The optimal action-value function is denoted Q* and satisfies Q*(s, a) = max_π Q^π(s, a) for each (s, a). The policy that achieves the maximum is the optimal policy π*, with value function V*. The advantage function is the difference between the action-value and the value function, i.e., A^π(s, a) = Q^π(s, a) − V^π(s), and represents the additional expected reward of taking action a over the average performance of the policy from state s. Since V^π(s) = ∑_a π(s, a)Q^π(s, a), we have the identity ∑_a π(s, a)A^π(s, a) = 0, which simply states that the policy π has no advantage over itself.

The Bellman operator for policy π is defined as

T^π Q(s, a) = E_{s', r, b} ( r(s, a) + γ Q(s', b) ),

where the expectation is over the next state s' ~ P(·, s, a), the reward r(s, a), and the action b from policy π at state s'. The Q-value function for policy π is the fixed point of the Bellman operator for π, i.e., T^π Q^π = Q^π.
The optimal Bellman operator T* is defined as

T* Q(s, a) = E_{s', r} ( r(s, a) + γ max_b Q(s', b) ),

where the expectation is over the next state s' ~ P(·, s, a) and the reward r(s, a). The optimal Q-value function is the fixed point of the optimal Bellman equation, i.e., T* Q* = Q*. Both the π-Bellman operator and the optimal Bellman operator are γ-contraction mappings in the sup-norm, i.e., ||TQ_1 − TQ_2||_∞ ≤ γ ||Q_1 − Q_2||_∞, for any Q_1, Q_2 ∈ R^{S×A}. From this fact one can show that the fixed point of each operator is unique, and that value iteration converges, i.e., (T^π)^k Q → Q^π and (T*)^k Q → Q* from any initial Q (Bertsekas, 2005)."}, {"section_index": "4", "section_name": "2.1 ACTION-VALUE LEARNING", "section_text": "In value based reinforcement learning we approximate the Q-values using a function approximator. We then update the parameters so that the Q-values are as close to the fixed point of a Bellman equation as possible. If we denote by Q(s, a; θ) the approximate Q-values parameterized by θ, then Q-learning updates the Q-values along direction E_{s,a} (T* Q(s, a; θ) − Q(s, a; θ)) ∇_θ Q(s, a; θ) and SARSA updates the Q-values along direction E_{s,a} (T^π Q(s, a; θ) − Q(s, a; θ)) ∇_θ Q(s, a; θ). In the online setting the Bellman operator is approximated by sampling and bootstrapping, whereby the Q-values at any state are updated using the Q-values from the next visited state. Exploration is achieved by not always taking the action with the highest Q-value at each time step. One common technique called 'epsilon greedy' is to sample a random action with probability ε > 0, where ε starts high and decreases over time. Another popular technique is 'Boltzmann exploration', where the policy is given by the softmax over the Q-values with a temperature T, i.e., π(s, a) = exp(Q(s, a)/T) / ∑_b exp(Q(s, b)/T), where it is common to decrease the temperature over time."}, {"section_index": "5", "section_name": "2.2 POLICY GRADIENT", "section_text": "Alternatively, we can parameterize the policy directly and attempt to improve it via gradient ascent on the performance J. The policy gradient theorem (Sutton et al., 1999) states that the gradient of J with respect to the parameters of the policy is given by

∇_θ J(π) = E_{s,a} ( Q^π(s, a) ∇_θ log π(s, a) ),    (1)

where the expectation is over (s, a) with probability d^π(s)π(s, a). In the original derivation of the policy gradient theorem the expectation is over the discounted distribution of states, i.e., over d^{π,s_0}(s) = ∑_{t≥0} γ^t Pr{s_t = s | s_0, π}. However, the gradient update in that case will assign a low weight to states that take a long time to reach and can therefore have poor empirical performance. In practice the non-discounted distribution of states is frequently used instead. In certain cases this is equivalent to maximizing the average (i.e., non-discounted) policy performance, even when Q^π uses a discount factor (Thomas, 2014). Throughout this paper we will use the non-discounted distribution of states.

In the online case it is common to add an entropy regularizer to the gradient in order to prevent the policy becoming deterministic. This ensures that the agent will explore continually. In that case the (batch) update becomes

Δθ ∝ E_{s,a} Q^π(s, a) ∇_θ log π(s, a) + α E_s ∇_θ H^π(s),    (2)

where H^π(s) = −∑_a π(s, a) log π(s, a) denotes the entropy of policy π, and α > 0 is the regularization penalty parameter. Throughout this paper we will make use of entropy regularization; however, many of the results are true for other choices of regularizers with only minor modification, e.g., KL-divergence.
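For concreteness, here is a tabular sketch (ours, in NumPy; all names are illustrative) of one batch step of the entropy-regularized update (2), with Q^π(s, a) replaced by a sampled return estimate, as discussed next:

import numpy as np

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

def pg_entropy_update(W, batch, alpha, lr):
    # W: |S| x |A| table of action preferences defining pi = softmax(W[s]).
    # batch: (s, a, q_hat) tuples, q_hat an estimate of Q^pi(s, a), e.g. an
    # observed discounted return.
    G = np.zeros_like(W)
    for s, a, q_hat in batch:
        pi = softmax(W[s])
        glog = -pi.copy()
        glog[a] += 1.0                            # grad of log pi(s, a) w.r.t. W[s]
        G[s] += q_hat * glog                      # policy gradient term
        H = -(pi * np.log(pi)).sum()
        G[s] += alpha * (-pi * (np.log(pi) + H))  # grad of the entropy H^pi(s)
    return W + lr * G / len(batch)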
Note that equation (2) requires exact knowledge of the Q-values. In practice. they can be estimated, e.g., by the sum of discounted rewards along an observed trajectory (Williams. 1992), and the policy gradient will still perform well (Konda & Tsitsiklis2003).."}, {"section_index": "6", "section_name": "REGULARIZED POLICY GRADIENT ALGORITHM", "section_text": "In this section we derive a relationship between the policy and the Q-values when using a regularized. policy gradient algorithm. This allows us to transform a policy into an estimate of the Q-values. We. then show that for small regularization the Q-values induced by the policy at the fixed point of the algorithm have a small Bellman error in the tabular case.."}, {"section_index": "7", "section_name": "3.1 TABULAR CASE", "section_text": "Consider the fixed points of the entropy regularized policy gradient update (2). Let us define f() - Es,a Q(s, a)Ve log (s, a) + Es VeH(s), and gs() = a (s, a) for each s. A fixed point is one where we can no longer update 0 in the direction of f(0) without violating one of the constraints gs() = 1, i.e., where f(0) is in the span of the vectors {Vegs()}. In other words, any fixed point must satisfy f(0) = s s Vegs(), where for each s the Lagrange multiplier s E R ensures that gs() = 1. Substituting in terms to this equation we obtain\nE (Q\"(s,a) -alog(s,a) - cs) Ve log(s,a) = 0 S, a\nwhere we have absorbed all constants into c E R|S|. Any solution to this equation is strictly positive element-wise since it must lie in the domain of the entropy function. In the tabular case is represented by a single number for each state and action pair and the gradient of the policy with respect to the parameters is the indicator function, i.e., e(t,b) (s, a) = 1(t,b)=(s,a). From this we obtain Q\"(s, a) log (s, a) cs = 0 for each s (assuming that the measure d\"(s) > 0) Multiplying by (a, s) and summing over a E A we get cs = aH*(s) + V(s). Substituting c into equation (3) we have the following formulation for the policy:\n(s,a) = exp(A*(s,a)/a - H*(s))\nQ\"(s,a) = A(s,a) + V(s) = a(log(s,a) + H\"(s)) + V(s)\n0 x E(Q*(s,a) - Q+(s,a))Velog(s,a) S,a\nfor all s E S and a E A. In other words, the policy at the fixed point is a softmax over the. advantage function induced by that policy, where the regularization parameter a can be interpreted as the temperature. Therefore, we can use the policy to derive an estimate of the Q-values,.\nsince the update is unchanged by per-state constant offsets. When the policy is parameterized as a. softmax, i.e., (s, a) = exp(W(s, a))/, exp W(s, b), the quantity W is sometimes referred to. as the action-preferences of the policy (Sutton & Bartol[1998, Chapter 6.6). Equation (4) states that the action preferences are equal to the Q-values scaled by 1/a, up to an additive per-state constant.\nminimize Es,a(q(s, a) a log (s, a)) subject to ).(s,a) =1, seS\nE(q(s,a) - a log(s,a) + cs)Ve log(s,a) = 0 S,a\nwhere c E R!S! is the Lagrange multiplier associated with the constraint that the policy sum to one at each state. Comparing this to equation (3), we see that if q = Q\" and the measure in the expectation is the same then they describe the same set of fixed points. This suggests an interpretation of the fixed points of the regularized policy gradient as a regression of the log-policy onto the Q-values. In the general case of using an approximation architecture we can interpret equation (3) as indicating that the error between Q* and Q\" is orthogonal to e. 
log for each i, and so cannot be reduced further by changing the parameters, at least locally. In this case equation (4) is unlikely to hold at a solution to (3), however with a good approximation architecture it may hold approximately, so that the we can derive an estimate of the Q-values from the policy using equation (5). We will use this estimate of the Q-values in the next section.\nThe previous section made a connection between regularized policy gradient and a regression onto the Q-values at the fixed point. In this section we go one step further, showing that actor-critic methods can be interpreted as action-value fitting methods, where the exact method depends on the choice of critic\nActor-critic methods. Consider an agent using an actor-critic method to learn both a policy and a value function V. At any iteration k, the value function Vk has parameters wk, and the policy is of the form\ns,a) = exp(Wk(s,a)/)/>* exp(Wk(s, )\nA0 x E 8ac(VeWk(s,a) ->`k(s,b)VeWk(s,b)) w x E 8acVwVk S,a S,a\nwhere Sac is the critic minus baseline term, which depends on the variant of actor-critic being use (see the remark below).\nAction-value methods. Compare this to the case where an agent is learning Q-values with a du. eling architecture (Wang et al.2016), which at iteration k is given by\nS = Yk (s,a)-) (s,b)Yk(s,b) + Vk(s) 6\n,a) = exp(Yk(s,a)/)/> exp(Yk(s, /a\nIn action value fitting methods at each iteration the parameters are updated to reduce some error where the update is given by\nA0 x E 8ax(VeYk(s,a) - >`(s,b)VeYk(s,b)), w x E 8axVwVk s,a s,a 6\nwhere day is the action-value error term and depends on which algorithm is being used (see th remark below).\nwhere is a probability distribution, Yk is parameterized by Ok, Vk is parameterized by wk, and the exploration policy is Boltzmann with temperature a, i.e.,\nEquivalence.The two policies (8) and (10) are identical if Wk = Yk for all k. Since X0 anc. Y0 can be initialized and parameterized in the same way, and assuming the two value functior. estimates are initialized and parameterized in the same way, all that remains is to show that th updates in equations (11) and (9) are identical. Comparing the two, and assuming that dac = da (see remark), we see that the only difference is that the measure is not fixed in (9), but is equa. to the current policy and therefore changes after each update. Replacing in (11) with make. the updates identical, in which case Wk = yk at all iterations and the two policies (8) and (10. are always the same. In other words, the slightly modified action-value method is equivalent t an actor-critic policy gradient method, and vice-versa (modulo using the non-discounted distribu tion of states, as discussed in 2.2). In particular, regularized policy gradient methods can be inter. preted as advantage function learning techniques (Baird III|1993), since at the optimum the quantit W(s, a) - , (s, b)W(s, b) = a(log (s, a) + H\" (s)) will be equal to the advantage functior. values in the tabular case.\nRemark. In SARSA (Rummery & Niranjan1994) we set dav = r(s, a) + yQ(s', b) - Q(s, a) where b is the action selected at state s', which would be equivalent to using a bootstrap critic in equation (6) where Q\"(s, a) = r(s, a) + Q(s',b). In expected-SARSA (Sutton & Barto. 1998 Exercise 6.10), (Van Seijen et al.[2009)) we take the expectation over the Q-values at the next state, so dav = r(s, a)+V (s')-Q(s, a). 
This is equivalent to TD-actor-critic (Konda & Tsitsiklis)2003) where we use the value function to provide the critic, which is given by Q\" = r(s, a) + yV(s'). In Q-learning (Watkins||1989) av = r(s, a) + y max Q(s', b) - Q(s, a), which would be equivalent to using an optimizing critic that bootstraps using the max Q-value at the next state, i.e., Q* (s, a) = r(s, a) + y max, Q\" (s', b). In REINFORCE the critic is the Monte Carlo return from that state on, i.e., Q\"(s, a) = (t=o 'rt | so = s, ao = a). If the return trace is truncated and a bootstrap is performed after n-steps, this is equivalent to n-step SARSA or n-step Q-learning, depending on the form of the bootstrap (Peng & Williams1996)."}, {"section_index": "8", "section_name": "3.4 BELLMAN RESIDUAL", "section_text": "In this section we show that T*Q - Q* -> 0 with decreasing regularization penalty Q, where is the policy defined by (4) and Q\" is the corresponding Q-value function, both of which are. functions of a. We shall show that it converges to zero by bounding the sequence below by zero. and above with a sequence that converges to zero. First, we have that T*Q\" TQ\" = Q since T* is greedy with respect to the Q-values. So T*Qa Q 0. Now, to bound from above. we need the fact that a(s, a) = exp(Q*(s, a)/a)/, exp(Q*(s,b)/) exp((Q*(s, a) maxc Q\" (s, c))/a). Using this we have\n0 T*Q*a(s,a) -Q*a(s,a) <A|ae\nfor all (s, a), and so the Bellman residual converges to zero with decreasing a. In other words, for small enough a (which is the regime we are interested in) the Q-values induced by the policy (4 will have a small Bellman residual. Moreover, this implies that lima->o Q\" = Q*, as one might exXpect."}, {"section_index": "9", "section_name": "4 PGQL", "section_text": "In this section we introduce the main contribution of the paper, which is a technique to combine pol icy gradient with Q-learning. We call our technique PGQL', for policy gradient and Q-learning. In the previous section we showed that the Bellman residual is small at the fixed point of a regularizec\n0 Es (s',b)Q\"a(s',b)) maxc ( Es a(s', b)(maxc -Q(s',b)) b7 Es exp( s',b*))/a)(maxcQ(s',c) - Q(s',b) Es fa(maxc Q*a Q(s',b))\npolicy gradient algorithm when the regularization penalty is sufficiently small. This suggests adding an auxiliary update where we explicitly attempt to reduce the Bellman residual as estimated from the policy, i.e., a hybrid between policy gradient and Q-learning\nWe first present the technique in a batch update setting, with a perfect knowledge of Q (i.e., a perfect critic). Later we discuss the practical implementation of the technique in a reinforcement learning setting with function approximation, where the agent generates experience from interacting. with the environment and needs to estimate a critic simultaneously with the policy..\nQ(s,a) = a(log(s,a) + H*(s)) + V(s)\nwhere V has parameters w and is not necessarily V as it was in equation (5). In (2) it was unneces. sary to estimate the constant since the update was invariant to constant offsets, although in practice it is often estimated for use in a variance reduction technique (Williams1992] Sutton et al.][1999)\nSince we know that at the fixed point the Bellman residual will be small for small a. we can conside. 
updating the parameters to reduce the Bellman residual in a fashion similar to Q-learning, i.e...\n0 x E(T*Q\"(s,a) -Q\"(s,a))Velog(s,a), w x E(T*Q\"(s,a) -Q\"(s,a))VwV(s) (13 This is Q-learning applied to a particular form of the Q-values, and can also be interpreted as ar actor-critic algorithm with an optimizing (and therefore biased) critic\nA0 x E(T*Q(s,a) - Q*(s,a))Velogn(s,a), w x E(T*Q*(s,a) -Q*(s,a))VwV(s S.C\nA0 (1-n)Es,a(Q\"_Q\")Ve log+nEs,a(T*Q\" Q)Velog\nhere n E [0, 1] is a weighting parameter that controls how much of each update we apply. In the. case where n = 0 the above scheme reduces to entropy regularized policy gradient. If n = 1 ther it becomes a variant of (batch) Q-learning with an architecture similar to the dueling architecture (Wang et al.]2016). Intermediate values of n produce a hybrid between the two. Examining the. update we see that two error terms are trading off. The first term encourages consistency with. critic, and the second term encourages optimality over time. However, since we know that unde standard policy gradient the Bellman residual will be small, then it follows that adding a term tha. reduces that error should not make much difference at the fixed point. That is, the updates should be complementary, pointing in the same general direction, at least far away from a fixed point This update can also be interpreted as an actor-critic update where the critic is given by a weightec. combination of a standard critic and an optimizing critic. Yet another interpretation of the update. is a combination of expected-SARSA and Q-learning, where the Q-values are parameterized as the. sum of an advantage function and a value function.."}, {"section_index": "10", "section_name": "4.2 PRACTICAL IMPLEMENTATION", "section_text": "Our proposal scheme is as follows. One or more agents interact with an environment, encountering. states and rewards and performing on-policy updates of (shared) parameters using an actor-critic algorithm where both the policy and the critic are being updated online. Each time an agent receives new data from the environment it writes it to a shared replay memory buffer. Periodically a separate. learner process samples from the replay buffer and performs a step of Q-learning on the parameters. of the policy using (13). This scheme has several advantages. The critic can accumulate the Monte.\nThe full scheme simply combines two updates to the policy, the regularized policy gradient update (2) and the Q-learning update (13). Assuming we have an architecture that provides a policy , a value function estimate V, and an action-value critic Q\", then the parameter updates can be written as (suppressing the (s, a) notation)\nAw x (1 - n)Es.g Q* -Q*)VwV +nEsa(T Q _Q)VwV\nThe updates presented in (14) are batch updates, with an exact critic Q. In practice we want to run this scheme online, with an estimate of the critic, where we don't necessarily apply the policy gradient update at the same time or from same data source as the Q-learning update.\n0.6 0.5 0.4 S 0.3 0.2 Actor-critic 0.1 Q-learning T PGQL 0.0 0 200 400 600 800 1000 1200 1400 aaent stens (a) Grid world. (b) Performance versus agent steps in grid worl\nFigure 1: Grid world experiment\nCarlo return over many time periods, allowing us to spread the influence of a reward received in the future backwards in time. Furthermore, the replay buffer can be used to store and replay important past experiences by prioritizing those samples (Schaul et al.2015). The use of the replay buffer can. 
help to reduce problems associated with correlated training data, as generated by an agent explor-. ing an environment where the states are likely to be similar from one time step to the next. Also. the use of replay can act as a kind of regularizer, preventing the policy from moving too far from. satisfying the Bellman equation, thereby improving stability, in a similar sense to that of a policy trust-region' (Schulman et al.|2015). Moreover, by batching up replay samples to update the net-. work we can leverage GPUs to perform the updates quickly, this is in comparison to pure policy gradient techniques which are generally implemented on CPU (Mnih et al.]2016).\nSince we perform Q-learning using samples from a replay buffer that were generated by a old policy. we are performing (slightly) off-policy learning. However, Q-learning is known to converge to the. optimal Q-values in the off-policy tabular case (under certain conditions) (Sutton & Barto 1998) and has shown good performance off-policy in the function approximation case (Mnih et al.. 2013)"}, {"section_index": "11", "section_name": "4.3 MODIFIED FIXED POINT", "section_text": "=(1-n)Q\"+nT*Q"}, {"section_index": "12", "section_name": "5.1 GRID WORLD", "section_text": "In this section we discuss the results of running PGQL on a toy 4 by 6 grid world, as shown in Figure 1a The agent always begins in the square marked 'S' and the episode continues until it reaches the square marked 'T', upon which it receives a reward of 1. All other times it receives no reward. For this experiment we chose regularization parameter a = 0.001 and discount factor y = 0.95.\nFigure[1b|shows the performance traces of three different agents learning in the grid world, running. from the same initial random seed. The lines show the true expected performance of the policy\n(b) Performance versus agent steps in grid world\nThe PGQL updates in equation (14) have modified the fixed point of the algorithm, so the analysis of 3lis no longer valid. Considering the tabular case once again, it is still the case that the policy. exp(Q/a) as before, where Q\" is defined by (12), however where previously the fixed point. satisfied Q = Q, with Q\" corresponding to the Q-values induced by , now we have.\nQ-learning Policy TD learning gradient Policy Input\nFigure 2: PGQL network augmentation\nfrom the start state, as calculated by value iteration after each update. The blue-line is standard TD-actor-critic (Konda & Tsitsiklis||2003), where we maintain an estimate of the value function and use that to generate an estimate of the Q-values for use as the critic. The green line is Q-learning where at each step an update is performed using data drawn from a replay buffer of prior experience. and where the Q-values are parameterized as in equation (12). The policy is a softmax over the. Q-value estimates with temperature a. The red line is PGQL, which at each step first performs the TD-actor-critic update, then performs the Q-learning update as in (14).\nThe grid world was totally deterministic, so the step size could be large and was chosen to be 1. A step-size any larger than this made the pure actor-critic agent fail to learn, but both PGQL and Q-learning could handle some increase in the step-size, possibly due to the stabilizing effect of using replay.\nIt is clear that PGQL outperforms the other two. 
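For reference, here is a tabular sketch of one PGQL-style step as we read it; the paper itself describes the combined update as a mixture of expected-SARSA and Q-learning targets. The step size, function name, and the direct tabular Q parameterization are our own simplifications, not the authors' released code:

```python
import numpy as np

def softmax(x, alpha):
    z = np.exp((x - x.max()) / alpha)
    return z / z.sum()

def pgql_step(Qt, s, a, r, s2, alpha=0.001, eta=0.5, lr=1.0, gamma=0.95):
    """One tabular PGQL-style step (our sketch of the eq. (14) mixture).

    Qt is an (n_states, n_actions) array; the behavior policy is the
    Boltzmann policy pi(s, .) = softmax(Qt[s] / alpha)."""
    pi2 = softmax(Qt[s2], alpha)
    target_ac = r + gamma * pi2.dot(Qt[s2])   # on-policy (expected-SARSA) target
    target_q = r + gamma * Qt[s2].max()       # off-policy (Q-learning) target
    mixed = (1.0 - eta) * target_ac + eta * target_q
    Qt[s, a] += lr * (mixed - Qt[s, a])
```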
At any point along the x-axis the agents have seer the same amount of data, which would indicate that PGQL is more data efficient than either of the vanilla methods since it has the highest performance at practically every point.\nIn the results we compare against both A3C and a variant of asynchronous deep Q-learning. The. changes we made to Q-learning are to make it similar to our method, with some tuning of the hyper-. parameters for performance. We use the exact same network, the exploration policy is a softmax. over the Q-values with a temperature of 0.1, and the Q-values are parameterized as in equation (12 (i.e., similar to the dueling architecture (Wang et al.f2016), where = 0.1. The Q-value updates are performed every 4 steps with a minibatch of 32 (roughly 5 times more frequently than PGQL) For each method, all games used identical hyper-parameters..\nWe tested our algorithm on the full suite of Atari benchmarks (Bellemare et al.|[2012), using a neural. network to parameterize the policy. In figure2 we show how a policy network can be augmented with a parameterless additional layer which outputs the Q-value estimate. With the exception of. the extra layer, the architecture and parameters were chosen to exactly match the asynchronous. advantage actor-critic (A3C) algorithm presented in|Mnih et al. (2016), which in turn reused many of the settings from|Mnih et al.(2015). Specifically we used the exact same learning rate, number of. workers, entropy penalty, bootstrap horizon, and network architecture. This allows a fair comparison. between A3C and PGQL, since the only difference is the addition of the Q-learning step. Our. technique augmented A3C with the following change: After each actor-learner has accumulated the. gradient for the policy update, it performs a single step of Q-learning from replay data as described. in equation (13), where the minibatch size was 32 and the Q-learning learning rate was chosen to be 0.5 times the actor-critic learning rate (we mention learning rate ratios rather than choice of n. in (14) because the updates happen at different frequencies and from different data sources). Each. actor-learner thread maintained a replay buffer of the last 1o0k transitions seen by that thread. We ran the learning for 50 million agent steps (200 million Atari frames), as in (Mnih et al.2016)..\nThe results across all games are given in table 3lin the appendix. All scores have been normal-. ized by subtracting the average score achieved by an agent that takes actions uniformly at random Each game was tested 5 times per method with the same hyper-parameters but with different ran-\ndom seeds. The scores presented correspond to the best score obtained by any run from a random start evaluation condition (Mnih et al.]2016). Overall, PGQL performed best in 34 games, A3C performed best in 7 games, and Q-learning was best in 10 games. In 6 games two or more methods. tied. In tables 1and 2|we give the mean and median normalized scores as percentage of an expert human normalized score across all games for each tested algorithm from random and human-start conditions respectively. In a human-start condition the agent takes over control of the game from. randomly selected human-play starting points, which generally leads to lower performance since. the agent may not have found itself in that state during training. In both cases, PGQL has both the. 
highest mean and median, and the median score exceeds 100%, the human performance threshold..\nIt is worth noting that PGQL was the worst performer in only one game, in cases where it was. not the outright winner it was generally somewhere in between the performance of the other two algorithms. Figure 3|shows some sample traces of games where PGQL was the best performer. In. these cases PGQL has far better data efficiency than the other methods. In figure4|we show some of the games where PGQL under-performed. In practically every case where PGQL did not perform. well it had better data efficiency early on in the learning, but performance saturated or collapsed. We hypothesize that in these cases the policy has reached a local optimum, or over-fit to the early. data, and might perform better were the hyper-parameters to be tuned..\nA3C Q-learning PGQL Mean 636.8 756.3 877.2 Median 107.3 58.9 145.6\nTable 1: Mean and median normalized scores for the Atari suite from random starts, as a percentage of human normalized score\nA3C Q-learning PGQL Mean 266.6 246.6 416.7 Median 58.3 30.5 103.3\nTable 2: Mean and median normalized scores for the Atari suite from human starts, as a percentage of human normalized score\nassault battle zone 12000 16000 A3C A3C 14000 10000 Q-learning Q-learning PGQL 12000 PGQL 8000 10000 6000 8000 6000 4000 4000 2000 2000 0 0 0 1 2 3 4 5 0 1 2 3 4 5 agent steps 1e7 agent steps 1e7 chopper command 12000 100000 yars revenge A3C A3C 10000 Q-learning Q-learning 80000 PGQL PGQL 8000 60000 6000 40000 4000 20000 2000 0 0 0 1 2 3 4 5 0 1 2 3 4 5 agent steps 1e7 agent steps 1e7\nFigure 3: Some Atari runs where PGQL performed well\nbreakout hero 800 35000 A3C A3C 700 Q-learning 30000 Q-learning 600 PGQL PGQL 25000 500 score 20000 400 15000 300 10000 200 100 5000 0 0 0 1 2 3 4 5 0 1 2 3 4 5 agent steps 1e7 agent steps 1e7 qbert up n down 25000 80000 A3C A3C 70000 Q-learning Q-learning 20000 PGQL 60000 PGQL 15000 50000 40000 10000 30000 20000 5000 10000 0 0 0 1 2 3 4 5 0 1 2 3 4 5 agent steps 1e7 agent steps 1e7\nFigure 4: Some Atari runs where PGQL performed poorly"}, {"section_index": "13", "section_name": "6 CONCLUSIONS", "section_text": "We have made a connection between the fixed point of regularized policy gradient techniques anc the Q-values of the resulting policy. For small regularization (the usual case) we have shown that the Bellman residual of the induced Q-values must be small. This leads us to consider adding an auxiliary update to the policy gradient which is related to the Bellman residual evaluated on a transformation of the policy. This update can be performed off-policy, using stored experience We call the resulting method PGQL', for policy gradient and Q-learning. Empirically, we observe better data efficiency and stability of PGQL when compared to actor-critic or Q-learning alone. We verified the performance of PGQL on a suite of Atari games, where we parameterize the policy using a neural network, and achieved performance exceeding that of both A3C and Q-learning.\nWe thank Joseph Modayil for many comments and suggestions on the paper, and Hubert Soyer for help with performance evaluation. We would also like to thank the anonymous reviewers for their constructive feedback"}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Andrew Bagnell and Jeff Schneider. Covariant policy search. In IJCAI, 2003\nLeemon C Baird III. Advantage updating. 
Technical Report WL-TR-93-1146, Wright-Patterson Air Force Base Ohio: Wright Laboratory, 1993.\nRichard Bellman. Dynamic programming. Princeton University Press, 1957.\nDimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996\nThomas Degris, Martha White, and Richard S Sutton. Off-policy actor-critic. 2012\nMatthew Hausknecht and Peter Stone. On-policy vs. off-policy updates for deep reinforcement learning. Deep Reinforcement Learning: Frontiers and Challenges, IJCAI 2016 Workshop, 2016\nSham Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems volume 14, pp. 1531-1538, 2001.\nVijay R Konda and John N Tsitsiklis. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143-1166, 2003.\nLucas Lehnert and Doina Precup. Policy gradient methods for off-policy control. arXiv preprin arXiv:1512.04105, 2015\nSergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuo motor policies. arXiv preprint arXiv:1504.00702, 2015.\nShun-Ichi Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):251- 276, 1998.\nMarc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning envi- ronment: An evaluation platform for general agents. Journal of Artificial Intelligence Research 2012.\nDimitri P Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific 2005.\nRoy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1207.4708, 2015.\nVolodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tir Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcemen learning. arXiv preprint arXiv:1602.01783, 2016\nRazvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprin arXiv:1301.3584, 2013.\nJing Peng and Ronald J Williams. Incremental multi-step Q-learning. Machine Learning, 22(1-3) 283-290, 1996.\nGavin A Rummery and Mahesan Niranjan. On-line Q-learning using connectionist systems. 1994\nBrian Sallans and Geoffrey E Hinton. Reinforcement learning with factored states and actions Journal of Machine Learning Research. 5(Aug):1063-1088. 2004\nTom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXi preprint arXiv:1511.05952, 2015.\nR. Sutton and A. Barto. Reinforcement Learnir an Introduction. MIT Press, 1998\nMohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu and Dale Schuurmans. Reward augmented maximum likelihood for neural structured prediction arXiv preprint arXiv:1609.00150, 2016.\nDavid Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche. Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering. the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.\nRichard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Infor. mation Processing Systems, volume 99, pp. 1057-1063, 1999\nHarm Van Seijen, Hado Van Hasselt, Shimon Whiteson, and Marco Wiering. A theoretical and em pirical analysis of expected sarsa. In 2009 IEEE Symposium on Adaptive Dynamic Programmin and Reinforcement Learning, pp. 177-184. IEEE, 2009\nRonald J Williams. 
Simple statistical gradient-following algorithms for connectionist reinforcemen learning. Machine learning, 8(3-4):229-256, 1992.\nRonald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241-268., 1991."}, {"section_index": "15", "section_name": "PGOL BELLMAN RESIDUAL", "section_text": "Here we demonstrate that in the tabular case the Bellman residual of the induced Q-values for the PGQL updates of (14) converges to zero as the temperature decreases, which is the same guarantee as vanilla regularized policy gradient (2). We will use the notation that is the policy at the fixed point of PGQL updates (14) for some , i.e., exp(Q\"), with induced Q-value function Q\"\nQa -Q n||T*Qa - Qa| n||T*Q\"a_TaQa t II VI nl|T*Q\"a _ T\"aQ\"a| +||TQa _ T\"aQ\"a VI n(l|T*Qa _T\"aQ\"a| +yl|Q\"a _Q\" < n/(1- nY)|lT*Q\"a -T\"Q\"a|l\nwhich therefore also converges to zero in the limit. Finally we obtai\nwhich combined with the two previous results implies that lim Q = 0, as befor\nZiyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd Inter-. national Conference on Machine Learning (ICML), pp. 1995-2003, 2016.\nI|T*Q\"a -Q|| l|T*Q\"a-TaQ\"a + T\"aQ\"a - Q\"a + Q\"a -Qa|I = |T*Q\"a _ TaQ\"a||+||TaQa_TaQ\"a||+||Q\"a_Q\"a l|T*Q\"a-TQ\"||+ (1+Y)|lQ\"a - Q 3/(1 - nY)||T*Q\" TQ\"|I, <\n[T*Q*a -Q* l|T*Q\"a -T*Q\"a +T*Q\"a -Q\"a +Q\"a -Q\"a l|T*Q\"a -T*Q\"aI +l|T*Q\"a -Q\"aII+lIQ\"a -Q (1+y)|IQ\"a - Q\"a|I+||T*Q\"x -Q\nTable 3: Normalized scores for the Atari suite from random starts, as a percentage of human nor malized score.\nCanne A5C Q-learnng POQL alien 38.43 25.53 46.70 amidar 68.69 12.29 71.00 assault 854.64 1695.21 2802.87 asterix 191.69 98.53 3790.08 asteroids 24.37 5.32 50.23 atlantis 15496.01 13635.88 16217.49 bank heist 210.28 91.80 212.15 battle zone 21.63 2.89 52.00 beam rider 59.55 79.94 155.71 berzerk 79.38 55.55 92.85 bowling 2.70 -7.09 3.85 boxing 510.30 299.49 902.77 breakout 2341.13 3291.22 2959.16 centipede 50.22 105.98 73.88 chopper command 61.13 19.18 162.93 crazy climber 510.25 189.01 476.11 defender 475.93 58.94 911.13 demon attack 4027.57 3449.27 3994.49 double dunk 1250.00 91.35 1375.00 enduro 9.94 9.94 9.94 fishing derby 140.84 -14.48 145.57 freeway -0.26 -0.13 -0.13 frostbite 5.85 10.71 5.71 gopher 429.76 9131.97 2060.41 gravitar 0.71 1.35 1.74 hero 145.71 15.47 92.88 ice hockey 62.25 21.57 76.96 jamesbond 133.90 110.97 142.08 kangaroo -0.94 -0.94 -0.75 krull 736.30 3586.30 557.44 kung fu master. 182.34 260.14 254.42 montezuma revenge -0.49 1.80 -0.48 ms pacman 17.91 10.71 25.76 name this game 102.01 113.89 188.90 phoenix 447.05 812.99 1507.07 pitfall 5.48 5.49 5.49 pong 116.37 24.96 116.37 private eye -0.88 0.03 -0.04 qbert 186.91 159.71 136.17 riverraid 107.25 65.01 128.63 road runner 603.11 179.69 519.51 robotank 15.71 134.87 71.50 seaquest 3.81 3.71 5.88 skiing 54.27 54.10 54.16 solaris 27.05 34.61 28.66 space invaders 188.65 146.39 608.44 star gunner 756.60 205.70 977.99 surround 28.29 -1.51 78.15 tennis 145.58 -15.35 145.58 time pilot 270.74 91.59 438.50 tutankham 224.76 110.11 239.58 up n down 1637.01 148.10 1484.43 venture -1.76 -1.76 -1.76 video pinball 3007.37 4325.02 4743.68 wizard of wor. 150.52 88.07 325.39 yars revenge 81.54 23.39 252.83 zaxxon 4.01 44.11 224.89"}]
B1PA8fqeg
[{"section_index": "0", "section_name": "MULTIAGENT SYSTEM FOR LAYER FREE NETWORK", "section_text": "Hiroki Kurotaki, Kotaro Nakayama & Yutaka Matsuo\nWe propose a multiagent system that have feedforward networks as its subset while free from layer structure with matrix-vector scheme. Deep networks are often compared to the brain neocortex or visual perception system. One of the largest difference from human brain is the use of matrix-vector multiplication based on layer architecture. It would help understanding the way human brain works if we manage to develop good deep network model without the layer architecture while preserving their performance. The brain neocortex works as an aggregation of the local level interactions between neurons, which is rather similar to multiagent system consists of autonomous partially observing agents than units aligned in column vectors and manipulated by global level algorithm. Therefore we suppose that it is an effective approach for developing more biologically plausible model while preserving compatibility with deep networks to alternate units with multiple agents. Our method also has advantage in scalability and memory efficiency. We reimplemented Stacked Denoising Autoencoder(SDAE) as a concrete instance with our multiagent system and verified its equivalence with the standard SDAF from both theoritical and empirical perspectives. Additionary, we also proposed a variant of our multiagent SDAE named \"Sparse Connect SDAE\", and showed its computational advantage with the MNIST dataset.\nFigure 1: Comparison of network structures with adjacency indications. Left: standard feedforward neural network with layer structure. The two solid rectangles represent the weight matrices. The connection is restricted by layer structure. Center: layer-free version of standard feedforward network. We need a whole n n matrix indicated by the outer solid square to store parameters even when most of them become almost zero as a result of learning. Right: an example of network which is a subset of our proposed multiagent system. Matrices (solid rectangle) are no longei. used. The network is free from the layer restriction while requiring fewer number of parameters. Communication between the nodes is prohibited unless they are connected with an edge, which leads. to scalability, memory efficiency and biological plausibility.\nDeep networks are used in many tasks, especially in image recognition, signal processing, NLP and robot manipulation. Almost all deep network models utilize structure with layers. One of the primary reasons to use those layer structure is to utilize parallel computation techniques and hardware level technologies such as GPU or dedicated tensor processors."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Vector-matrix approach requires dense matrices to store weight parameters even after sparse represen tations and weights are learned, which requires inefficient use of memory.\nIn this paper we propose a framework of multiagent calculation network that has feedforwar matrix-vector network as a subset but free from the notion of layer to solve these problems\nSpecifically, we reimplemented Stacked Denoising Autoencoder(SDAE) (Vincent et al.[(2008)) a one of the variations in our framework. SDAE is one of the earliest successful deep network mode and free from spatiality assumption in CNNs and RNNs. SDAE is related to the Ladder Networl (Rasmus et al.(2015)) in that both employ reconstruction and denoising mechanism. 
We suppos starting from the model strictly derived from SDAE and gradually extending the model is one gooc approach to develop useful algorithms to construct deep multiagent networks. We show that th standard vector-matrix SDAE can be interpreted and reconstructed as a specific case of our multiager network in the sense that both SDAE compute the exact same result in the end but with differen implementations and processes.\nFoerster et al.[(2016) and Sukhbaatar et al.(2016) model multiagent environment as a deep network but their unit of agent is a whole network which models an individual actor and not a feature inside the network. The motivation and architecture of our proposal multiagent framework is different. from theirs and we suppose our method could be a complementary method between the different. granularities of agents.\nThere are several approaches to design biologically plausible model (Sussillo & Abbott(2009); Risi. & Stanley (2014); Mi et al.(2014). Cao et al.(2014) convert regular CNN into spiking neural. networks (SNN).Osogami & Otsuka(2015) build a variation of Boltzman Machine that follows the properties of Spike-timing dependent plasticity (STDP), which makes it more close to biological. neural networks.Lee et al. (2014) proposes a modification of autoencoder that uses a novel credit assignment method called \"target propagation\" in place of back-propagation and achieved state.\nLayers can be viewed as a restriction on the connection pattern of the units that they must be aligned in a row to form a vector. Though layers are related to vector-matrix processing technologies and necessary to make use of them, it is not evident that this vector-matrix restriction is the most appropriate model to capture the latent representation of the data.\nOur multiagent network consists of many agents that replaces each unit in standard deep networks. The agents act autonomously and calculations are executed as an accumulation of many local. level communication among the agents. This local calculation scheme is contrary to the previous feedforward network implementation that all computations in the units in the same layer is done. simultaneously (Figure1).\nOnce we establish the basic multiagent SDAE, we aim to extend the SDAE and examine its behavior One of the minimum and simplest modification to the proposed multiagent SDAE is to keep the node's locations and possible edge connections as they are, but randomly truncate the edges. We call this testing model \"Sparse connect SDAE\" or \"SCSDAE\" in the latter section. This model is an example of the potential of our model to handle sparse weight parameters more efficient than the standard networks composed of dense weight matrices.\n1. We propose new multiagent-based neural network system framework that free from restric- tions of layer scheme with matrix-vector while getting more biologically plausible. 2. We prove that SDAE can be reinterpreted as a special case of our multiagent system. 3. We also propose Sparse Connect SDAE, a primitive extension of our multiagent system. 4. We demonstrate the performance of the proposed models on XOR toy dataset and the permutation-invariant MNIST task.\nIn this section we describe our algorithm. First we show how we can reinterpret some common. properties of multiagent systems to form a deep network. Next we define our multiagent system in. general form. Then we prove the standard SDAE is indeed a special case of our multiagent system in. 
the sense that both calculate the exact same computational result in the same order. Finally we show an example of our SDAE's variations.\nIn this section we define the multiagent system that has the properties we stated above\nA system is consists of several units and one environment.. Units are consists of nodes and edges. Edges connect between two nodes.. Nodes has some variables and can memorize actual computed values of tl\nNodes can transfer informations with other nodes connected with edges by message passing. These. information may involve the value of variable computed by the units, the errors accumulated through epoch in the units, and any other things. Sometimes nodes may receive data input from the envi. ronment as a message, and may send value back to environment to let them compute global cost function.\nThe environment can manipulate unit from outside (via message passing) and change their relation ships. For example,\nThe environment can generate new unit.. The environment can connect between nodes The environment can change the state of the unit. The environment can send units input data.\nOur SCSDAE is a variation of SDAE and can be viewed as a technique for truncating edges and reduce the computational cost. There are several techniques to downsize networks and enables us to load them on mobile devices for realtime processing. One example is distillation (Hinton et al. (2014)). It needs huge networks as a superviser network and it cannot be used as a mean to full scratch to new domain. MADE (Mathieu et al.(2015)) uses binary masking matrix to represent hard zero and thus conditional distribution. Limiting weight connection to binary (Courbariaux et al. (2015)) is another approach and its applicability is actively studied. Han et al.(2015) combines pruning, quantization and Huffman coding together.\nWe extracted some common aspects of those multiagent system from the view of deep network development as follows:\n1. The system consists of several number of autonomous units (as agents) and the environment 2. All units are autonomous and only act when stimulated by messages from the other units and the environment. 3. Units process calculation only with local information they hold.\nThe variables includes not only the input data and feature variables themselves, but parameters of adjacent edge's weights and intermediate variables that are needed to calculate feature variable's activation values. These additional variables can also be updated..\nThese special manipulation may seems to break locality and autonomy, but its global manipulation is. limited to changing the overall network structure. In the calculation process, the algorithm still run by agents local algorithm themselves rather than step-by-step instructions from the environment. The. environment input some data to input units, and then, all it can do is to wait at the output units to get. the result and it isn't aware of the internal behavior of each units and the order of execution..\nAlgorithm[2|is the general form of our proposed multiagent system. The notation is matched with the multiagent MLP in later sections. Note that the order of calculation is not yet specialized to match to standard MLP, and thus free from the constants related to global structure such as N. and N,.\nthe equivalence of the value computed the equivalence of the order of computatio\ngiven the same input and the same random sampling. (e.g. 
initial parameters, order of input data noise variables, etc.)\nWe begin our proof with a simple model and reuse it for prove of more complex models. Our first target model is the simple multi layer perceptron(MLP) with only one hidden layer and no pretraining. We then go on to the autoencoder and Denoising Autoencoder, which is a kind of extension of MLP.. Finally we show the equivalence of the two SDAEs..\nFor bravity, we limit the choice of activation function for each unit to sigmoid, and we use unit-wise. mean square error (MSE) for cost function. We also use simple SGD without minibatch. There are. several methods that empirically known to perform well. (e.g. Adam (Kingma & Ba (2014)) fo. optimization, cross entropy for cost function, ReLU and softmax for activation function.) However. we prioritize to establish the basic architecture of deep multiagent network, and choose these simple. methods. We can also extend the algorithm to apply minibatch updating easily..\nNow we prove the equivalence of our proposed multiagent version of MLP and standard implementa tion.\nNext we construct a multiagent network that is equivalent to Algorithm1by specializing the general Algorithm[2] We first generate units ux,, Wy,, Uz(i = 1... Nx, j = 1... Ny, k = 1... Nz) from. the environment. We also denote the set of units Ux = {x, | i = 1... N}, and similary Uy, U, for. set of all uy,, Uz, respectively. Each unit and the environment possess the unique varibles as listed in. Table1\nWe show how to reconstruct Stacked Denoising Autoencoder((Vincent et al.[(20o8))) with the. proposed multiagent system.We concurrently prove that our multiagent version of SDAE and the standard SDAE have indeed the same algorithm and do the same calculation. In our case, we suppose. that the two criteria below are sufficient for our objective.\nSuppose there is a MLP consists of an input layer, a hidden layer and an output layer. Let Nx, Ny, N,. be the number of input, hidden and output layer's dimension respectively. Simillary, let x(i = 1... N), y;(j = 1... Ny), z(k = 1... N) be the activated output value (hereafter simply \"output. value\" ) at each unit in each layer respectively. The weight value between these variables are wi, wjk,. O... N, k = 1... N,} represents the weight parameters and is the sigmoid function. We also r(1)+(1) d (X data) (d) (d) (d) }, where d = 1 ... D. Hereafter , X Nx dat N z d.at we might drop the data index (d) for readability..\nAlgorithm 1 Standard MLP Algorithm 2 General form of our multiagent al gorithm (the environment) Initialize wij, wk for all (i, j), (j, k). Initialize ux, Uy,, Uzk for all i, j, k. while criteria is not satisfied do. while criteria is not satisfied do for d = 1... D do for all d E {1... D} do. for k = 1... N, do (d) for all uzk E Uz do. tk+tkdata Input tkdata t(d) tO Uzk end for for i = 1... N. do end for (d) for all ux, E Ux do. xixidata Input xidata r(d) end for tO Ux for j = 1... N, do end for Yj F0(w0j+ wizxi) L(d) h(tk-Zk)2 end for end for L a L(d) for k = 1... N, do end while. I*y 8kN(2h-tx)2zn(1-zk) for j = 1... Ny do Algorithm 3 Multiagent MLP (the environment WjkWjk-nOkYj Initialize ux, Wy, Uzk for all i, J, k. end for end for while criteria is not satisfied do. for j = 1... Ny do for d = 1... D do for k = 1... N, do for i = 1... N, do end for Wij<Wij-nOjXi for i = 1... N. do end for end for tO Ux L(d)N=1(th-zk)2 end for L(d)1(tn- zk) end for L L(d) end for L end while. 
d=7 L(d)\nTable 1: Varibles each unit (and the environment) possess in multiagent MLP\nUnit Variable Ux(i=1...Nx) Xi Uy,(j=1...Ny) Yj,Oj,Wij(i=1...Nx),n (k=1...N) U z k Zk,tk,Ok,Wk(j =1...Ny), n The environment L, L(d)\nThe unit ux, corresponds to xi and receive xidata. from the environment. The unit uz,. corresponds to. both zk, tk and is input tkdata. Each ux, receive input data, assign the data into the variable owned by itself, then send that value as message to uy, (algorithm[6).\nUy, corresponds to yj and stores wij(i = 0... N) to calculate yj. The unit must wait until receive. message from all ux, (i = 1... M). Then the unit can calculate y; (algorithm4). The calculated value of yj is sent again to uz, so similary zk can also become able to be calculated..\nWe also need to introduce a state variable for SDAE. We will discuss it at the end of this section\nFinally, the environment inputs data in the following order: uz, - > UzNz-> Ux1 Uxx. (Algorithm 3). These inputs stimulate units and the units invoke their message handling algorithm individually. The information required to calculate the cost function L(d) is eventually acumulated in the units uyr. The environment would read out these and calculate the objective L(d)\nAlgorithm 2 General form of our multiagent al gorithm (the environment)\nAlgorithm 4 The handler of uy. when receiving a message\nAlgorithm 5 The handler of uz. when receiving data from the environment\nComparing both algorithms, we can verify that the all update statement and order for all variables are strictly matched between the proposed multiagent SDAE and the standard version..\nNow we advance to the autoencoder algorithm which is a special case of MLP. In autoencoders, The input and output dimension is the same (N, = N). The cost function and difference at output units corresponding multiagent version, the environment don't need to send tidata to uz,. Instead u, send the input value x; to uz, right before messaging to uys:\nDenoising Autoencoder (hereafter \"DAE\") is an extension of autoencoder and used as a building block of SDAE. In DAE, when calculating y of the hidden layer, we don't directly use raw value of x Instead, we add noise to x to make x, then y uses x. Note that the reconstruction layer z still use the Original x value.\nTo reinterpret this DAE by our multiagent system, we can assign x, to ux, along with x. When the data is input, we can calculate x right after the assignment to x. Each x,(i = 1... N) send corresponding yj(j = 1.: . Ny) the noised value x, instead of x;. Then the two algorithm become equivalent again.\nAlgorithm 6 The handler of ux. when receiving data from the environment.\nAlgorithm 7 The handler of uz, when receiving a message\nNz for j = 1... N, do Wjk -n0kYj end for for j = 1... N, do Send uy, the value of u end for\nUz, don't take any action in response to receiving message from ux,, just as same as from the environment. With the modifications above, the two versions become equivalent."}, {"section_index": "2", "section_name": "3.3.4 EOUIVALENCE WITH STANDARD SDAE", "section_text": "We finally reimplement the whole SDAE algorithm by our multiagent system and show its equivalence with the standard version again. The learning process of SDAE can be divided into pretraining and. 
fine-tuning, and we can explain the two phases separately.

Pretraining phase: We first describe the changes needed in our multiagent system. We need to introduce the notion of state for the units associated with the hidden layers of the standard version: when the objective function changes, the network structure and the states of the units must change in response.

The hidden units can take two states, "state A (on learning)" and "state B (learned, stable)". The units are initialized to "state A", and their behavior in this state does not change from the previous sections. But units in "state B" do not send messages to the reconstruction units z_i; instead they send the same messages to the "state A" units in the stacked layer.

Additionally, we need to append a reconstruction unit for each hidden unit changed to "state B". These pairs of a "state B" unit and an appended reconstruction unit act the same as the pairs of input and reconstruction units described in the DAE section, except that they send messages not after the data is input, but after the feature y_j is calculated.

Once the pretraining for one block of DAE ends, all the hidden units in that DAE change to "state B"; these hidden units then act as new input units from the view of the new DAE (let z'_j be the reconstruction unit's value of y_j). They form a new DAE starting from the "state B" hidden units, through the newly stacked "state A" units, to the associated appended reconstruction units. This DAE can be optimized in the same way as the single multiagent DAE we have described, with the one exception that the data is not sent from the environment; instead the feature values y_j in the previous DAE are calculated in the same order (j = 1 ... N_y).

Fine-tuning phase: In order to use SDAE for a supervised learning task, we pretrain some predefined number of hidden layers by layerwise pretraining and at the end stack a layer for the supervised purpose (typically a softmax layer for classification). This is another change of the objective function, and the multiagent system can deal with it too by changing the network and the units' states.

Specifically, when all pretraining ends, we connect the output units for supervised learning onto the topmost hidden units, and every feature unit changes its state to the newly introduced "state C (on fine-tuning)". "State C" is almost the same as "state A", with only two differences:

They don't send messages to the reconstruction units.
They always send backpropagation messages to the previous units.

This backpropagation message must be sent in ascending order, as with the previous message passing.

Here we reinterpreted the standard pretraining and fine-tuning algorithms with our proposed multiagent system. These two cover the whole learning process of SDAE; therefore we have successfully reinterpreted the whole SDAE and shown its equivalence to the standard SDAE."}, {"section_index": "3", "section_name": "3.4 SPARSE CONNECT SDAE", "section_text": "Now that we have obtained the new multiagent SDAE and theoretically shown its equivalence to the standard SDAE, we want to extend this model and examine its behavior. One of the minimal and simplest such extensions, as stated in the introduction, is to keep the nodes and their possible connections as they are but randomly truncate edges.

Since our goal here is to check the basic behavior when we truncate edges, we take one of the simplest rules: we basically connect all edges between nodes (if the distance meets the criteria) but drop each of them with a certain threshold probability. Once the connections are established, we fix them and do not modify them online.
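To make the rule concrete, here is a minimal sketch of the edge-sampling step. This is a toy illustration of ours rather than the authors' implementation; the function name and the Gaussian weight initialization are assumptions:

```python
import numpy as np

def sample_fixed_edges(n_in, n_out, connection_rate, rng=np.random.RandomState(0)):
    """Sample the fixed random connectivity used by Sparse Connect SDAE (sketch).

    Every possible edge between the two groups of nodes is kept independently
    with probability `connection_rate`, then frozen for the rest of training.
    The weight scale 0.01 is our own choice."""
    keep = rng.rand(n_in, n_out) < connection_rate
    # Store only the surviving edges as (i, j, weight) triples instead of a
    # dense n_in x n_out matrix -- this is where the memory saving comes from.
    return [(i, j, 0.01 * rng.randn()) for i, j in zip(*np.nonzero(keep))]
```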
For example, connection rate 1.0 (100%) means it is the same network as SDAE and 0.3. means 30% of edges in SDAE remain intact and the others are dropped.."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "We implemented the proposed multiagent network and measure its performance. We verified empirical. equivalence between the multiagent SDAE and standard SDAE. We also measure performance of. Sparse connect SDAE, which we described in 3.4 We change its connection rate and see how the performance and learning time changes."}, {"section_index": "5", "section_name": "4.1 DATASET", "section_text": "We use two datasets for experiment, one is an artificial XOR function toy dataset made by us for this experiment, and the other is the MNIST handwritten digit dataset. Since the result and conclusion is almost same, we only report for the MNIST dataset. The details of the datasets are described in Appendix A."}, {"section_index": "6", "section_name": "4.2 MODEL SETTINGS", "section_text": "We use the sigmoid function as activation functions of all units. We use the unit-wise MSRE as mentioned in the section 3 for both reconstruction errors in pretraining phase and classification in finetune phase. In pretraining, we simply take the mean through the all reconstruction units tc check if the training works. For finetuning, we again take the sum of MSRE as cost function, and as classification error, we take the label associated with the unit that has the most activated value and discretely compare that value to measure the error rate. The SDAE algorithm includes random noise addition and is not fully deterministic. So we run each experiment settings for 3 times and take the average through the runs for all metrics we show. We set learning rate for pretraining and finetuning to O.001 and 0.1, respectively. Corruption rate of SDAE is fixed to 0.3."}, {"section_index": "7", "section_name": "4.3 COMPARISON BETWEEN MULTIAGENT AND NORMAL SDAE", "section_text": "We compared the test error rate through epochs between multiagnt and normal SDAE. Figure[2|show the mean classification error rate through 3 simulations with the MNIST dataset. We can see the. two graphs form almost the same shape, which suggests the multiagent implementation of SDAE is empirically equivalent to normal SDAE. This is a verification of the theoritical proof we showed in. section 3.\nIn order to check this equivalence more precisely, we evaluate the difference of maximum weigh update between multiagent and normal SDAE (Table2). We gave an input data to both networks and. measure the largest change of weight change update for each edge group shown in Table[2 To make. the problem simple and clear, for this experiment we used only one hidden layer and no pretraining. so the model is similiar to a naive MLP. From the Table2] we can see that the decimal order of largest difference is 3 digits small as that of information error with 32bit floating point numbers. We suppose. this difference is small enough to be well ignored..\nWe can expect the time spent in learning is proportional to the connection rate. This is an obvious strong point against the standard matrix-vector based network. In matrix-vector based networks, we must fix the size of weight matrices and keep them at their original size even if most weights are learned to be soft zero. In our multiagent model however, we only need to calculate for existing edges. 
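As a sketch of why the computation scales with the number of surviving edges rather than with the full matrix size, consider a forward pass over the (i, j, weight) edge-list layout sketched above (again our own illustration, not the paper's code):

```python
import numpy as np

def sparse_forward(x, edges, n_out):
    """Activate one group of units from an explicit edge list (our sketch).

    The loop runs once per surviving edge, so the cost is proportional to the
    connection rate, not to the dense n_in * n_out matrix size."""
    pre = np.zeros(n_out)
    for i, j, w in edges:               # only existing edges are ever visited
        pre[j] += w * x[i]
    return 1.0 / (1.0 + np.exp(-pre))   # sigmoid, as used throughout the paper
```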
We will see whether this expectation holds in the next experiments section.

Table 2: Difference of the largest weight change for one input between multiagent and normal SDAE.

Group of edges      Largest difference
input to hidden     1.08 × 10^-19
hidden bias         0.0
hidden to output    8.7 × 10^-19
output bias         0.0

[Figure 2 plots omitted: test error rate versus epoch count for the standard SDAE (left panel) and the multiagent SDAE (right panel).]

Figure 2: Mean classification error rate (left: standard SDAE, right: multiagent SDAE). The vertical coordinate indicates the error rate and the horizontal coordinate the epoch count. Both SDAEs produce almost the same shape."}, {"section_index": "8", "section_name": "4.4 SPARSE CONNECT SDAE", "section_text": "We gradually changed the connectivity rate of our proposed SCSDAE and compared its performance. In this experiment, we measure performance in two respects, the error rate and the calculation time, because we expect that a smaller number of edges means less calculation time in our model, as described in Chapter 3.

[Figure 3 and Figure 4 plots omitted: mean training time and mean test error rate versus connectivity rate (0 to 1).]

Figure 3: Mean calculation time for each connectivity rate, averaged over 3 experiments per connectivity. The horizontal axis indicates the connectivity rate of each model and the vertical axis shows the time spent in training.

Figure 4: Mean error rate for each connectivity rate, averaged over 3 experiments per connectivity. The horizontal axis indicates the connectivity rate of each model and the vertical axis the error rate on the test dataset.

Figure 3 shows the measured calculation time and Figure 4 the error rate. In both figures the horizontal coordinate indicates the connectivity rate and the vertical coordinate is time and error rate, respectively. From Figure 3 we can verify that the calculation time decreases linearly, as we expected.

Figure 4 shows the error rate on the test set after the given number of epochs passed. Contrary to our intuition, in some cases the error rate does not deteriorate despite the decrease in connection probability. The reason is not obvious, but we suppose that the more connections are established, the more difficult and time-consuming it becomes for the model to converge enough to show the potential of the model class. It is also possible that the forced sparsity gave the model an unexpected advantage in obtaining a sparse coding efficiently."}, {"section_index": "9", "section_name": "5 CONCLUSION", "section_text": "We proposed a fundamental framework to reinterpret deep networks as multiagent systems. Specifically, we reimplemented a model equivalent to the Stacked Denoising Autoencoder (SDAE), but with the internal implementation given by the multiagent system. We verified this equivalence from both theoretical and empirical aspects. We also tested the behaviour when we actually let the model permanently drop edges.

Our contribution is to propose a new multiagent-based neural network system that frees existing deep networks from the restriction of the layer scheme. Our system involves the standard SDAE as its subset and has large potential for extension. Our model could be extended to more biologically plausible variations. Our experiment with the proposed Sparse Connect SDAE demonstrates the advantage of non-matrix calculation and the permanent dropping of edges."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Diederik Kingma and Jimmy Ba.
Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.

Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, Antoine Biard, and Yoshua Bengio. Difference Target Propagation. arXiv preprint arXiv:1412.7525, 2014.

Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked Autoencoder for Distribution Estimation. In ICML, 2015.

Yuanyuan Mi, C. C. Alan Fung, K. Y. Michael Wong, and Si Wu. Spike Frequency Adaptation Implements Anticipative Tracking in Continuous Attractor Neural Networks. In NIPS, 2014.

Takayuki Osogami and Makoto Otsuka. Seven neurons memorizing sequences of alphabetical images via spike-timing dependent plasticity. Scientific Reports, 5, 14149, 2015.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised Learning with Ladder Networks. arXiv preprint arXiv:1507.02672, 2015.

Sebastian Risi and Kenneth O. Stanley. Guided Self-organization in Indirectly Encoded and Evolving Topographic Maps. In GECCO, 2014.

David Sussillo and L. F. Abbott. Generating Coherent Patterns of Activity from Chaotic Neural Networks. Neuron, 63(4):544-557, 2009.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training Deep Neural Networks with binary weights during propagations. arXiv preprint arXiv:1511.00363, 2015.

Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. arXiv preprint arXiv:1605.06676, 2016.

Song Han, Huizi Mao, and William J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In ICLR, 2015."}, {"section_index": "11", "section_name": "A. DATASETS", "section_text": "The MNIST handwritten digit dataset is a well-known image recognition task. The input is a 28x28 image patch of a digit, classified into 10 classes (0-9). It contains 60000 training images and 10000 test images. Since our goal is not to pursue state-of-the-art performance on this dataset, but to verify the general properties of our proposed model on a more realistic problem than the XOR toy sample, we did no preprocessing on the dataset. We directly input the image pixel values as a 784-dimensional vector and did not use the prior that the image is 2D data with spatial structure, which is important in convolutional networks. This setting is called the \"permutation invariant\" version of the MNIST task.

The XOR function toy dataset consists of a random ordering of four points representing the four regions of the XOR function on 2D coordinates, given as the set of tuples (x1, x2, y): [(1, 1, -1), (1, -1, 1), (-1, 1, 1), (-1, -1, -1)]. The value '1' represents the boolean value 'true' and '-1' represents 'false'; x1 and x2 are the boolean inputs to the XOR function and y is its output."}, {"section_index": "12", "section_name": "B. EXPERIMENTS DETAILS", "section_text": "For the XOR task, we set the numbers of hidden units to [3, 4, 5]. For MNIST, we used a [100, 100, 100] network for all experiments. The learning rate was fixed to 0.001 at pretraining time and 0.1 at finetuning.
We trained 30 epochs of pretraining for each MLP layer and 50 epochs of finetuning. We used a single core of an Intel(R) E5-2630 @ 2.40GHz so that the measured performance is invariant to the parallelization method; the effect of parallelization is a question for future research."}]
ByC7ww9le
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "There is a growing interest in incorporating external memory into neural networks. For example. memory networks (Weston et al.[2014] Sukhbaatar et al.2015) are equipped with static memory slots that are content or location addressable. Neural Turing machines (Graves et al.2014) imple. ment memory slots that can be read and written as in Turing machines (Turing| 1938) but through. differentiable attention mechanism\nEach memory slot in these models stores a vector corresponding to a continuous representation c the memory content. In order to recall a piece of information stored in memory, attention is typicall employed. Attention mechanism introduced byBahdanau et al.[(2014) uses a network that outpu a discrete probability mass over memory items. A memory read can be implemented as a weighte sum of the memory vectors in which the weights are given by the attention network. Reading out single item can be realized as a special case in which the output of the attention network is peake at the desired item. The attention network may depend on the current context as well as the memor item itself. The attention model is called location-based and content-based, if it depends on th location in the memory and the stored memory vector, respectively.\nWhen we embed entities from a knowledge base in a continuous vector space, if the capacity of the embedding model is appropriately controlled, we expect semantically similar entities to be close to each other, which will allow the model to generalize to unseen facts. However the notion of proximity may strongly depend on the type of a relation. For example, Benjamin Franklin was an. engineer but also a politician. We would need different metrics to capture his proximity to other engineers and politicians of his time."}, {"section_index": "1", "section_name": "GAUSSIAN ATTENTION MODEL AND ITS APPLICATION TO KNOWLEDGE BASE EMBEDDING AND OUESTION ANSWERING", "section_text": "John Winn & Ryota Tomioka\n{jwinn, ryoto}@microsoft.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: Comparison of the conventional content-based attention model using inner product anc the proposed Gaussian attention model with the same mean but two different covariances.\nIn this paper, we propose a new attention model for content-based addressing. Our model score each item vitem in the memory by the (logarithm of) multivariate Gaussian likelihood as follows:\nCompared to the (normalized) inner product used in previous work (Sukhbaatar et al.|. 2015Grave et al.]2014) for content-based addressing, the Gaussian model has the additional control of the. spread of the attention over items in the memory. As we show in Figure[1] we can view the conven tional inner-product-based attention and the proposed Gaussian attention as addressing by an affin. energy function and a quadratic energy function, respectively. By making the addressing mechanisr more complex, we may represent many entities in a relatively low dimensional embedding space. Since knowledge bases are typically extremely sparse, it is more likely that we can afford to have. more complex attention model than a large embedding dimension..\nWe apply the proposed Gaussian attention model to question answering based on knowledge bases At the high-level, the goal of the task is to learn the mapping from a question about objects in the knowledge base in natural language to a probability distribution over the entities. 
We use the scoring function (1) both for embedding the entities as vectors and for extracting the conditions mentioned in the question and taking a conjunction of them to score each candidate answer to the question.

The ability to compactly represent a set of objects makes the Gaussian attention model well suited for representing the uncertainty in a multiple-answer question (e.g., "who are the children of Abraham Lincoln?"). Moreover, traversal over the knowledge graph (see Guu et al., 2015) can be naturally handled by a series of Gaussian convolutions, which generalizes the addition of vectors. In fact, we model each relation as a Gaussian with mean and variance parameters. Thus a traversal on a relation corresponds to a translation of the mean and an addition of the variances.

The proposed question answering model is able to handle not only the case where the answer to a question is associated with an atomic fact, which is called simple Q&A (Bordes et al., 2015), but also questions that require composition of relations (path queries in Guu et al. (2015)) and conjunction of queries. An example flow of how our model deals with the question "Who plays forward for Borussia Dortmund?" is shown in Figure 2 in Section 3.

This paper is structured as follows. In Section 2, we describe how the Gaussian scoring function (1) can be used to embed the entities in a knowledge base into a continuous vector space. We call our model TransGaussian because of its similarity to the TransE model proposed by Bordes et al. (2013). Then in Section 3, we describe our question answering model. In Section 4, we carry out experiments on the WorldCup2014 dataset we collected. The dataset is relatively small, but it allows us to evaluate not only simple questions but also path queries and conjunctions of queries. The proposed TransGaussian embedding with the question answering model achieves significantly higher accuracy than the vanilla TransE embedding or TransE trained with compositional relations (Guu et al., 2015) combined with the same question answering model.

In this section, we describe the proposed TransGaussian model based on the Gaussian attention model (1). While it is possible to train a network that computes the embedding in a single pass (Bordes et al., 2015) or over multiple passes (Li et al., 2015), it is more efficient to offload the embedding as a separate step for question answering based on a large static knowledge base.

Let E be the set of entities and R be the set of relations. A knowledge base is a collection of triplets (s, r, o), where we call s in E, r in R, and o in E the subject, the relation, and the object of the triplet, respectively. Each triplet encodes a fact, for example, (Albert_Einstein, has_profession, theoretical_physicist). All the triplets given in a knowledge base are assumed to be true. However, generally speaking, a triplet may be true or false. Thus knowledge base embedding aims at training a model that predicts whether a triplet is true or not, given some parameterization of the entities and relations (Bordes et al., 2011; 2013; Nickel et al., 2011; Socher et al., 2013; Wang et al., 2014).
In this paper, we associate a vector v_s in R^d with each entity s in E, and we associate each relation r with a Gaussian with mean \delta_r in R^d and covariance \Sigma_r, so that the score of a triplet (s, r, o) is

score(s, r, o) = \log \mathcal{N}(v_o \mid v_s + \delta_r, \Sigma_r).     (2)

That is, we are modeling the relation between the subject v_s and the object v_o as a translation \delta_r, which is equivalent to the TransE model (Bordes et al., 2013). We allow the covariance \Sigma_r to depend on the relation to handle one-to-many relations (e.g., the profession_has_person relation) and capture the shape of the distribution of the set of objects that can be in the triplet. We call our model TransGaussian because of its similarity to TransE (Bordes et al., 2013).

Parameterization. For computational efficiency, we will restrict the covariance matrix \Sigma_r to be diagonal in this paper. Furthermore, in order to ensure that \Sigma_r is strictly positive definite, we employ the exponential linear unit (ELU, Clevert et al., 2015) and parameterize \Sigma_r as follows:

\Sigma_r = \mathrm{diag}\big(\mathrm{ELU}(m_{r,1}) + 1 + \epsilon, \; \ldots, \; \mathrm{ELU}(m_{r,d}) + 1 + \epsilon\big),

where m_{r,j} (j = 1, ..., d) are the unconstrained parameters that are optimized during training and \epsilon is a small positive value that ensures the positivity of the variance during numerical computation. The ELU is defined as

\mathrm{ELU}(x) = x  if x > 0,  and  \exp(x) - 1  if x \le 0.

Ranking loss. Suppose we have a set of triplets T = \{(s_i, r_i, o_i)\}_{i=1}^{N} from the knowledge base. Let \mathcal{N}(s, r) be the set of incorrect objects t for which the triplet (s, r, t) is false. Our objective function uses the ranking loss to measure the margin between the scores of true answers and those of false answers, and it can be written as follows:

\min_{\{v_e : e \in E\}, \{\delta_r, M_r : r \in \bar{R}\}} \; \frac{1}{N} \sum_{(s,r,o) \in T} \mathbb{E}_{t' \sim \mathcal{N}(s,r)} \big[\gamma - score(s, r, o) + score(s, r, t')\big]_+ \; + \; \lambda \Big( \sum_{e \in E} \|v_e\|^2 + \sum_{r \in \bar{R}} \big( \|\delta_r\|^2 + \|M_r\|_F^2 \big) \Big),     (3)

where N = |T|, \gamma is the margin parameter, and M_r denotes the diagonal matrix with m_{r,j}, j = 1, ..., d on the diagonal; the function [.]_+ is defined as [x]_+ = max(0, x). Here, we treat an inverse relation as a separate relation and denote by \bar{R} = R \cup R^{-1} the set of all the relations, including both the relations in R and their inverse relations; a relation \bar{r} is the inverse relation of r if (s, r, o) implies (o, \bar{r}, s) and vice versa. Moreover, \mathbb{E}_{t' \sim \mathcal{N}(s,r)} denotes the expectation with respect to the uniform distribution over the set of incorrect objects, which we approximate with 10 random samples in the experiments. Finally, the last terms are l2 regularization terms for the embedding parameters."}, {"section_index": "3", "section_name": "2.2 COMPOSITIONAL RELATIONS", "section_text": "Guu et al. (2015) have recently shown that training TransE with compositional relations can make it competitive with more complex models, although TransE is much simpler compared to, for example, neural tensor networks (NTN, Socher et al. (2013)) and TransH (Wang et al., 2014). Here, a compositional relation is a relation that is composed as a series of relations in R; for example, grand_father_of can be composed by first applying the parent_of relation and then the father_of relation, which can be seen as a traversal over a path on the knowledge graph.

The TransGaussian model can naturally handle and propagate the uncertainty over such a chain of relations by convolving the Gaussian distributions along the path. That is, the score of an entity o to be in the k-step relation r_1/r_2/.../r_k with subject s, which we denote by the triplet (s, r_1/r_2/.../r_k, o), is given as

score(s, r_1/r_2/\cdots/r_k, o) = \log \mathcal{N}\Big(v_o \,\Big|\, v_s + \sum_{j=1}^{k} \delta_{r_j}, \; \sum_{j=1}^{k} \Sigma_{r_j}\Big).     (4)

In addition to the atomic triplets, we use a set P = \{(s_i, r_{i,1}/\cdots/r_{i,k_i}, o_i)\}_{i=1}^{|P|} of randomly sampled paths from the knowledge graph. Here a relation r_{i,j} in a path can be a relation in R or an inverse relation in R^{-1}. With the scoring function (4), the generalized training objective for compositional relations can be written identically to (3), except for replacing T with T \cup P and replacing N with N' = |T \cup P|. A minimal sketch of this path scoring is given below.
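A minimal sketch of the path score (4), assuming the diagonal covariances of Section 2.1: composing relations along a path simply adds the relation means and adds the (diagonal) covariances before evaluating the Gaussian log-density. The function name is ours.

import numpy as np

def score_path(v_s, v_o, deltas, sigmas):
    # v_s, v_o: (d,) subject/object vectors.
    # deltas, sigmas: lists of (d,) relation means and diagonal covariances
    # along the path r_1/.../r_k.
    mu = v_s + np.sum(deltas, axis=0)       # translate by every relation mean
    var = np.sum(sigmas, axis=0)            # uncertainty accumulates along the path
    diff = v_o - mu
    return -0.5 * (np.sum(diff**2 / var) + np.sum(np.log(2.0 * np.pi * var)))

d = 4
v_s, v_o = np.random.randn(d), np.random.randn(d)
deltas = [np.random.randn(d) for _ in range(2)]   # e.g. plays_in_club / is_in_country
sigmas = [np.full(d, 0.2), np.full(d, 0.3)]
print(score_path(v_s, v_o, deltas, sigmas))

Note that a single relation (k = 1) recovers the atomic score (2), so the same code scores both triplets and paths.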
"}, {"section_index": "4", "section_name": "3.1 ENTITY RECOGNITION", "section_text": "Given a set of question-answer pairs, in which the question is phrased in natural language and the answer is an entity in the knowledge base, our goal is to train a model that learns the mapping from the question to the correct entity. Our question answering model consists of three steps: entity recognition, relation composition, and conjunction. We first identify a list of entities mentioned in the question (which is assumed to be provided by an oracle in this paper). If the question is "Who plays Forward for Borussia Dortmund?" then the list would be [Forward, Borussia_Dortmund]. The next step is to predict the path of relations on the knowledge graph starting from each entity in the list extracted in the first step. In the above example, this will be (smooth versions of) /Forward/position_played_by/ and /Borussia_Dortmund/has_player/, predicted as series of Gaussian convolutions. In general, we can have multiple relations appearing in each path. Finally, we take a product of all the Gaussian attentions and renormalize it, which is equivalent to Bayes' rule with independent observations (paths) and a noninformative prior.

We assume that there is an oracle that provides a list containing all the entities mentioned in the question, because (1) a domain-specific entity recognizer can be developed efficiently (Williams et al., 2015), and (2) entity recognition is generally a challenging task, and it is beyond the scope of this paper to show whether there is any benefit in training our question answering model jointly with an entity recognizer. We assume that the number of extracted entities can be different for each question.

We train a long short-term memory (LSTM, Hochreiter & Schmidhuber, 1997) network that emits an output h_t for each token in the input sequence. Then we compute the attention over the hidden states for each extracted entity e by normalizing the scores f(v_e, h_t) over the tokens t, i.e., p_{t,e} \propto \exp f(v_e, h_t), where

f(v_e, h_t) = u_f^\top \mathrm{ReLU}(W_{f,v} v_e + W_{f,h} h_t + b_1) + b_2.     (5)

Figure 2: The input to the system is a question in natural language. Two entities, Forward and Borussia_Dortmund, are identified in the question and associated with point-mass distributions centered at the corresponding entity vectors. An LSTM encodes the input into a sequence of output vectors of the same length. Then we take the average of the output vectors weighted by the attention p_{t,e} for each recognized entity e to predict the weight \alpha_{r,e} for relation r associated with entity e. We form a Gaussian attention over the entities for each entity e by convolving the corresponding point mass with the (pre-trained) Gaussian embeddings of the relations weighted by \alpha_{r,e}, according to Eq. (6). The final prediction is produced by taking the product and normalizing the Gaussian attentions.

Next, we use the weights p to compute the weighted sum over the hidden states h_t as

o_e = \sum_{t} p_{t,e} h_t.

Then we compute the weights \alpha_{r,e} over all the relations as \alpha_{r,e} = \mathrm{ReLU}(w_r^\top o_e) (for all r in R \cup R^{-1}). Here the rectified linear unit is used to ensure the positivity of the weights. Note, however, that the weights should not be normalized, because we may want to use the same relation more than once in the same path. Making the weights positive also has the effect of making the attention sparse and interpretable, because there is no cancellation.
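The following is a minimal sketch of this relation-weight prediction; the shapes and all parameter names (W_fv, W_fh, u_f, w_rel) are illustrative stand-ins for the learned parameters in Eq. (5), not the paper's actual variables.

import numpy as np

T, H, d, R = 6, 8, 4, 5            # tokens, LSTM size, entity dim, # relations
rng = np.random.default_rng(0)
h = rng.normal(size=(T, H))         # LSTM outputs, one per token
v_e = rng.normal(size=d)            # embedding of one recognized entity

W_fv, W_fh = rng.normal(size=(H, d)), rng.normal(size=(H, H))
b1, u_f, b2 = np.zeros(H), rng.normal(size=H), 0.0
w_rel = rng.normal(size=(R, H))     # one scoring vector per relation in R and R^-1

relu = lambda z: np.maximum(z, 0.0)
f = relu(W_fv @ v_e + (W_fh @ h.T).T + b1) @ u_f + b2   # Eq. (5), score per token
p = np.exp(f - f.max()); p /= p.sum()                   # attention p_{t,e}
o_e = p @ h                                             # weighted sum of the h_t
alpha = relu(w_rel @ o_e)                               # non-negative, unnormalized
print(alpha)

Because alpha is produced by a ReLU rather than a softmax, several entries can be exactly zero and any entry can exceed one, matching the sparsity and repeated-relation arguments above.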
For each extracted entity e, we view the extracted entity and the answer of the question as the subject and the object in some triplet (e, p, o), respectively, where the path p is inferred from the question via the weights \alpha_{r,e} described above. Accordingly, the score for each candidate answer o can be expressed using (1) as

score_e(v_o) = \log \mathcal{N}(v_o \mid \mu_{e,\alpha,KB}, \Sigma_{e,\alpha,KB}),     (6)

where \mu_{e,\alpha,KB} = v_e + \sum_{r \in \bar{R}} \alpha_{r,e} \delta_r and \Sigma_{e,\alpha,KB} = \sum_{r \in \bar{R}} \alpha_{r,e} \Sigma_r are obtained by convolving the point mass at v_e with the Gaussian embeddings of the relations, weighted by \alpha_{r,e}.

Let E(q) be the set of entities recognized in the question q. The final step of our model is to take the conjunction of the Gaussian attentions derived in the previous step. This step is simply carried out by multiplying the Gaussian attentions as follows:

score(v_o \mid E(q), \Theta) = \sum_{e \in E(q)} \log \mathcal{N}(v_o \mid \mu_{e,\alpha,KB}, \Sigma_{e,\alpha,KB}) = -\frac{1}{2} \sum_{e \in E(q)} (v_o - \mu_{e,\alpha,KB})^\top \Sigma_{e,\alpha,KB}^{-1} (v_o - \mu_{e,\alpha,KB}) + \mathrm{const.},     (7)

which is again a (logarithm of a) Gaussian scoring function, where \mu_{e,\alpha,KB} and \Sigma_{e,\alpha,KB} are the mean and the covariance of the Gaussian attention given in (6). Here \Theta denotes all the parameters of the question-answering model.

Suppose we have a knowledge base (E, R, T) and a trained TransGaussian model ({v_e}_{e in E}, {(\delta_r, \Sigma_r)}_{r in \bar{R}}), where \bar{R} is the set of all relations including the inverse relations. At training time, we assume the training set is a set of supervised question-answer pairs {(q_i, E(q_i), a_i) : i = 1, 2, ..., m}. Here, q_i is a question formulated in natural language, E(q_i), a subset of E, is the set of knowledge base entities that appear in the question, and a_i in E is the answer to the question. For example, on a knowledge base of soccer players, a valid training sample could be

("Who plays forward for Borussia Dortmund?", [Forward, Borussia_Dortmund], Marco_Reus).

Note that the answer to a question is not necessarily unique, and we allow a_i to be any of the true answers in the knowledge base. At test time, our model is shown (q, E(q)) and the task is to find a. We denote the set of answers to q_i by A(q_i).

To train our question-answering model, we minimize the objective function

\min_{\Theta} \; \frac{1}{m} \sum_{i=1}^{m} \Big( \mathbb{E}_{t' \sim \mathcal{N}(q_i)} \big[\gamma - score(v_{a_i} \mid E(q_i), \Theta) + score(v_{t'} \mid E(q_i), \Theta)\big]_+ + \nu \sum_{e \in E(q_i)} \sum_{r \in \bar{R}} |\alpha_{r,e}| \Big) + \lambda \|\Theta\|^2,     (8)

where \mathbb{E}_{t' \sim \mathcal{N}(q_i)} is the expectation with respect to a uniform distribution over all incorrect answers to q_i, which we approximate with 10 random samples. We assume that the number of relations implied in a question is small compared to the total number of relations in the knowledge base. Hence the coefficients \alpha_{r,e} computed for each question q_i are regularized by their l1 norms.
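A minimal sketch of the conjunction step (7): under diagonal covariances, a candidate answer is scored by summing per-entity Gaussian log-densities, which corresponds (up to normalization) to multiplying the Gaussian attentions. All function names are ours.

import numpy as np

def log_gauss(v, mu, var):
    diff = v - mu
    return -0.5 * (np.sum(diff**2 / var) + np.sum(np.log(2.0 * np.pi * var)))

def conjunction_score(v_cand, attentions):
    # attentions: list of (mu_e, var_e) pairs, one per entity in E(q), Eq. (6).
    return sum(log_gauss(v_cand, mu, var) for mu, var in attentions)

d = 4
rng = np.random.default_rng(1)
attentions = [(rng.normal(size=d), np.full(d, 0.3)),   # e.g. from "forward"
              (rng.normal(size=d), np.full(d, 0.5))]   # e.g. from the club
candidates = rng.normal(size=(10, d))                  # all entity embeddings
scores = np.array([conjunction_score(c, attentions) for c in candidates])
print(int(scores.argmax()))                            # predicted answer index

A candidate must lie in the high-density region of every per-entity Gaussian to score well, which is exactly the intended conjunction semantics.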
"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "As a demonstration of the proposed framework, we perform question answering on a dataset of soccer players. In this work, we consider two types of questions. A path query is a question that contains only one named entity from the knowledge base, and its answer can be found from the knowledge graph by walking down a path consisting of a few relations. A conjunctive query is a question that contains more than one entity, and the answer is given as the conjunction of all path queries starting from each entity. Furthermore, we experimented on a knowledge base completion task with TransGaussian embeddings to test its capability of generalizing to unseen facts. Since knowledge base completion is not the main focus of this work, we include the results in the Appendix.

We build a knowledge base of football players that participated in FIFA World Cup 2014 (the original dataset can be found at https://datahub.io/dataset/fifa-world-cup-2014-all-players). The original dataset consists of players' information such as nationality, positions on the field, ages, etc. We picked a few attributes and constructed 1127 entities and 6 atomic relations. The entities include 736 players, 297 professional soccer clubs, 51 countries, 39 numbers and 4 positions. The six atomic relations are plays_in_club, plays_position, is_aged, wears_number, plays_for_country, and is_in_country.

Given the entities and relations, we transformed the dataset into a set of 3977 triplets. A list of sample triplets can be found in the Appendix. Based on these triplets, we created two sets of question answering tasks, which we call path queries and conjunctive queries, respectively. The answer to every question is always an entity in the knowledge base, and a question can involve one or two triplets. The questions are generated as follows.

Path queries. Among the paths on the knowledge graph, there are some natural compositions of relations; e.g., plays_in_country (PLAYER -> COUNTRY) can be decomposed as the composition of plays_in_club (PLAYER -> CLUB) and is_in_country (CLUB -> COUNTRY). In addition to the atomic relations, we manually picked a few meaningful compositions of relations and formed query templates, which take the form "find e in E such that (s, p, e) is true", where s is the subject and p can be an atomic relation or a path of relations. To formulate a set of path-based question-answer pairs, we manually created one or more question templates for every query template (see Table 5). Then, for a particular instantiation of a query template with subject and object entities, we randomly select a question template to generate a question given the subject; the object entity becomes the answer of the question. See Table 6 for the list of composed relations, sample questions, and answers. Note that all atomic relations in this dataset are many-to-one, while these composed relations can be one-to-many or many-to-many as well.

Conjunctive queries. To generate question-and-answer pairs of conjunctive queries, we first picked three pairs of relations and used them to create query templates of the form "Find e in E such that both (s1, r1, e) and (s2, r2, e) are true." (see Table 5). For a pair of relations r1 and r2, we enumerated all pairs of entities s1, s2 that can be their subjects and formulated the corresponding query in natural language using question templates, in the same way as for path queries. See Table 7 for a list of sample questions and answers.

As a result, we created 8003 question-and-answer pairs for path queries and 2208 pairs for conjunctive queries, which are partitioned into train / validation / test subsets. We refer to Table 1 for more statistics about the dataset. Templates for generating the questions are listed in Table 5."}, {"section_index": "6", "section_name": "4.2 EXPERIMENTAL SETUP", "section_text": "To perform question answering under our proposed framework, we first train the TransGaussian model on the WorldCup2014 dataset. In addition to the atomic triplets, we randomly sampled 50000 paths of length 1 or 2 from the knowledge graph and trained a TransGaussian model compositionally as described in Section 2.2. An inverse relation is treated as a separate relation. Following the naming convention of Guu et al. (2015), we denote this trained embedding by TransGaussian (COMP). We found that the learned embedding possesses some interesting properties: some dimensions of the embedding space are dedicated to representing a particular relation, and players are clustered by their attributes when the entities' embeddings are projected to the corresponding lower-dimensional subspaces. We elaborate on and illustrate these properties in the Appendix. A small sketch of the path sampling used for this compositional training is shown below.
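As an illustration of this compositional training data, the following sketch samples random paths of length 1 or 2 from a toy knowledge graph, treating each relation as traversable in both directions (r and r^-1). The triplets are examples from Table 3; the sampling code is ours, not the authors'.

import random

triplets = [("lionel_messi", "plays_in_club", "fc_barcelona"),
            ("fc_barcelona", "is_in_country", "spain"),
            ("david_villa", "plays_for_country", "spain")]

edges = {}
for s, r, o in triplets:                   # both directions are walkable
    edges.setdefault(s, []).append((r, o))
    edges.setdefault(o, []).append((r + "^-1", s))

def sample_path(max_len=2):
    start = random.choice(list(edges))
    node, rels = start, []
    for _ in range(random.randint(1, max_len)):
        r, node = random.choice(edges[node])
        rels.append(r)
    return start, "/".join(rels), node     # e.g. (messi, plays_in_club/is_in_country, spain)

random.seed(0)
print([sample_path() for _ in range(3)])

Each sampled (subject, path, object) tuple is then scored with Eq. (4) and added to the ranking-loss objective alongside the atomic triplets.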
Baseline methods. We also trained a TransGaussian model only on the atomic triplets; we denote such a model by TransGaussian (SINGLE). Since no inverse relations were involved when TransGaussian (SINGLE) was trained, to use this embedding in the question answering tasks we represent the inverse relations as follows: for each relation r with mean \delta_r and variance \Sigma_r, we model its inverse r^{-1} as a Gaussian attention with mean -\delta_r and variance equal to \Sigma_r.

We also trained TransE models on the WorldCup2014 dataset using the code released by the authors of Guu et al. (2015). Likewise, we use TransE (SINGLE) to denote the model trained with atomic triplets only and TransE (COMP) to denote the model trained with the union of triplets and paths. Note that TransE can be considered a special case of TransGaussian where the variance matrix is the identity; hence the scoring formula Eq. (7) is applicable to TransE as well.

Training configurations. For all models, the dimension of the entity embeddings was set to 30. The hidden size of the LSTM was set to 80. Word embeddings were trained jointly with the question answering model, and the dimension of the word embeddings was set to 40. We employed Adam (Kingma & Ba, 2014) as the optimizer. All parameters were tuned on the validation set. Under the same setting, we experimented with two cases: first, we trained models for path queries and conjunctive queries separately; furthermore, we trained a single model that addresses both types of queries. We present the results of the latter case in the next subsection, while the results of the former are included in the Appendix.

Evaluation metrics. During test time, our model receives a question in natural language and a list of knowledge base entities contained in the question. It then predicts the mean and variance of the Gaussian attention formulated in Eq. (7), which is expected to capture the distribution of all positive answers. We rank all entities in the knowledge base by their scores under this Gaussian attention. Next, for each entity which is a correct answer, we check its rank relative to all incorrect answers and call this rank the filtered rank. For example, if a correct entity is ranked above all negative answers except for one, it has filtered rank two. We compute this rank for all true answers and report the mean filtered rank and H@1, which is the percentage of true answers that have filtered rank 1."}, {"section_index": "7", "section_name": "4.3 EXPERIMENTAL RESULTS", "section_text": "We present the results of joint learning in Table 2. These results show that TransGaussian works better than TransE in general. In fact, TransGaussian (COMP) achieved the best performance in almost all aspects.
Most notably, it achieved the highest H@1 rates on challenging questions such as "where is the club that edin dzeko plays for?" (#11, composition of two relations) and "who are the defenders on german national team?" (#14, conjunction of two queries).

The same table shows that TransGaussian benefits remarkably from compositional training. For example, compositional training improved TransGaussian's H@1 rate by nearly 60% on queries about players from a given country (#8) and queries about players who play a particular position (#9). It also boosted TransGaussian's performance on all conjunctive queries (#13-#15) significantly.

To understand TransGaussian (COMP)'s weak performance on answering queries about the professional football clubs located in a given country (#10) and queries about professional football clubs that have players from a particular country (#12), we tested its capability of modeling the composed relations by feeding the correct relations and subjects at test time. It turns out that these two relations were not modeled well by the TransGaussian (COMP) embedding, which limits its performance in question answering (see Table 8 in the Appendix for quantitative evaluations). The same limitation was found in the other three embeddings as well.

Note that all the models compared in Table 2 use the proposed Gaussian attention model, because TransE is the special case of TransGaussian where the variance is fixed to one. Thus the main differences are whether the variance is learned and whether the embedding was trained compositionally. Finally, we refer to Tables 9 and 10 in the Appendix for experimental results of models trained on path and conjunctive queries separately.

Table 1: Some statistics of the WorldCup2014 dataset

The work of Vilnis & McCallum (2014) is similar to our Gaussian attention model. They discuss many advantages of the Gaussian embedding; for example, it is arguably a better way of handling asymmetric relations and entailment. However, the work was presented in the word2vec (Mikolov et al., 2013)-style word embedding setting, and the Gaussian embedding was used to capture the diversity in the meaning of a word. Our Gaussian attention model extends their work to a more general setting in which any memory item can be addressed through a concept represented as a Gaussian distribution over the memory items.

Table 2: Results of joint learning with path queries and conjunctive queries on WorldCup2014

Bordes et al. (2014; 2015) proposed a question-answering model that embeds both questions and their answers into a common continuous vector space. Their method in Bordes et al. (2015) can combine multiple knowledge bases and even generalize to a knowledge base that was not used during training. However, their method is limited to the simple question answering setting in which the answer to each question is associated with a single triplet in the knowledge base. In contrast, our method can handle both composition of relations and conjunction of conditions, which are both naturally enabled by the proposed Gaussian attention model.

Neelakantan et al. (2015a) proposed a method that combines relations to deal with compositional relations for knowledge base completion. Their key technical contribution is to use recurrent neural networks (RNNs) to encode a chain of relations. When we restrict ourselves to path queries, question answering can be seen as a sequence transduction task (Graves, 2012; Sutskever et al., 2014) in which the input is text and the output is a series of relations.
If we use RNNs as a decoder, our model would be able to handle non-commutative composition of relations, which the current weighted convolution cannot handle well. Another interesting connection to our work is that they take the maximum of the inner-product scores (see also Weston et al., 2013; Neelakantan et al., 2015b), which are computed along multiple paths connecting a pair of entities. Representing a set as a collection of vectors and taking the maximum over the inner-product scores is a natural way to represent a set of memory items. The Gaussian attention model we propose in this paper, however, has the advantage of differentiability and composability.

In this paper, we have proposed the Gaussian attention model, which can be used in a variety of contexts where we can assume that the distance between the memory items in the latent space is compatible with some notion of semantics. We have shown that the proposed Gaussian scoring function can be used for knowledge base embedding, achieving competitive accuracy. We have also shown that our embedding model can naturally propagate uncertainty when we compose relations together. Our embedding model also benefits from the compositional training proposed by Guu et al. (2015). Furthermore, we have demonstrated the power of the Gaussian attention model on a challenging question answering problem which involves both composition of relations and conjunction of queries. Future work includes experiments on natural question answering datasets and end-to-end training including the entity extractor."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Daniel Tarlow, Nate Kushman, and Kevin Gimpel for valuable discussions.

(Table 2; each cell is H@1 (%) / mean filtered rank.)

#   Sample question                                        TransE(SINGLE)  TransE(COMP)  TransGaussian(SINGLE)  TransGaussian(COMP)
1   which club does alan pulido play for?                  88.59 / 1.18    91.95 / 1.11   96.64 / 1.04          98.66 / 1.01
2   what position does gonzalo higuain play?               100.00 / 1.00   98.11 / 1.03   98.74 / 1.01          100.00 / 1.00
3   how old is samuel etoo?                                67.11 / 1.44    90.79 / 1.13   94.74 / 1.08          97.37 / 1.04
4   what is the jersey number of mario balotelli?          45.00 / 1.89    83.57 / 1.22   97.14 / 1.03          99.29 / 1.01
5   which country is thomas mueller from?                  94.40 / 1.06    94.40 / 1.06   96.80 / 1.04          98.40 / 1.02
6   which country is the soccer team fc porto based in?    98.48 / 1.02    98.48 / 1.02   93.94 / 1.06          95.45 / 1.05
7   who plays professionally at liverpool fc?              95.12 / 1.10    90.24 / 1.20   98.37 / 1.04          96.75 / 1.04
8   which player is from iran?                             89.86 / 1.51    76.81 / 2.07   38.65 / 2.96          99.52 / 1.00
9   name a player who plays goalkeeper?                    98.96 / 1.01    69.79 / 1.82   42.71 / 5.52          100.00 / 1.00
10  which soccer club is based in mexico?                  22.03 / 13.94   30.51 / 8.84   6.78 / 10.66          16.95 / 21.14
11  where is the club that edin dzeko plays for?           52.63 / 3.88    57.24 / 2.10   47.37 / 2.27          78.29 / 1.41
12  name a soccer club that has a player from australia?   30.43 / 12.08   33.70 / 11.47  13.04 / 11.64         19.57 / 17.57
    Overall (Path Query)                                   74.16 / 3.11    77.39 / 2.56   69.54 / 3.02          85.94 / 3.52
13  who plays forward for fc barcelona?                    97.55 / 1.06    76.07 / 1.66   93.25 / 1.24          98.77 / 1.02
14  who are the defenders on german national team?         95.93 / 1.06    69.92 / 2.33   65.04 / 2.04          100.00 / 1.00
15  which player in ssc napoli is from argentina?          88.81 / 1.17    76.12 / 1.76   88.81 / 1.35          97.76 / 1.03
    Overall (Conj. Query)                                  94.29 / 1.09    74.29 / 1.89   83.57 / 1.51          98.81 / 1.02"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio.
Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Antoine Bordes, Jason Weston, and Nicolas Usunier. Open question answering with weakly supervised embedding models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 165-180. Springer, 2014.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. In EMNLP, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.

Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. Compositional vector space models for knowledge base completion. arXiv preprint arXiv:1504.06662, 2015a.

Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. Efficient non-parametric estimation of multiple embeddings per word in vector space. arXiv preprint arXiv:1504.06654, 2015b.

Alan Mathison Turing. On computable numbers, with an application to the Entscheidungsproblem. A correction. Proceedings of the London Mathematical Society, 2(1):544, 1938.

Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In AAAI, pp. 1112-1119. Citeseer, 2014.

Jason D. Williams, Eslam Kamal, Mokhtar Ashour, Hani Amr, Jessica Miller, and Geoff Zweig. Fast and easy language understanding for dialog systems with Microsoft Language Understanding Intelligent Service (LUIS). In 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 159, 2015.

Jason Weston, Ron J. Weiss, and Hector Yee. Nonlinear latent factorization by embedding multiple user interests. In Proceedings of the 7th ACM Conference on Recommender Systems, pp. 65-68. ACM, 2013."}, {"section_index": "10", "section_name": "A WORLDCUP2014 DATASET", "section_text": "Table 3: Sample atomic triplets

Subject              Relation            Object
david_villa          plays_for_country   spain
lionel_messi         plays_in_club       fc_barcelona
antoine_griezmann    plays_position      forward
cristiano_ronaldo    wears_number        7
fulham_fc            is_in_country
england
lukas_podolski       is_aged             29

Table 4: Statistics of the WorldCup2014 dataset

# entities                                                                    1127
# atomic relations                                                            6
# atomic triplets                                                             3977
# relations (atomic and compositional) in path queries                        12
# question-answer pairs in path queries (train / validation / test)           5620 / 804 / 1579
# types of questions in conjunctive queries                                   3
# question-answer pairs in conjunctive queries (train / validation / test)    1564 / 224 / 420
size of vocabulary                                                            1781

Table 5: Templates of questions. In the table, (player), (club), (position) are placeholders for named entities of the associated type. (country_1) is a placeholder for a country name, while (country_2) is a placeholder for the adjectival form of a country.

1. Find e in E : ((player), plays_in_club, e) is true
   which club does (player) play for ? / which professional football team does (player) play for ? / which football club does (player) play for ?
2. Find e in E : ((player), plays_position, e) is true
   what position does (player) play ?
3. Find e in E : ((player), is_aged, e) is true
   how old is (player) ? / what is the age of (player) ?
4. Find e in E : ((player), wears_number, e) is true
   what is the jersey number of (player) ? / what number does (player) wear ?
5. Find e in E : ((player), plays_for_country, e) is true
   what is the nationality of (player) ? / which national team does (player) play for ? / which country is (player) from ?
6. Find e in E : ((club), is_in_country, e) is true
   which country is the soccer team (club) based in ?
7. Find e in E : ((club), plays_in_club^-1, e) is true
   name a player from (club) ? / who plays at the soccer club (club) ? / who is from the professional football team (club) ? / who plays professionally at (club) ?
8. Find e in E : ((country_1), plays_for_country^-1, e) is true
   which player is from (country_1) ? / name a player from (country_1) ? / who is from (country_1) ? / who plays for the (country_1) national football team ?
9. Find e in E : ((position), plays_position^-1, e) is true
   name a player who plays (position) ? / who plays (position) ?
10. Find e in E : ((country_1), is_in_country^-1, e) is true
    which soccer club is based in (country_1) ? / name a soccer club in (country_1) ?
11. Find e in E : ((player), plays_in_club / is_in_country, e) is true
    which country does (player) play professionally in ? / where is the football club that (player) plays for ?
12. Find e in E : ((country_1), plays_for_country^-1 / plays_in_club, e) is true
    which professional football team do players from (country_1) play for ? / name a soccer club that has a player from (country_1) ? / which professional football team has a player from (country_1) ?
13. Find e in E : ((position), plays_position^-1, e) is true and ((club), plays_in_club^-1, e) is true
    who plays (position) for (club) ? / who are the (position) at (club) ? / name a (position) that plays for (club) ?
14. Find e in E : ((position), plays_position^-1, e) is true and ((country_1), plays_for_country^-1, e) is true
    who plays (position) for (country_1) ? / who are the (position) on (country_1) national team ? / name a (position) from (country_1) ? / which (country_2) footballer plays (position) ? / name a (country_2) (position) ?
15. Find e in E : ((club), plays_in_club^-1, e) is true and ((country_1), plays_for_country^-1, e) is true
    who are the (country_2) players at (club) ? / which (country_2) footballer plays for (club) ? / name a (country_2) player at (club) ? / which player in (club) is from (country_1) ?

Table 6: (Composed) relations and sample questions in path queries

Table 7: Conjunctive queries and sample questions"}, {"section_index": "11", "section_name": "B TRANSGAUSSIAN EMBEDDING OF WORLDCUP2014", "section_text": "We trained our TransGaussian model on triplets and paths from the WorldCup2014 dataset and illus
trated the embeddings in Fig 3 and4 Recall that we modeled every relation as a Gaussian with. diagonal covariance matrix. Fig3 shows the learned variance parameters of different relations. Each. row corresponds to the variances of one relation. Columns are permuted to reveal the block struc. ture. From this figure, we can see that every relation has a small variance in two or more dimensions This implies that the coordinates of the embedding space are partitioned into semantically coheren. clusters each of which represent a particular attribute of a player (or a football club). To verify this further, we picked the two coordinates in which a relation (e.g. plays position) has the least. variance and projected the embedding of all valid subjects and objects (e.g. players and positions. of the relation to this 2 dimensional subspace. See Fig.4 The relation between the subjects and the. objects are simply translation in the projection when the corresponding subspace is two dimensiona (e.g., plays_position relation in Fig.4|(a)). The same is true for other relations that requires. larger dimension but it is more challenging to visualize in two dimensions. For relations that have. a large number of unique objects, we only plotted for the eight objects with the most subjects fo. clarity of illustration.\nFurthermore, in order to elucidate whether we are limited by the capacity of the TransGaussiar embedding or the ability to decode question expressed in natural language, we evaluated the test. question-answer pairs using the TransGaussian embedding composed according to the ground-truth relations and entities. The results were evaluated with the same metrics as in Sec. 4.3 This es timation is conducted for TransE embeddings as well. See Table 8 for the results. Compared tc Table2] the accuracy of TransGaussian (COMP) is higher on the atomic relations and path queries but lower on conjunctive queries. This is natural because when the query is simple there is not. much room for the question-answering network to improve upon just combining the relations ac. cording to the ground truth relations, whereas when the query is complex the network could com. bine the embedding in a more creative way to overcome its limitation. In fact, the two queries. (#10 and #12) that TransGaussian (COMP) did not perform well in Table2|pertain to a single re. lation is_in_country-1 (#10) and a composition of two relations plays_for country-1. p1ays_in_c1ub (#12). The performance of the two queries were low even when the ground truth\n# Relation Type Sample question Sample answer which club does alan pulido play for ?. tigres_uanl 1 plays_in_club many-to-one which professional football team does klaas jan huntelaar play for ? fc _schalke_04 2 plays-position many-to-one what position does gonzalo higuain play ?. ssc_napoli how old is samuel etoo ?. 33 3 is_aged many-to-one what is the age of luis suarez ?. 27 what is the jersey number of mario balotelli ?. 9 4 wears_number many-to-one what number does shinji okazaki wear ?. 9 which country is thomas mueller from ?. 5 germany plays_for_country many-to-one what is the nationality of helder postiga ?. portugal 6 is_in_country many-to-one which country is the soccer team fc porto based in ?. portugal who plays professionally at liverpool fc ?. steven-gerrard 7 plays-in-club-1 one-to-many name a player from as roma ? miralem-pjanic which player is from iran ?. masoud_shojaei 8 plays_for_country-1 one-to-many name a player from italy ? 
daniele_de_rossi name a player who plays goalkeeper ?. gianluiqi_buffon 9 plays-position-1 one-to-many who plays forward ? raul.jimenez which soccer club is based in mexico ?. 10 one-to-many cruz_azul_fc is_in_country-1 name a soccer club in australia ?. melbourne_victory_fc where is the club that edin dzeko plays for ?. england 11 plays-in_club / is_in-country. many-to-one which country does sime vrsaljko play professionally in ?. italy -1/plays_in_club name a soccer club that has a player from australia ?. crystal_palace_fc 12 plays_for_country many-to-many name a soccer club that has a player from spain ?. fc_barcelona\n# Relations Sample questions Entities in questions Sample answer plays position who plays forward for fc barcelona ? forward , fc_barcelona 13 lionel_messi and who are the midfielders at fc bayern muenchen ? midfielder, fc _bayern muenchen toni_kroos plays position who are the defenders on german national team ?. defender , germany per_mertesacker 14 and which mexican footballer plays forward ? defender , mexico raul_jimenez plays_for_country-1 plays_in_club which player in paris saint-germain fc is from argentina ? paris_saint-germain.fc , argentina ezequiel_lavezzi 15 and who are the korean players at beijing guoan ? beijing-guoan , korea ha_daesung plays_for.country-1\nparis_saint-germain_fc , argentina beijing-guoan , korea.\nFigure 3: Variance of each relation. Each row shows the diagonal values in the variance matrix associated with a relation. Columns are permuted to reveal the block structure..\n1 plays in_club 2 plays-position O 3 is_aged 4 wears_number C plays_for-country 6 isin.country. O 7 o1avc in club-1\n8 plays.for_country 9 plays-position-1 10 is_in_country-1. 11 in club icin\nrelations were given, which indicates that the TransGaussian embedding rather than the question answering network is the limiting factor.\nKnowledge base completion has been a common task for testing knowledge base models on their ability of generalizing to unseen facts. Here, we apply our TransGaussian model to a knowledge. completion task and show that it has competitive performance.\nWe tested on the subset of WordNet released byGuu et al.(2015). The atomic triplets in this dataset. was originally created by Socher et al.[(2013) and [Guu et al.[(2015) added path queries that were. randomly sampled from the knowledge graph. We build our TransGaussian model by training on. these triplets and paths and tested our model on the same link prediction task as done by Socher et al.(2013];Guu et al.(2015)\nAs done byGuu et al.(2015), we trained TransGaussian (SINGLE) with atomic triplets only and. trained TransGaussian (COMP) with the union of atomic triplets and paths. We did not incorporate\nVariance of relations in every dimension plays_for_country 1.6 is in country 1.4 (inv) plays_for_country (inv) is_in_country 1.2 is_aged 1.0 (inv) is_aged plays in club 0.8 (inv) plays_in_club 0.6 wears number (inv) wears_number 0.4 plays_position 0.2 (inv) plays_position 194 5111018231420228 6241 716252915228212627139123170 dimension\nTable 8: Evaluation of embeddings. 
We evaluate the embeddings by feeding the correct entities and relations from a path or conjunctive query to an embedding model and using its scoring function to retrieve the answers from the embedded knowledge base.\nTransE TransE TransGaussian TransGaussian (SINGLE) (COMP) (SINGLE) (COMP) Mean Mean Mean Mean # Relation H@1(%) Filtered H@1(%) Filtered H@1(%) Filtered H@1(%) Filtered Rank Rank Rank Rank 1 plays in.club 75.54 1.38 93.48 1.09 99.86 1.00 98.51 1.02 2 plays position 96.33 1.04 94.02 1.09 98.37 1.02 100.00 1.00 3 is_aged 55.03 1.69 91.44 1.12 96.88 1.03 100.00 1.00 4 wears_number 38.86 2.09 78.67 1.32 95.92 1.04 100.00 1.00 5 plays-for-country. 71.60 1.39 94.84 1.10 99.32 1.01 100.00 1.00 6 is_in.country 98.32 1.03 99.66 1.00 99.33 1.01 100.00 1.00 7 87.50 1.46 83.42 1.45 94.70 1.07 plays.in_club-1 97.42 1.03 8 plays-for-country-1 82.47 1.68 68.21 3.37 25.27 5.66 98.78 1.02 9 plays-position-1 100.00 1.00 75.54 1.60 13.59 24.35 98.78 1.02 10 is.in_country-1 23.11 26.92 23.48 23.27 8.32 130.59 19.41 83.61 11 plays_in_club / is_in_country 20.24 7.05 58.29 1.98 46.88 2.99 80.16 1.38 12 plays-for-country-1/plays-in-club 25.32 22.27 27.73 10.04 19.04 35.59 20.15 33.01 Overall (Path relations) 64.64 5.09 75.02 3.59 67.22 14.87 86.73 8.79 plays-position 13 and 91.85 1.20 69.97 1.82 77.45 1.83 95.38 1.06 plays-in-club-1 plays-position 14 and 91.71 1.23 66.71 2.85 51.49 4.88 97.83 1.05 plays-in-club 15 and 88.59 1.20 73.37 1.80 83.42 1.34 94.70 1.08 is_in_country-1 Overall (Conj. relations). 90.72 1.21 70.02 2.16 70.79 2.68 95.97 1.06\nFigure 4: TransGaussian entity embeddings. Crosses are the subjects and circles are the objects of a relation. Specifically, crosses are players in (a)-(e) and professional football clubs in (f)\nword embedding in this task and each entity is assigned its individual vector. Without getting param eters tuned too much, TransGaussian (COMP) obtained accuracy comparable to TransE (COMP) See Table11\nPosition Country Forward Mexico 0.3 0.2 Defender Australia Midfielder Iran 0.2 O Goalkeeper 0.1 England Columbia South Korea ET# 0.1 SZ# 0.0 Russia dnnnnnmn France 0.0 -0.1 0.1 -0.2 0.2 -0.3 -0.3 -0.4 -0.3 -0.2 0.1 0.0 0.1 0.2 -0.3 -0.2 -0.1 0.0 0.1 0.2 dimension #9 dimension #29 (a) plays_position (b) plays_for_country Age Number 0.3 23 0.2 11 0.2 26 O 28 16 0.1 0.1 30 22 27 20 25 23 LZ# 0.0 LT# uonsunwap 0.0 24 21 uo -0.1 dnnmes -0.2 0.2 0.3 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 -0.2 -0.1 0.0 0.1 0.2 dimension #26 dimension #0 (c) is_aged (d) wears_number Club Club (country) O FC Barcelona Mexico 0.2 O Manchester United FC O England 0.2 O Manchester City FC Spain ssc Napoli Turkey 0.1 Chelsea FC 0 O Italy O Real Madrid CF Germany 0t# O FC Bayern Muenchen 22# Russia dnnnnnmnn 0.0 O Juventus FC dnnnnnman 0.0 France -0.1 0.2 -0.2 0.3 -0.3 0.3 -0.2 -0.1 0.0 0.1 0.2 0.2 -0.1 0.0 0.1 0.2 dimension #5 dimension #29 (e nlavs in club (fis in country\nTable 9: Experimental results of path queries on WorldCup2014\nTable 10: Experimental results of conjunctive queries on WorldCup2014\nTable 11: Accuracy of knowledge base completion on WordNet\nModel Accuracy (%) TransE (SINGLE) 68.5 TransE (COMP) 80.3 TransGaussian (SINGLE) 58.4 TransGaussian (COMP) 76.4\nTransE TransE TransGaussian TransGaussian (SINGLE) (COMP) (SINGLE) (COMP) Mean Mean Mean Mean # Relation and sample question H@1(%)Filtered H@1(%)Filtered H@1(%) Filtered H@1(%)Filtered Rank Rank Rank Rank plays_in_club 90.60 1.12 92.62 1.11 96.64 1.03 97.99 1.03 (which club does alan pulido play for?) 
plays_position 2 100.00 1.00 98.11 1.02 98.74 1.01 100.00 1.00 (what position does gonzalo higuain play?) 3 is_aged 81.58 1.30 92.11 1.10 96.05 1.04 100.00 1.00 (how old is samuel etoo?) 4 wears_number 44.29 1.88 85.71 1.19 96.43 1.04 100.00 1.00 (what is the jersey number of mario balotelli?) 5 plays_for_country 97.60 1.02 94.40 1.11 98.40 1.02 99.20 1.01 (which country is thomas mueller from ?) 6 is_in_country 98.48 1.02 98.48 1.02 93.94 1.08 98.48 1.02 (which country is the soccer team fc porto based in ?) 7 plays-in_club- 95.12 1.08 86.99 1.38 96.75 1.03 96.75 1.03 (who plays professionally at liverpool fc?) 8 plays.for_country 81.16 1.61 72.46 2.36 40.58 3.19 93.24 1.48 (which player is from iran?) 9 plays-position 100.00 1.00 30.21 2.30 55.21 5.09 85.42 1.15 (name a player who plays goalkeeper?) 10 is_in_country 24.58 11.47 23.73 10.07 5.08 9.18 17.80 20.10 (which soccer club is based in mexico?) plays_in_club / is_in_country 11 48.68 4.24 62.50 2.07 48.03 2.41 76.97 1.50 (where is the club that edin dzeko plays for ?) plays.for.country-1/plays-in_club 12 34.78 9.49 30.43 11.26 6.52 9.88 16.30 20.27 (name a soccer club that has a player from australia ?) Overall 74.92 2.80 74.35 2.71 70.17 2.82 84.42 3.68\nTransE TransE TransGaussian TransGaussian (SINGLE) (COMP) (SINGLE) (COMP) Mean Mean Mean Mean # Relation and sample question H@1(%) Filtered H@1(%) Filtered H@1(%)Filtered H@1(%)Filtered Rank Rank Rank Rank plays position-- andplays_in_club-1 13 94.48 1.10 71.17 1.77 87.12 1.37 98.77 1.02 (who plays forward for fc barcelona?) plays position-- and plays for.country 14 95.93 1.08 76.42 2.50 64.23 2.02 100.00 1.00 (who are the defenders on german national team?) plays_in_club-' and is.in_country. 15 91.79 1.13 75.37 1.75 88.06 1.37 94.03 1.07 (which player in ssc napoli is from argentina?) 94.05 74.05 80.7 .56 97.62 1.03"}]
Sy1rwtKxg
[{"section_index": "0", "section_name": "PARALLEL STOCHASTIC GRADIENT DESCENT WITH SOUND COMBINERS", "section_text": "Saeed Maleki, Madanlal Musuvathi & Todd Mytkowicz\nsaemal, madanm, toddm}@microsoft.com\nStochastic gradient descent (SGD) is a well-known method for regression and classification tasks. However, it is an inherently sequential algorithm - at each step, the processing of the current example depends on the parameters learned from the previous examples. Prior approaches to parallelizing SGD, such as HoG- WILD! and ALLReDUCE, do not honor these dependences across threads and thus can potentially suffer poor convergence rates and/or poor scalability. This paper proposes SyMSGD, a parallel SGD algorithm that retains the sequential seman- tics of SGD in expectation. Each thread in this approach learns a local model and a probabilistic model combiner that allows the local models to be combined to produce the same result as what a sequential SGD would have produced, in expectation. This SyMSGD approach is applicable to any linear learner whose update rule is linear. This paper evaluates SyMSGD's accuracy and performance on 9 datasets on a shared-memory machine shows up-to 13 speedup over our heavily optimized sequential baseline on 16 cores"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Stochastic Gradient Descent (SGD) is an effective method for many regression and classificatior. tasks. It is a simple algorithm with few hyper-parameters and its convergence rates are well under. stood both theoretically and empirically. However, its performance scalability is severely limited by its inherently sequential computation. SGD iteratively processes its input dataset where the compu. tation at each iteration depends on the model parameters learned from the previous iteration..\nCurrent approaches for parallelizing SGD do not honor this inter-step dependence across threads. Each thread learns a local model independently and combine these models in ways that can break sequential behavior. For instance, threads in HogwiLD! Recht et al.(2011) racily update a shared global model without holding any locks. In parameter-serverLi et al.(2014a), each thread (or ma- chine) periodically sends its model deltas to a server that applies them to a global model. In ALLRE DUcE Agarwal et al.[(2014), threads periodically reach a barrier where they compute a weighted- average of the local models. Although these asynchronous parallel approaches reach the optimal solution eventually, they can produce a model that is potentially different from what a sequential SGD would have produced after processing a certain number of examples. Our experiments indi-. cate that this makes their convergence rate slower than sequential SGD in terms of total number of examples studied. Our experiments show that all these algorithms either do not scale or their accuracy on the same number of examples falls short of a sequential baseline.\nTo address this problem, this paper presents SymSGD, a parallel SGD algorithm that seeks to retain. its sequential semantics. The key idea is for each thread to generate a sound combiner that allows. the local models to be combined into a model that is the same as the sequential model. This paper. describes a method for generating sound combiners for a class of SGD algorithms in which the inter-step dependence is linear in the model parameters. This class includes linear regression, linea. regression with L2 regularization, and polynomial regression. 
While logistic regression is not in this class, our experiments show that linear regression performs equally well in classification tasks as. logistic regression for the datasets studied in this paper. Also, this approach works even if the SGD.\nYufei Ding North Carolina State Univers. 3511 Ivy Commons Drive. Raleigh, North Carolina. ydinq8@ncsu.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "D2 Wg W2 Wa Sound W1 Combiner D2 Wx Wn Wy W\nFigure 1: Convex error function for a two-dimensional feature space\ncomputation is non-linear on the input examples and other parameters such as the learning rate; only the dependence on model parameters has to be linear..\nGenerating sound combiners can be expensive. SymSGD uses random projection techniques to reduce this overhead but still retaining sequential semantics in expectation. We call this approach probabilistically sound combiners. Even though SymSGD is expected to produce the same answer as the sequential SGD, controlling the variance introduced by the random projection requires care -- a large variance can result in reduced accuracy. This paper describes the factors that affect this variance and explores the ensuing design trade-offs.\nThe resulting algorithm is fast, scales well on multiple cores, and achieves the same accuracy as. sequential SGD on sparse and dense datasets. When compared to our optimized sequential baseline,. SymSGD achieves a speedup of 3.5 to 13 on 16 cores, with the algorithm performing better. for denser datasets. Moreover, the cost of computing combiners can be efficiently amortized in a. multiclass regression as a single combiner is sufficient for all of the classes. Finally, SymSGD (like ALLReDuce) is deterministic, producing the same result for a given dataset, configuration, and. random seed. Determinism greatly simplifies the task of debugging and optimizing learning.."}, {"section_index": "3", "section_name": "SOUND AND PROBABILISTIC MODEL COMBINERS", "section_text": "Stochastic gradient descent (SGD) is a robust method for finding the parameters of a model thai minimize a given error function. Figure 1 shows an example of a (convex) error function over two dimensions x and y reaching the minimum at parameter w*. SGD starts from some, not necessarily. optimal, parameter wg (as shown in Figure[1), and repeatedly modifies w by taking a step along the. gradient of the error function for a randomly selected example at the current w. The magnitude of the. step is called the learning rate and is usually denoted by a. The gradient computed from one example is not necessarily the true gradient at w. Nevertheless, SGD enjoys robust convergence behavio. by moving along the \"right\" direction over a large number of steps. This is shown pictorially in. Figure[1] where SGD processes examples in dataset D1 to reach w1 from wg. Subsequently, SGD starts from w1 and processes a different set D2 to reach wn. There is a clear dependence betweer the processing of D1 and the processing of D2 - the latter starts from w1, which is only determined after processing D1. Our goal is to parallelize SGD despite this dependence..\nState of the art parallelization techniques such as HogwiLD! and ALLReDucE approach this prob. lem by processing Di and D2 starting from the same model wg (let us assume that there only twc. processors for now), and respectively reaching w1 and w2. Then, they combine their local models. into a global model, but do so in an ad-hoc manner. For instance, ALLReDUCE computes a weightec. 
average of w1 and w2, where the per-feature weights are chosen so as to prefer the processor thai. has larger update for that feature. This weighted average is depicted pictorially as wa. Similarly, ir. HogwiLd!, the two processors race to update the global model with their respective local model. without any locking. (HogwiLD! performs this udpate after every example, thus the size of D1 and. D2 is 1.) Both approaches do not necessarily reach wn, the model that a sequential SGD would have. reached on D and D,. While SGD is algorithmically robust to errors, such ad-hoc combinations can result in slower convergence or poor performance, as we demonstrate in Section|4.\nSound Combiner: The goal of this paper is to soundly combine local models. Looking at Figure[1 a sound combiner combines local models w1 and w2, respectively generated from datasets D1 and. D2, into a global model wy, that is guaranteed to be the same as the model achieved by the sequential\nIf we look at the second processor, it starts its computation at wg, while in a sequential execution i would have started at w1, the output of the first processor. To obtain sequential semantics, we neec to \"adjust' its computation from wg to w1. To do so, the second processor performs its computatior starting from wg + w, where w is an unknown symbolic vector. This allows the second processo to both compute a local model (resulting from the concrete part) and a sound combiner (resulting from the symbolic part) that accounts for changes in the initial state. Once both processors are done learning, second processor finds wn by setting w to w1 wg where w1 is computed by the first processor. This parallelization approach of SGD can be extended to multiple processors where al processor produce a local model and a combiner (except for the first processor) and the local models are combined sequentially using the combiners.\nWhen the update to the model parameters is linear in a SGD computation, then the dependence on the unknown w can be concisely represented by a combiner matrix, as formally described in Section|3 Many interesting machine learning algorithms, such as linear regression, linear regression with L2 regularization, and polynomial regression already have linear update to the model parameters (but not necessarily linear on the input example).\nProbabilistically Sound Combiner: The main problem with generating a sound combiner is that the combiner matrix has as many rows and columns as the total number of features. Thus, it can be effectively generated only for datasets with modest number of features. Most interesting ma chine learning problems involve learning over tens of thousands to billions of features, for which. maintaining a combiner matrix is clearly not feasible..\nWe solve this problem through dimensionality reduction. Johnson-Lindenstrauss (JL) lemma John- son & Lindenstrauss[(1984) allows us to project a set of vectors from a high-dimensional space to a random low-dimensional space while preserving distances. We use this property to reduce the size of the combiner matrix without losing the fidelity of the computation - our parallel algorithm produces the same result as the sequential SGD in expectation.\nOf course, a randomized SGD algorithm that generates the exact result in expectation is only usefu if the resulting variance is small enough to maintain accuracy and the rate of convergence. 
We observe that for the variance to be small, the combiner matrix should have small singular values Interestingly, the combiner matrix resulting from SGD is dominated by the diagonal entries as the learning rate has to be small for effective learning. We use this property to perform the JL projectior only after subtracting the identity matrix. Also, other factors that control the singular values are the learning rate, number of processors, and the frequency of combining local models. This paper explores this design space and demonstrates the feasibility of efficient parallelization of SGD that retains the convergence properties of sequential SGD while enjoying parallel scalability.\nn = arg min Q(XiW,Yi) W* wERf i=0\nthat minimizes an error function Q. For linear regression, Q(X, . w, yi) = (X, . w - y)2. When (X,, y) is evident from the context, we will simply refer to the error function as Q,(w).\nWi =Wi-1-QVQr(wi-1) )=Wi-1-Q(XrWi-1-yr)X7\nConsider a training dataset (Xnxf, Ynx1), where f is the number of features, n is the number of examples in the dataset, the ith row of matrix X, X,, represents the features of the ith example, and. yt is the dependent value (or label) of that example. A linear model seeks to find a.\nHere, a is the learning rate that determines the magnitude of the update along the gradient. As it is clear from this equation, w; is dependent on w;-1 which creates a loop-carried dependence and consequently makes parallelization of SGD across iterations using naive approaches impossible.."}, {"section_index": "4", "section_name": "3.1 SYMBOLIC STOCHASTIC GRADIENT DESCENT", "section_text": "This section explains a new approach to parallelize the SGD algorithm despite its loop-carried de. pendence. As shown in Figure[1] the basic idea is to start each processor (except the first) on a. concrete model w along with a symbolic unknown w that captures the fact that the starting model. can change based on the output of the previous processor. If the dependence on w is linear during. an SGD update, which is indeed the case for linear regression, then the symbolic dependence on w on the final output can be captured by an appropriate matrix Ma->b that is a function of the input. examples Xa, ..., X processed (ya, ..., yb do not affect this matrix). Specifically, as Lemma|A.1. in the Appendix shows, this combiner matrix is given by.\nIn effect, the combiner matrix above is the symbolic representation of how a w change in the inpu will affect the output of a processor. Mg->h is referred by M when the inputs are not evident.\nOne can compute a sound model combiner for other SGD algorithms provided the loop-carried dependence on w is linear. In other words, there should exist a matrix A, and vector b; in iteration i such that w; = A, . w-1 + b. Note that A, and b, can be nonlinear in terms of input datasets."}, {"section_index": "5", "section_name": "3.2 DIMENSIONALITY REDUCTION OF A SOUND COMBINER", "section_text": "The combiner matrix M generate above can be quite large and expensive to compute. The sequential. SGD algorithm maintains and updates the weight vector w, and thus requires O(f) space and time,. where f is the number of features. In contrast, M is a f f matrix and consequently, the space and time complexity of parallel SGD is O(f2). In practice, this would mean that we would need O(f). processors to see constant speedups, an infeasible proposition particularly for datasets that can have thousands if not millions of features.\nLemma 3.1. 
[Let A be a random f k matrix with\nE[A.AT]=Ifxf\nThe complexity of SGD for each iteration is as follows. Assume that X, has z non-zeros. Therefore, the computation in Equation[1 requires O(z) amount of time for the inner product computation X, - w;1, and the same O(z) amount of time for scalar-vector multiplication, (X, . w,1- yr) XT If the updates to the weight vector happen in-place meaning that w; and wi-1 share the same memory. location, the computation in Equation[1[takes O(z) amount of time..\na Ma-b = H(1-aXTXi) i=b\nThe parallel SGD algorithm works as follows (see Figure[1). In the learning phase, each processor i starting from wo, computes both a local model l, and a combiner matrix M. In a subsequent reduction phase, each processor in turn computes its true output using\nW=l+MW-1-Wo\nSymSGD resolves this issue by projecting M into a smaller space while maintaining its fidelity This projection is inspired by the Johnson-Lindenstrauss (JL) lemma Johnson & Lindenstrauss (1984) and follows the treatment of Achlioptas Achlioptas(2001)\naij =dij/Vk\nThe matrix A from Lemma|3.1projects from Rf -> Rk where k can be much smaller than f. This allows us to approximate Equation|3 as\nW~l+MAA(Wi-1-Wo\nE[l+MAA(wi-1-wo)]=l+ME[AA](wi-1-wo)=Wi\nThis allows an efficient algorithm that only computes the projected version of the combiner matrix while still producing the same answer as the sequential algorithm in expectation. We call such combiners probabilistically sound.\nAlgorithm 1: SymSGD learning a local model and a model combiner..\n1 <vector,matrix,matrix> SymsgD( 1 vector SymsGDCombine(vector wo,. 2 float Q, vector: wo, Xi..Xn,. 2 vector w, vector l,. 3 scalar: y1..yn) { 3 matrix Ma, matrix A) {. 4 vector w = wo; 4 parallel { 5 5 matrix A = random(D,f,k) ; matrix NA = MA - A;. Vk 6 w = l+w-wo+NaA'(w-wo); 6 matrix MA = A; 7 7 for i in (l..n) { } 8 return w; } 8 w = w - Q(Xw - yi)Xi; 9 MA = MA - Q Xi(XMA); } 10 return <w,MA,A>; }\nAlgorithm 1 shows the resulting symbolic SGD learner. The random function in line 5 return a f k matrix with elements chosen independently from the random distribution D according t Lemma [3.1 When compared to the sequential SGD, the additional work is the computation o M in Line [9] It is important to note that this algorithm maintains the invariant that M = M A at every step. This projection incurs a space and time overhead of O(z k) where z is th number of non-zeros in X. This overhead is acceptable for small k and infact in our experiment in Section [4] k is between 7 to 15 across all benchmarks. Most of the overhead for such a smal k is hidden by utilizing SIMD hardware within a processor (SymSGD with one thread is only hal as slow as the sequential SGD as discussed in Section 4.1). After learning a local model and probabilistically sound combiner in each processor, Algorithm|2|combines the resulting local mode using the combiners, but additionally employs the optimizations discussed in Section |3.3"}, {"section_index": "6", "section_name": "3.3 CONTROLLING THE VARIANCE", "section_text": "While the dimensionality reduction discussed above is expected to produce the right answer, thi is useful only if the variance of the approximation is acceptably small. Computing the variance i involved and is discussed in the associated technical report SymSGDTR But we discuss the mai result that motivates the rest of the paper.\nConsider the approximation of M : w with v = M A. AT . w. Let C(v) be the covariance matrix. of v. 
The trace of the covariance matrix tr(C(v)) is the sum of the variance of individual elements of v. Let A,(M) by the ith eigenvalue of M and o,(M) = /X,(MTM) the ith singular value of. M. Let max(M) be the maximum singular value of M. Then the following holds SymSGDTR\nl|w| lw o?(M)<tr(C(w)) < (o?(M)+0max(M) k k i\nThe covariance is small if k, the dimension of the projected space, is large. But increasing k pro portionally increases the overhead of the parallel algorithm. Similarly, covariance is small if the. projection happens on small w. Looking at Equation4] this means that w;-1 should be as close to wo as possible, implying that processors should communicate frequently enough such that their.\nAlgorithm 2: SymSGD combining local models using model combiners\nNote that the correctness and performance of SymSGD do not depend on the sparsity of a dataset and as Section 4|demonstrates, it works for very sparse and completely dense datasets. Also, note that X1, . .. , Xn may contain a subset of size f' of all f features. Our implementation of Algorithm|1 takes advantage of this property and allocates and initializes A for only the observed features. This optimization is omitted from the pseudo code in Algorithm1|for the sake of simplicity..\nmodels are roughly in sync. Finally, the singular values of M should be as small as possible. Th next section describes a crucial optimization that achieves this..\nTaking Identity Off: Expanding Equation[2] we see that the combiner matrices are of the form\nI-aR+aR-aR3+..\nAn important factor in controlling the singular values of N is the frequency of model combinations which is a tunable parameter in SymSGD. As it is shown in Appendix[A.3] the fewer the number of. examples learned, the smaller the singular values of N and the less variance (error) in Equation 5.\nImplementation For the implementation of SymsGD function, matrix M and weight vector w are stored next to each other. This enables better utilization of vector units in the processor and improves the performance of our approach significantly. Also, most of datasets are sparse and therefore, SGD and SymsGD only copy the observed features from wo to their learning model w. Moreover, for. the implementation of matrix A, we used Achlioptas(2001) theorem to minimize the overhead of creating A. In this approach, each element of A is independently chosen from {3,-3, 0} with. probability {, 1, 3}, respectively."}, {"section_index": "7", "section_name": "4 EVALUATION", "section_text": "All experiments described in this section were performed on an Intel Xeon E5-2630 v3 machine clocked at 2.4 GHz with 256 GB of RAM. The machine has two sockets with 8 cores each, allowing us to study the scalability of the algorithms across sockets. We disabled hyper-threading and turbo boost. We also explicitly pinned threads to cores in a compact way which means that thread i + 1 was placed as close as possible to thread i. The machine runs Windows 10. All of our implementa- tions were compiled with Intel C/C++ compiler 16.0 and relied heavily on OpenMP primitives for parallelization and MKL for efficient linear algebra computations. And, finally, to measure runtime, we use the average of five independent runs on an otherwise idle machine.\nDatasets Table [1 describes the datasets used for evaluation. The number of features, training in stances, test instances, classes and the sparsity of each dataset is shown in Table[1 We used Vowpal\nwhere R, matrices are formed from the sum of products of X, : XT matrices. 
Since a is a small. number, this sum is dominated by I. In fact, for a combiner matrix M generated from n examples. M - I has at most n non-zero singular values SymSGDTR We use these observation to lower the. variance of dimensionality reduction by projecting matrix N = M - I instead of M. Appendix[A.3. empirically shows the impact of this optimization. Rewriting Equations[3|and4] we have.\nW=li+(N+I)wi-1-Wo =li+Wi-1-W0+N.Wi-1-W0 li+Wi-1-W0+NAAT(Wi-1-W0)\nLemma 3.1 guarantees that the approximation above is unbiased.. Algorithm2showsthe pseudo code for the resulting probabilistically sound combination of local models. The function. SymSGDCombine is called iteratively to combine the model of the first processor with the local models of the rest. Note that each model combination is executed in parallel (Line4) by parallelizing. the underlying linear algebra operations..\nThere are several algorithms and implementations that we used for our comparison: Vowpal Wab- bit[Langford et al.(2007), a widely used public library, Baseline, a fast sequential implementation, HW-Paper, the implementation from Recht et al.(2011), Hw-Release, an updated version, Hog- Wild, which runs Baseline in multiple threads without any synchronization, and ALLReDucE, the implementation from|Agarwal et al.(2014). Each of these algorithms have different parameters and settings and we slightly modified to ensure a fair comparison; see Appendix|A.4[for more details\nWhen studying the scalability of a parallel algorithm, it is important to compare the algorithms. against an efficient baseline|Bailey(1991);McSherry et al.(2015). Otherwise, it is empirically not. possible to differentiate between the scalability achieved from the parallelization of the inefficiencies. and the scalability inherent in the algorithm. We spent a significant effort to implement a well-tuned sequential algorithm which we call Baseline in our comparisons. Baseline is between 1.97 to 7.62. (3.64 on average) times faster than Vowpal Wabbit and it is used for all speedup graphs in this paper..\nTable 1: Datasets used for evaluation with their settings, maximum accuracies using logistic and linear regression and maximum speedup using SymSGD and HoGwiLD!. Red speedup numbers represent the only cases where HoGwiLD! performs faster than SyMSGD..\nWabbit with the configurations discussed in Appendix|A.4|to measure the maximum accuracies that can be achieved using linear and logistic regression and the result is presented in columns 8 and 9 of Table1 In the case of aloi dataset, even after 500 passes (the default for our evaluation was 100 passes) the accuracies did not saturate to the maximum possible and we reported that both linear and logistic achieved at least 80% accuracy. The last two columns show the maximum speedup of SymSGD and HoGwILD! over the baseline.\n10 sneeanp 6 4 2 0 0 0 2 4 6 8 10 12 14 16 0 2 4 6 8 10 12 14 16 0 2 4 6 8 10 12 14 16 # Threads # Threads # Threads (a) rcv1.binary speedup (b) rcv1.multiclass speedup (c) epsilon speedup 100 90 92 95 Aeeenncy 90 80 88 70 86 84 75 60 82 70 50 80 0 200000400000600000800000 0 200000 400000 600000 800000 1000000 0 5000000 10000000 15000000 # Examples # Examples # Examples (d) rcv1.binary accuracy (e) rcv1.multiclass accuracy (f) epsilon accuracy ....Baseline --SymSGD - AllReduce -+HogWild HW-Paper --HW-Release\nFigure 2: Speedup and accuracy comparison\nParameters Hyper-parameters searching is essential for performance and accuracy. 
The learning rate, Q, for each dataset was selected by searching for a constant value among {.5, .05, .005,...? where Baseline reached close to maximum accuracy for each benchmark. The parameters for the projection size, k, and the frequency of model combination were searched to pick the best perform. ing configuration. The parameters for ALLREDUCE were similarly searched."}, {"section_index": "8", "section_name": "4.1 RESULTS", "section_text": "Figure2shows the accuracy and speedup measurements on three benchmarks: rcv1.binary, a sparse binary dataset, rcv1.multiclass, a sparse multiclass dataset, and epsilon, a dense binary dataset. The results for the other six benchmarks are presented in Appendix|A.5.\nSparse Binary, rcv1.binary: Figure 2a| compares the scalability of all the algorithms studied in this paper. Hw-Paper is around six times slower than Hw-Release. While this could potentially be a result of us running Hw-Release on a Ubuntu VM, our primary aim of this comparison was to ensure that HogWild is a competitive implementation of HoGwiLD!. Thus, we remove HW-Paper and HW-Release in our subsequent comparisons..\nSymSGD is half as slow as the Baseline on one thread as it performs lot more computation, but. scales to a 3.5 speedup to 16 cores. Note, this represents a roughly 7 strong-scaling speedup with respect to its own performance on one thread. Analysis of the hardware performance counters. shows the current limit to SymSGD's scalability arises from load-imbalance across barrier synchro. nization, which provides an opportunity for future improvement..\nFigure |2d|shows the accuracy as a function of the number of examples processed by different algo-. rithms. SymSGD \"stutters\"' at the beginning, but it too matches the accuracy of Baseline. The initial. stuttering happens because the magnitude of the local models on each processor are large during the first set of examples. This directly affect the variance of the combiner matrix approximation. How-. ever, as more examples are given to SymSGD, the magnitude of the local models are smaller and thus SymSGD better matches the Baseline accuracy. One way to avoid this stuttering is to combine models more frequently (lower variance) or running single threaded for the first few iterations..\nHogWild does approach sequential accuracy, however, it does so at the cost of scalablity (i.e., se Figure2a(a)). Likewise, ALLReDucE scales slightly better but does so at the cost of accuracy.\nSparse Multiclass, rcv1.multiclass: Figure|2b shows the scalability on rcv1.multiclass. Since this is a multiclass dataset, SymSGD is competitive with the baseline on one thread as it is able to amortize the combiner matrix computation across all of the classes (M is the same across different classes). Thus, it enjoys much better scalability of 7 when compared to rcv1.binary. HogWilc scales similar to SyMSGD up-to 8 threads but suffers when 16 threads across multiple sockets are used. Figure 2e|shows that SymSGD meets the sequential accuracy after an initial stutter. ALLREDUCE suffers from accuracy.\nDense Binary, epsilon: Figure2c|in Appendix A.5 shows that SymSGD achieves a 7 speedup over the baseline on 16 cores. This represents a 14 strong scaling speedup over SymSGD or one thread. As HogwiLD! is not designed for dense workloads, its speedup suffers when 16 core. across multiple sockets are used. 
This shows that SyMSGD scales to both sparse and dense datasets Similarly, ALLREDucE suffers from accuracy."}, {"section_index": "9", "section_name": "5 RELATED WORK", "section_text": "Most schemes for parallelizing SGD learn local models independently and communicate to update the global model. The algorithms differ in how and how often the update is performed. These choices determine the applicability of the algorithm to shared-memory or distributed systems.\nTo the best of our knowledge, our approach is the only one that retain the semantics of the sequen. tial SGD algorithm. While some prior work provides theoretical analysis of the convergence rates that justify a specific parallelization, convergence properties of SymSGD simply follow from the sequential SGD algorithm. On the other hand, SymSGD is currently restricted to class of SGD computations where the inter-step dependence is linear in the model parameters..\nGiven a tight coupling of the processing units, Langford et al.Langford et al.(2009) suggest or a round-robin scheme to update the global model allowing for some staleness. However, as the SGD computation per example is usually much smaller when compared to the locking overhead, HogwiLD! Recht et al.(2011) improves on this approach to perform the update in a \"racy\" manner While HogwiLd! is theoretically proven to achieve good convergence rates provided the dataset is sparse enough and the processors update the global model fast enough, our experiments show that the generated cache-coherence traffic limits its scalability particularly across multiple sockets. Moreover, as HogwiLd! does not update the model atomically, it potentially loses correlation among more frequent features resulting in loss of accuracy. Lastly, unlike SymSGD, which works for both sparse and dense datasets, Hogwild! is expclitly designed for sparse data. Recently, Sallinen et al.(2016) proposed applying lock-free HogwiLD! approach to mini-batch. However, mini-batch converges slower than SGD and also they did not study multi-socket scaling.\nZinkevich et al.Zinkevich et al.(2010) propose a MapReduce-friendly framework for SGD. The basic idea is for each machine/thread to run a sequential SGD on its local data. At the end, the global model is obtained by averaging these local models. Alekh et al.[Agarwal et al.[(2014) extend this approach by using MPI_AllReduce operation. Additionally, they use the adagrad Duchi et al.[(2011) approach for the learning rates at each node and use weighted averaging to combine local models\nwith processors that processed a feature more frequently having a larger weight. Our experiments on our datasets and implementation shows that it does not achieve the sequential accuracy..\nSeveral distributed frameworks for machine learning are based on parameter server Li et al [2014b a) where clients perform local learning and periodically send the changes to a central pa rameter server that applies the changes. For additional parallelism, the models themselves can be split across multiple servers and clients only contact a subset of the servers to perform their updates\nWith terabytes of memory available on multicore machines today, our current implementation has the capability of learning from large datasets without incurring the communication overheads of a distributed system. 
That said, we believe the ideas in this paper apply to distributed SGD algorithms and how to pursue in future work.\nMany machine learning SGD algorithms require a nonlinear dependence on the parameter mod- els. While SymSGD does not directly apply to such algorithms, it is an interesting open problem to devise linear approximations (say using Taylor expansion) to these problems and subsequently parallelize with probabilistically sound combiners. This is an interesting study for future work.\nJohn Langford, Lihong Li, and Alex Strehl. Vowpal Wabbit. 2007\nJohn Langford, Alexander Smola, and Martin Zinkevich. Slow learners are fast. arXiv preprin arXiv:0911.0491, 2009.\nMu Li, David G Andersen, Alex J Smola, and Kai Yu. Communication efficient dis In Z. Ghahramani, M. Welling,. tributed machine learning with the parameter server..\nBenjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach tc. parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems pp. 693-701, 2011. S. Sallinen, N. Satish, M. Smelyanskiy, S. S. Sury, and C. R. High performance parallel stochasti gradient descent in shared memory. In 2016 IEEE International Parallel and Distributed Pro cessing Symposium (IPDPS), pp. 873-882, May 2016. doi: 10.1109/IPDPS.2016.107.\nMartin Zinkevich. Markus Weimer, Lihong Li, and Alex J Smola. Parallelized stochastic gradien descent. In Advances in neural information processing systems, pp. 2595-2603, 2010."}, {"section_index": "10", "section_name": "A.1 COMBINER MATRIX", "section_text": "Lemma A.1.Ifthe SGD algorithmforlinear regression processes examples (Xa,Ya), (Xa+1, Ya+1),...,(Xb,yb) starting from model ws to obtain wb, then its outcome. starting on model ws + w is given by w + Ma->b : w where the combiner matrix Ma--b is given by\nWa=Ws+ A =ws+w-a(Xaws-ya)XT-a(Xaw)X! =Ws-a(XaWs-ya)XT+w-a(Xaw)XI =Wa+w-a(Xaw)XI =wa+w-aXT(Xaw) =wa+w-a(XTXa)w =wa+(I-aXT.Xa)w\nStep6|uses Equation[1] Step7]uses the fact that Xg : w is a scalar (allowing it to be rearranged] and Step[8|follows from associativity property of matrix multiplication..\nThe induction is very similar and follows from replacing w with Ma->i-1w and the propert that\nMa-=(I-aXXi)Mai-1\nMa-b = H(1-aXfx;) i=b\nProof. The proof follows from a simple induction. Starting from ws, let the models computed by SGD after processing (Xa, Ya), (Xa+1, Ya+1),..., (Xb, yb) respectively be wa, Wa+1,... Wb.. Consider the base case of processing of (Xa, ya). Starting from ws + w, SGD computes the. model w' using Equation[1|(reminder: w; = w;-1 Q(X; : w;-1 yi)X)\nsingular values of the combiner singular values of the combiner matrix with alpha = 0.01 matrix with alpha = 0.001 1.01 1.01 1 1 0.99 0.99 0.98 0.98 0.97 0.97 0 64 128 192 256 0 64 128 192 256 iter64 .....iter128 - -iter192 iter256 iter64...iter128 - -iter192 iter256 (a) Q = 0.01 (b) a = 0.001\nFigure 3: Distribution of singular values of M for rcv1 binary dataset for a = 0.01 and a = 0.001\nFigure |3 empirically demonstrates the benefit of taking identity off. This figure plots the singular values of M for the rcv1.binary dataset (described in Section4) after processing 64, 128, 192, 256 examples for two different learning rates. As it can be seen, the singular values are close to 1.. However, the singular values of N = M - I are roughly the same as those of M minus 1 and. consequently, are small. Finally, the smaller a, the closer the singular values of M are to 1 and the singular values of N are to 0. 
Also, note that the singular values of M decrease as the numbers of examples increase and therefore, the singular values of N increase. As a result, the more frequent the models are combined, the less variance (and error) is introduced into Equation[5"}, {"section_index": "11", "section_name": "A.4 ALGORITHM DETAILS AND SETTINGS", "section_text": "This section provides details of all algorithms we used in this paper. Each algorithm required slig modification to ensure fair comparison.\nVowpal Wabbit: Vowpal Wabbit Langford et al.[(2007) is one of the widely used public libraries for machine learning algorithms. We used this application as a baseline for accuracy of different datasets and as a comparison of logistic and linear regression and also an independent validation of the learners without any of our implementation bias. Vowpal Wabbit applies accuracy-improving optimizations such as adaptive and individual learning steps or per feature normalize updates. While all of these optimizations are applicable to S ymSGD, we avoided them since the focus of this paper is the running time performance of our learner. The non-default flags that we used are: --sgd -power_t 0, --holdout_off, --oaa nc for multiclass datasets where nc is the numbei of classes, --loss_function func where func is squared or logistic. For learning rate, we searched for a, the learning rate, in the set of {.1, .5, .01, .05, .001, .005,... } and usec --learning_rate a. We went through dataset 100 times for each dataset (--passes 100) and saved the learned model after each pass (--s ave per_pa s s). At the end, for linear and logis tic regressions, we reported the maximum accuracies achieved among different passes and different learning rates.\nBaseline: Baseline uses a mixture of MKLIntel and manually vectorized implementations of linea. algebra primitives in order to deliver the fastest performance. Baseline processes up-to 3.20 billiol features per second at 6.4 GFLOPS.\nHoGwiLD!: HoGwiLD! Recht et al.(2011) is a lock-free approach to parallelize SGD where multiple thread apply Equation 1|simultaneously. Although this approach may have race condition\n. Hw-Paper: This is the implementation used to report the measurements in Recht et al.(2011. which is publicly available [Hogwild This code implements SVM algorithm. Therefore, we modi. fied the update rule to linear regression. The modified code was compiled and run on our Windows. machine described above using an Ubuntu VM since the code is configured for Linux systems.\n14 10 14 12 12 snpeanp 10 8 8 10 6 8 6 4 6 4 4 2 2 2 0 0 0 0 2 4 6 8 10 12 14 16 0 2 4 6 8 10 12 14 16 0 2 4 6 8 10 12 14 16 # Threads # Threads # Threads (a) aloi (b) sector (c) mnist 14 6 5 12 5 snpeanp 1 8 4 4 2 6 3 2 4 2 2 1 1 0 0 0 0 2 4 6 8 10 12 14 16 0 2 4 6 10 12 14 16 0 2 4 6 8 10 12 14 16 # Threads # Threads # Threads (d) mnist8m (e) news20 (f) url ...Baseline -SymSGD AllReduce +HogWild\nFigure 4: Speedups on remaining benchmarks\nwhen two threads process instances with a shared feature but the authors discuss that this does not hurt the accuracy significantly for sparse datasets. There are multiple implementations of this. approach that we studied and evaluated in this section. Below is a description of each:.\nALLReDuCE: ALLREDUcE[Agarwal et al.(2014) is an approach where each thread makes a copy from the global model and applies the SGD update rule to the local model for certain number of instances. 
Along with the local model, another vector g is computed which indicates the confidence in an update for the weight of a feature in the local model. After the learning phase, the local weight vectors are averaged based on the confidence vectors from each thread. We implemented this approach similarly using MKL calls and manual vectorization.."}]
rkuDV6iex
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks are trained by optimizing an extremely high-dimensional loss function witl. respect to the weights of the network's linear layers. The objective function minimized is some. measure of the error of the network's predictions based on these weights compared to training data This loss function is non-convex and has many local minima. These loss functions are usually minimized using first-order gradient descent (Robbins & Monro] 1951] Polyak[1964) algorithms such as stochastic gradient descent (SGD) (Bottou1991). The success of deep learning critically. depends on how well we can minimize this loss function, both in terms of the quality of the loca minima found and the time to find them. Understanding the geometry of this loss function and how. well optimization algorithms can find good local minima is thus of vital importance..\nSeveral works have theoretically analyzed and characterized the geometry of deep network loss functions. However, to make these analyses tractible, they have relied on simplifications of the network structures, including that the networks are linear (Saxe et al.|2014), or assuming the path and variable independence of the neural networks (Choromanska et al.2015). Orthogonally, the performance of various gradient descent algorithms has been theoretically characterized (Nesterov 1983). Again, these analyses make simplifying assumptions, in particular that the loss function is\nIn this work, we empirically investigated the geometry of the real loss functions for state-of-the-art. networks and data sets. In addition, we investigated how popular optimization algorithms interact with these real loss surfaces. To do this, we plotted low-dimensional projections of the loss function. in subspaces chosen to investigate properties of the local minima selected by different algorithms.. We chose these subspaces to address the following questions:."}, {"section_index": "1", "section_name": "2.1 LOSS SURFACES", "section_text": "Work done during an internship at Janelia Research Campus"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "What types of changes to the optimization procedure result in different local minima?. . Do different optimization algorithms find qualitatively different types of local minima?\nThere have been several attempts to understand the loss surfaces of deep neural networks. Some have studied the critical points of the deep linear neural networks (Baldi]1989} Baldi & Hornik\nOne approach is to analogize the states of neurons as the magnetics dipoles used in spherical spin glass Ising models from statistical physics (Parisi]|2016) Fyodorov & Williams]2007]Bray & Dean 2007).Choromanska et al.(2015) attempted to understand the loss function of neural networks through studying the random Gaussian error functions of Ising models. Recent results (Kawaguchi 2016, Soudry & Carmon2016) have provided cursory evidence in agreement with the theory pro vided by Choromanska et al.(2015) in that they found that that there are no \"poor' local minima in neural networks still with strong assumptions.\nThere is some potential disconnect between these theoretical results and what is found in practice due to several strong assumptions such as the activation of the hidden units and output being inde. pendent of the previous hidden units and input data. 
The work of Dauphin et al.(2014) empirically investigated properties of the critical points of neural network loss functions and demonstrated that their critical points behave similarly to the critical points of random Gaussian error functions in higl dimensional space. We will expose further evidence along this trajectory."}, {"section_index": "3", "section_name": "2.2 OPTIMIZATION", "section_text": "In practice, the local minima of deep network loss functions are for the most part decent. This implies that we probably do not need to take many precautions to avoid bad local minima in practice. If all local minima are decent, then the task of finding a decent local minimum quickly is reduced to the task of finding any local minimum quickly. From an optimization perspective this implies that solely focusing on designing fast methods are of key importance for training deep networks.\nIn the literature the common method for measuring performance of optimization methods is to analyze them on nice convex quadratic functions (Polyak]1964] Broyden1970] Nesterov1983 Martens2010] Erdogdu & Montanari]2015) even though the problems are applied to non-convex problems. For non-convex problems, however, if two methods converge to different local minima their performance will be dictated on how those methods solve those two convex subproblems. It is challenging to show that one method will beat another without knowledge of the sort of convex subproblems, which is generally not known apriori. What we will explore is whether indeed are some characteristics that can found experimentally. If so, perhaps one could validate where these analytical results are valid or even improve methods for training neural networks.\nFigure 1: An example of learning curve of neural network\n4.0 training Slowing 3.5 Fast decaying decaying validation 3.0 2.5 sso| 2.0 1.5 1.0 0.5 0.0 50 100 150 200 epochs\nOne of the interesting empirical observation is that we often observe is that the incremental improve- ment of optimization methods decreases rapidly even in non-convex problems. This behavior has been discussed as a \"transient\"' phase followed by a \"minimization' phase (Sutskever et al.[2013)\nwhere the former finds the neighborhood of a decent local minima and the latter finds the local minima within that neighborhood. The existence of these phases implies that if certain methods are. better at different phases one could create novel methods that schedule when to apply each method\nWe conducted experiments on three state-of-the-art neural network architectures. Network-in- Network (NIN) (Lin et al.]2014) and the VGG(Simonyan & Zisserman] 2015) network are feed forward convolutional networks developed for image classification, and have excellent performance on the Imagenet (Russakovsky et al.] 2014) and CIFAR10 (Krizhevsky2009) data sets. The long short-term memory network (LSTM) (Hochreiter & Schmidhuber 1997) is a recurrent neural net work that has been successful in tasks that take variable-length sequences as input and/or produce variable-length sequences as output, such as speech recognition and image caption generation. 
These are large networks currently used in many machine vision and learning tasks, and the loss functions minimized by each are highly non-convex.\nAll results using the feed-forward convolutional networks (NIN and VGG) are on the CIFAR10 image classification data set, while the LSTM was tested on the Penn Treebank next-word prediction data set."}, {"section_index": "4", "section_name": "3.2 OPTIMIZATION METHODS", "section_text": "We analyzed the performance of five popular gradient-descent optimization methods for these learn- ing frameworks: Stochastic gradient descent (SGD) (Robbins & Monro|1951), stochastic gradient descent with momentum (SGDM), RMSprop (Tieleman & Hinton2012), Adadelta (Zeiler et al. 2011), and ADAM (Kingma & Ba]2014). These are all first-order gradient descent algorithms that estimate the gradients based on randomly-grouped minibatches of training examples. One of the major differences between these algorithms is how they select the weight-update step-size at each iteration, with SGD and SGDM using fixed schedules, and RMSprop, Adadelta, and ADAM using adaptive, per-parameter step-sizes. Details are provided in Section[A.2\nIn addition to these five existing optimization methods, we compare to a new gradient descent method we developed based on the family of Runge Kutta integrators. In our experiments, we tested a second-order Runge-Kutta integrator in combination with SGD (RK2) and in combination with ADAM (ADAM&RK2). Details are provided in Section[A.3]"}, {"section_index": "5", "section_name": "3.3 ANALYSIS METHODS", "section_text": "Several of our empirical analyses are based on the technique of Goodfellow et al. (Goodfellow et al.. 2015). They visualize the loss function by projecting it down to one carefully chosen dimension. They plot the value of the loss function along a set of samples along this dimension. The projec. tion space is chosen based on important weight configurations, thus they plot the value of the loss function at linear interpolations between two weight configurations. They perform two such analy. ses: one in which they interpolate between the initialization weights and the final learned weights. and one in which they interpolate between two sets of final weights, each learned from different. initializations.\nWe refer to the critical points found using these variants of SGD, for which the gradient is approxi mately O, as local minima. Our evidence that these are local minima as opposed to saddle points is\nIn this work, we use a similar visualization technique, but choose different low-dimensional sub spaces for the projection of the loss function. These subspaces are based on the initial weights as well as the final weights learned using the different optimization algorithms and combinations of them, and are chosen to answer a variety of questions about the loss function and how the different optimization algorithms interact with this loss function. In contrast, Goodfellow et al. only looked at SGDM. In addition, we explore the use of two-dimensional projections of the loss function, al- lowing us to better visualize the space between local minima. We do this via barycentric and bilinar interpolation for triplets and quartets of points respectively (details in Section|A.1).\nInitial Config. Initial Config. Initial Config. Initial Config. ADAM ADAM&RK2 ADAM ADAM&RK2 SGD SGD SGD SGD SGD RK2 RK2 SGD (a) NIN (b) VGG (a) NIN (b) VGG\nInitial Config. Initial Config. Initial Config. Initial Config. 
ADAM ADAM&RK2 ADAM ADAM&RK2 SGD SGD SGD SGD SGD RK2 RK2 SGD\nFigure 2: : Visualization of the loss surface at weights interpolated between two initial configu.. rations and the final weight vectors learned using SGD from these initializations.\nsimilar to that presented in Goodfellow et al. (Goodfellow et al.]2015). If we interpolate beyond th critical point, in this one-dimensional projection, the loss increases (Fig.10)\nThe batch size was set to 128 and the number of epochs was set to 200. The learning rate was chose. from the discrete range between [0.2, 0.1, 0.05, 0.01] for SGD and [0.002, 0.001, 0.0005, 0.0001] fo adaptive learning methods. We doubled the learning rates when we ran our augmented versions witl. Runge-Kutta because they required two stochastic gradient computations per epoch. We used batch normalization and dropout to regularize our networks. All experiments were run on a 6-core Intel(R Xeon(R) CPU @ 2.40GHz with a TITAN X.\nWe trained the neural networks described above using each optimization method starting from th same initial weights and with the same minibatching. We computed the value of the loss functior. for weight vectors interpolated between the initial weights, the final weights for one algorithm, anc. the final weights for a second algorithm for several pairings of algorithms. The results are shown i1. the lower triangle of Table1\nFor every pair of optimization algorithms, we observe that the training loss between the final weights. for different algorithms shows a sharp increase along the interpolated path. This suggests that each optimization algorithm found a different critical point, despite starting at the same initialization. We. investigated the space between other triplets and quadruples of weight vectors (Figure2|and3), and even in these projections of the loss function, we still see that the local minima returned by different. algorithms are separated by high loss weight parameters..\nDeep networks are overparameterized. For example, if we switch all corresponding weights for a. pair of nodes in our network, we will obtain effectively the same network, with both the origina and permuted networks outputting the same prediction for a given input. To ensure that the weigh. vectors returned by the different algorithms were functionally different, we compared the outpts o. the networks on each example in a validation data set:.\nwhere 01 and 02 are the weights learned by two different optimization algorithms, x; is the input fo. a validation example, and F(x, 0) is the output of the network for weights 0 on input x\nFigure 3: Visualization of the loss surface at weights interpolated between the weights learned by four different algorithms from the same ini tialization.\nNtest 1 dist(01,02) = ||F(xi,01)-F(xi,02)|2 est i=1\nTable 1: Visualization of the loss surface near and between local minima found by different opti mization methods. Each box corresponds to a pair of optimization methods. In the lower triangle. we plot the projection of the loss surface at weight vectors between the initial weight and the learned weights found by the two optimization methods. Color as well as height of the surface indicate the loss function value. In the upper triangle, we plot the functional difference between the network. corresponding to the learned weights for the first algorithm and networks corresponding to weights linearly interpolated between the first and second algorithm's learned weights. 
(Best viewed in z0om)\nSGD RMSprop Adadelta Adam RK2 Adam&RK2 dist(RMSB - RM$Sprop) t(SGD, SGD + (1 o)RMSprop) =ddsD+1-ada =(SRD, oRK3 + (1- aSED} bitnneee Pianeee bianneeee SGD .5 .5 00 RMSprop Q SGD SGD .5 Adam a SGD RK2 21 t(RMS (1 a)RMSprop) , Adam + (1 aRMSprop) bitnneee 1 RMSprop .5 RMSprop Adam Init Adadelta RMSprop Adadelta Init / dist AdaaRK + 1 Adam) Init Init dssaeee dit AdamRR2 + (1 )A dam) 1 Adam Adam Adam Adam Adadelta 00 Adam&RK2 SGD RMSprop Adam Q < dist(RK2, Adam&RK2 + (1 )RK2) Init bianneeee (AAdam&RK2 + (1 a)RK2) 1 RK2 0 RK2 SGD RK2 Adam&RK2 a Init, Init Init Adam&RK2 Adam&RK2 Adam Adam&RK2 Adadelta Adam&RK2 RK2\n99.0 90.2 . 90.0 98.5 : .0 0 89.8 98.0 89.6 : 0 0 . 0 0 : 0 89.2 . : . . . 89.0 . 96.5 O O . 88.8 0 0 96.0 88.6 : 95.5 88.4 SGD RK2 ADAM ADAM&RK2Rmsprop Adadelta SGD RK2 ADAM ADAM&RK2 Rmsprop Adadelta (a) Train (b) Test\n. : 90.0 98.5 0 . 0 89.8 98.0 89.6 97.5 0 AArreree . . 0 0 0 0 0 89.2 . 97.0 . . . . 89.0 . 0 96.5 0 . O 88.8 0 96.0 0 88.6 . : 95.5 88.4\nFigure 4: (a) Training accuracy and (b) test accuracy for each of the optimization methods. Colo correspond to different initializations.\nWe found that, for all pairs of algorithms, the average distance between the outputs of the network (Equation4.1) was approximately 0.16, corresponding to a label disagreement of about 8% (uppe triangle of Table[1). Given the generalization error of these networks (approximately 11%, Figure[4] the maximum disagreement we could see was 22%. Thus, these networks disagreed on a large. fraction of these test examples - over rd. Thus, the local minima found by different algorithms correspond to effectively different networks, not trivial reparameterizations of the same one..\n2.0 1.5 0.5 0.9.0 0.2 .4 0.6 0.8 0.2 .6 0.8 0.6 0.0 0.2 0.4 0.6 alpha 0.8 alpha alpha alpha init-sgd init-adam&RK2 sgd-RK2 RK2-adam&RK2 init-sgd sgd-RK2 RK2-adam&RK2 sgd-adam init-adam init-RK2 adam-adam&RK2 init-adam init-RK2 sgd-adam adam-adam&RK2 sgd-adam&RK2 RK2-adam sgd-adam&RK2 RK2-adam (a) NIN - Initial to Final. (b) NIN - Final to Final. (c) VGG - Initial to Final. (d) VGG - Final to Final\nFigure 5: Loss function value near local minima found by multiple restarts of each algorithm\nNext, we investigated whether the local minima found by the different optimization algorithms hac distinguishing properties. To do this, we trained the networks with each optimization algorithr using different initial parameters. We then compared differences between runs of the same algorithn but different initializations to differences between different algorithms.\nAs shown in Figure4(a), in terms of training accuracy, we do see some stereotypy for the optima found by different algorithms, with SGD finding local minima with the lowest training accuracy anc ADAM, Rmsprop, and Adadelta finding local minima with the highest training accuracy. However this could be attributed to SGD's asymtotically slow convergence near local minima due to the gradient diminishing near extrema. Despite this limitation, Figure|4(b) shows that the generalizatior accuracy of these different local minima on validation data was not significantly different betweer algorithms. We also did not see a relationship between the weight initialization and the validatior accuracy. Thus, while these algorithms fall into different local minima, they are not different ir terms of their final quality.\nWe visualized the loss surface around each of the local minima for the multiple runs. 
To do this we plotted the value of the loss function between the initial and final weights for each algorithm. (Figure [5(a,c)) for each run of the algorithm from a different initialization. In addition, we plotted.\nFigure 6: Observing the absolute size of basin for different local minimas found by different opti mization methods.\nthe value of the loss function between the final weights for selected pairs of algorithms for each run (Figure 5(b,d)). We see that the surfaces look strikingly similar for different runs of the same algorithm, but characteristically different for different algorithms. Thus, we found evidence that the different algorithms land in qualitatively different types of local minima.\nIn particular, we see in Figure |5(a,c) that the size of the basins around the local minima found by. ADAM and ADAM&RK2 are larger than those found by SGD and RK2, i.e. that the training loss is small for a wider range of a values. This is a relative measure, and the magnitude of the change in the weight vector is a[01 - 0o for a change of size a, where 0o is the initial weight vector 0] is the result found by a given optimization algorithm. In Figure 6l we repeat this analysis, instead showing the loss as a function of the absolute distance in parameter space:.\n0o 01 9()=01+ 0o01\nWe again see that the size of the basin around the local minima varies by optimization algorithm Note that we evaluate the loss for weight vectors beyond the initial configuration, which had a loss Of 2.4.\nWe found that, regardless of how late we switch optimization algorithms, as shown in the right. most column of Figure 7] the local minima found were all different. This directly disagrees with the notion that the local minimum has effectively been chosen before the \"minimization'' phase, but instead that which local minimum is found is still in flux this late in optimization. It appears that. this switch from one local minimum to another happens almost immediately after the optimization method switches, with the training accuracy jumping to the characteristic accuracy for the given. method within a few epochs (Figure[7l left column). Interestingly, we also see the distance between. the initial and current weight vectors changes drastically after switching from one optimization.\n6 5 4 Sso 3 2 1 0 0 50 100 150 200 250 300 lambda adadelta rmsprop ADAM sgd ADAM&RK2 RK2\nRecall that, during optimization, it has been observed that there is a short \"transient\"' phase when the. loss decreases rapidly and a \"minimization\"' phase in which the loss decreases slowly (Section|2.2.1 and Figure[1). In this set of experiments, we investigated the effects of switching from one type of optimization method to another at various points during training, in particular at late stages of. training when it is thought that a local minimum has been chosen and is only being localized. We. switched from one optimization method to another 25%, 50%, and 75% of the way through training. The results are plotted in Figure[7d We emphasize that we are not switching methods to improve. performance, but rather to investigate the shape of the loss function in regions explored during the. 
'minimization'' phase of optimization.\n250 2.5 ed 1 2.C 1.5 0.2 0.4 0.6 0.8 1.0 100 100 alpha Epoch Epoch A200->A50-S150 A50-S150->A100-S100 A200 A100-S100 A200 A100-S100 A200<->A100-S100 A50-S150->A150-S50 A50-S150 A150-S50 A50-S150 A150-S50 A200<->A150-S50 A100-S100->A150-S50 (a) The learning rate is set to O.001 for ADAM , and then switched to SGD with learning rate O.01. 100 40 30 .7 0.4 0.6 0.8 1.0 100 150 200 100 alpha Epoch Epoch S200->S50-A150 S50-A150->S100-A100 S100-A100 S100-A100 S200<->S100-A100 S50-A150->S150-A50 S200 S200 S50-A150 S150-A50 S50-A150 S150-A50 S200<->S150-A50 S100-A100->S150-A50 (b) The learning rate is set to 0.1 for SGD, and then switched to ADAM with learning rate 0.0001 1200 100 00 100 alpha Epoch A200<->A50-ADE150 A50-ADE150->A100-ADE100 ADE200 ADE100-A100 A200 A100-ADE100 A200<->A100-ADE100 A50-ADE150->A150-ADE50 ADE50-A150 ADE150-A50 A50-ADE150 A150-ADE50 A200<->A150-ADE50 A100-ADE100->A150-ADE50 (c) The learning rate is set to O.001 for ADAM , and then switched to Adadelta (learning rate is not required) 100 900 800 80 30 20 100 150 200 50 100 Epoch 150 Epoch alpha ADE200->ADE50-A150 A200 A100-ADE100 ADE200 ADE100-A100 ADE200<->ADE100-A100 ADE50-A150->ADE150-A50 A50-ADE150 A150-ADE50 ADE50-A150 ADE150-A50 ADE200<->ADE150-A50 ADE100-A100->ADE150-A50\n(d) The learning rate is not required for Adadelta, and then switched to ADAM with learning rate O.o001\nFigure 7: Switching methods from one method to another method at epoch 50 and 100 and 150 Accuracy curve (Left two columns). Distance between initial weights to weights at each epoch (Middle). The interpolation between different convergence parameters (Right). For instance, S100 A100 as trained with SGD in the first 100 epoch and switched to ADAM for the rest of the epoch (Best viewed in zoom)\nInitial Config.e ADAM ADAM ADAM With Batch Normalization SGD Rmsprop SGDM Adadelta Rmsprop Adam Rmsprop SGDM VGG VGG VGG VGG ADAM&RK2 ADAM ADAM Initial Config.. Without Batch Normalization Rmsprop sGDM Rmsprop ADAM RmspropRmsprop ADAM&RK2 VGG ADAM VGG NIN NIN\nTable 2: Visualization of the Loss Surface with and without batch-normalization\n2.5 2.5 12000 2.0 2.0 10000 8000 1.5 1.5 SSO 6000 1.0 1.0 4000 0.5 0.5 2000 0.00.0 0.00.0 %.0 0.2 0.4 0.6 0.8 0.2 0.4 0.6 1.0 0.2 0.4 1.0 0.8 0.6 0.8 1.0 alpha alpha Epoch init-adam+ralston init-rmsprop init-adam init-rmsprop init_sgd init_adagrad init_adam init-ralston init-adam&RK2 init_sgdm init_rmsprop init-adam (a) NIN - Initial to Final. (b) VGG - Initial to Final. (c) LSTM - Initial to Final. 3.0 4.0 9000 8000 2.5 3.5 7000 3.0 2.0 6000 2.5 5000 1.5 2.0 4000 1.0 1.5 3000 1.0 2000 0.5 0.5 1000 0.0 00.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.0 0.8 1.0 alpha 0.2 0.4 0.6 0.8 1.0 Epoch alpha rmsprop-ralston ralston-adam+ralston sgd-sgdm rmsprop-adam adam-adam&RK2 adam&RK2-rmsprop rmsprop-adam adam-adam+ralston sgd-adagrad adagrad-adam rmsprop-adam+ralston adam-rmsprop ralston-adam sgd-adamrmsprop-adagrad (e) VGG - Final to Final. (d) NIN - Final to Final. (f) LSTM - Final to Final.\nFigure 8: (Without batch normalization) Loss function with parameter interporlated between Initia to Final. (Bottom) Loss function with parameters interporlated between Final to Final. Each column. 
of the plots is from a different initialization."}, {"section_index": "6", "section_name": "4.4 EFFECTS OF BATCH-NORMALIZATION", "section_text": "To understand how batch normalization affects the types of local minima found, we performed a set of experiments comparing loss surfaces near the local minima found with and without batch normalization for each of the optimization methods. We visualized the surface near these local minima by interpolating between the initial weights and the final weights, as well as between pairs of final weights found with different algorithms.\nWe observed clear qualitative differences between optimization with (Figure 5) and without (Figure 8) batch normalization. We see that, without batch normalization, the quality of the local minimum found by a given algorithm is much more dependent on the initialization. In addition, the surfaces between different local minima are more complex in appearance: with batch normalization we see sharp unimodal jumps in performance, but without batch normalization we obtain wide bumpy shapes that are not necessarily unimodal.\nNeural networks are typically initialized with very small parameter values (Glorot & Bengio, 2010; He et al., 2015). Instead, we trained NIN with exotic initializations, such as initial parameters drawn from N(-10.0, 0.01) or N(-1.0, 1.0), and observed the loss surface behaviours. The detailed results are discussed in Appendix A.5.\nIn this work, we performed a series of empirical analyses to understand the geometry of the loss functions corresponding to deep neural networks, and how different optimization methods minimize this loss, to answer the two questions posed in the introduction.\nWe found that every type of change to the optimization procedure we tested resulted in a different local minimum. Different local minima were found using the different optimization algorithms from the same initialization (Section 4.1). Even switching the optimization algorithm to another very late in optimization - during the slow "minimization" portion of learning - resulted in a different local minimum (Section 4.3). The quality of the local minima found, in terms of training and generalization error, is similar. These different local minima were not equivalent, and made mistakes on different test examples (Section 4.1). Thus, they were not trivially different local minima, as would occur if nodes in internal layers of the network were permuted. We observed that the quality of these local minima was only consistently good when we used batch normalization for regularization. Without batch normalization, the quality of the critical points found depended on the initialization, and some solutions found were not as good as others. Our observations are in contrast to the conclusions of Goodfellow et al., i.e. that local minima are not a problem in deep learning because, in the region of the loss function explored by SGD algorithms, the loss function is well-behaved (Goodfellow et al., 2015).
Instead, our observations are more consistent with the explanation that the local minima found by popular SGD optimization methods are almost all good (Choromanska et al., 2015; Kawaguchi, 2016; Soudry & Carmon, 2016).\nInterestingly, we found that, while the local minima found by the same optimization algorithm from different initializations were different, the shape of the loss function around these local minima was strikingly similar, and was a characteristic of the optimization algorithm. In particular, we found that the size of the basin around ADAM-based optimization was larger than that around vanilla SGD (Section 4.2). A large basin is related to a large margin, as small changes in the weight vector will not affect the training error, and perhaps could have some implications for generalization error. In our experiments, however, we did not observe better generalization error for ADAM than SGD. Questions for potential future research are why the shapes of the loss functions around different local minima found by the same algorithm are so similar, and what the practical implications of this are."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Pierre Baldi. Linear learning: Landscapes and algorithms. In Advances in Neural Information Processing Systems, pp. 65-72, 1989.\nPierre Baldi and Kurt Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2:53-58, 1989.\nPierre Baldi and Zhiqin Lu. Complex-valued autoencoders. Neural Networks, 33:136-147, 2012.\nLeon Bottou. Stochastic gradient learning in neural networks. In Proceedings of Neuro-Nimes, 1991.\nAlan J. Bray and David S. Dean. The statistics of critical points of Gaussian fields on large-dimensional spaces. Physical Review Letters, 2007.\nJohn C. Butcher. Coefficients for the study of Runge-Kutta integration processes. Society for Industrial and Applied Mathematics, 3:185-201, 1963.\nAnna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. arXiv preprint arXiv:1406.2572, 2015.\nMurat A. Erdogdu and Andrea Montanari. Convergence rates of sub-sampled Newton methods. In Proceedings of the Neural Information Processing Systems (NIPS), 2015.\nYan V. Fyodorov and Ian Williams. Replica symmetry breaking condition exposed by random matrix calculation of landscape complexity. Journal of Statistical Physics, 129:1081-1161, 2007.\nXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, 2010.\nIan J. Goodfellow, Oriol Vinyals, and Andrew M. Saxe. Qualitatively characterizing neural network optimization problems. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv preprint arXiv:1502.01852, 2015.\nKenji Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, 2016.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.\nAlex Krizhevsky. Learning multiple layers of features from tiny images. MSc thesis, University of Toronto, 2009.\nMin Lin, Qiang Chen, and Shuicheng Yan. Network in network. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.\nHerbert Robbins and Sutton Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400-407, 1951.\nDaniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.\nMatthew D. Zeiler, Graham W. Taylor, and Rob Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In International Conference on Computer Vision, 2011.
"}, {"section_index": "8", "section_name": "A SUPPLEMENTARY MATERIALS", "section_text": "Goodfellow et al. (2015) introduced the idea of visualizing a 1D subspace of the loss surface between the parameters. Here, we propose to visualize the loss surface in 3D space through interpolating over three and four vertices.\nLinear Interpolation Given two parameters θ₁ and θ₂,\n$$\theta = \alpha\theta_1 + (1 - \alpha)\theta_2, \quad \forall \alpha \in [0, 1].$$\nBilinear interpolation (four vertices θ₁, θ₂, θ₃, θ₄):\n$$\phi_i = \alpha\theta_1 + (1 - \alpha)\theta_2, \quad \bar{\phi}_i = \alpha\theta_3 + (1 - \alpha)\theta_4, \quad \theta = \beta\phi_i + (1 - \beta)\bar{\phi}_i,$$\nfor all α ∈ [0, 1] and β ∈ [0, 1].\nBarycentric interpolation (three vertices θ₀, θ₁, θ₂):\n$$\phi_i = \alpha\theta_1 + (1 - \alpha)\theta_0, \quad \bar{\phi}_i = \alpha\theta_2 + (1 - \alpha)\theta_0, \quad \theta = \beta\phi_i + (1 - \beta)\bar{\phi}_i,$$\nfor all α ∈ [0, 1] and β ∈ [0, 1].\nIn many deep learning applications, both the number of parameters and the quantity of input data points can be quite large. This makes the full evaluation of U(θ) prohibitively expensive. A standard technique for alleviating the computational load is to apply a stochastic approximation to the gradient (Robbins & Monro, 1951). More precisely, one approximates U by a subset of n data points, denoted by {x_{σ_j}}_{j=1}^n, at each timestep:\n$$U_n(\theta) = \frac{1}{n}\sum_{j=1}^{n} \ell(\theta, x_{\sigma_j}) \approx \frac{1}{N}\sum_{i=1}^{N} \ell(\theta, x_i) = U(\theta).$$\nThis method is what is commonly called Stochastic Gradient Descent, or SGD. So long as the data is distributed nicely, the approximation error of U_n should be sufficiently small such that not only will SGD still behave like normal GD, but its wall-clock time to converge should be significantly lower as well.\nOf course this approximation also carries over to the gradient, which is of vital importance to optimization techniques:\n$$\nabla U_n(\theta) = \frac{1}{n}\sum_{j=1}^{n} \nabla \ell(\theta, x_{\sigma_j}) \approx \nabla U(\theta).$$\nUsually one uses the stochastic gradient rather than the true gradient, but the inherent noisiness must be kept in mind. In what follows we will always mean the stochastic gradient."}, {"section_index": "9", "section_name": "A.2.2 MOMENTUM", "section_text": "In order to alleviate both noise in the input data as well as noise from the stochasticity used in computing quantities, one often maintains a history of previous evaluations. In order to only require one extra variable, one usually stores variables of the form\n$$E[F]_t = \alpha F_t + \beta E[F]_{t-1},$$\nwhere F is some value changing over time and E[F] is the averaged quantity. Applied to the gradient,\n$$E[g]_t = (1 - \alpha) g_t + \alpha E[g]_{t-1}.$$\nWith the aforementioned tools there are a variety of methods that can be constructed. We choose to view these algorithms as implementations of explicit Euler on a variety of different vector fields to remove the ambiguity between η and g_t. We can therefore define a method by the vector field X that explicit Euler is applied to, with a single η that is never changed.\nSGD This is the most fundamental method that is used in practice and the basis for everything that follows:\n$$X_t = g_t.$$\nSGDM Applying momentum to the gradient yields SGD with momentum:\n$$X_t = E[g]_t.$$\nAdagrad Adagrad rescales X by summing up the squares of all previous gradients in a coefficient:\n$$X_t = \frac{g_t}{\sqrt{\sum_{\tau \le t} g_\tau^2 + \epsilon}}.$$\nHere ε is simply set to some small positive value to prevent division by zero. In what follows we will neglect this term in denominators, though it is always necessary. The concept is to accentuate variations in g_t, but because the denominator is monotonically nondecreasing over time, this method is doomed to retard its own progress over time. The denominator can also be seen as a form of momentum where α and β are both set to 1.\nRMSprop A simple generalization of Adagrad is to simply allow α and β to be changed from 1. In particular one usually chooses α less than 1, and presumably β = 1 − α. Thus one arrives at a method where the effects of the distant history are diminished:\n$$X_t = \frac{g_t}{\sqrt{E[g^2]_t}}.$$\nAdadelta Adadelta adds another term to RMSprop in order to guarantee that the magnitude of X is balanced with g_t (Zeiler et al., 2011). More precisely, it maintains a running average of the updates, E[X²]_t, and sets\n$$X_t = \frac{\sqrt{E[X^2]_{t-1}}}{\sqrt{E[g^2]_t}} \, g_t.$$\nADAM By applying momentum to both g_t and g_t², one arrives at what is called ADAM. This is often considered a combination of SGDM + RMSprop:\n$$X_t = \frac{E[g]_t}{\sqrt{E[g^2]_t}}.$$
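The vector-field view above is straightforward to express in code. The following is a minimal NumPy sketch of the fields X_t defined in this appendix (without ADAM's bias correction, which is not discussed here); all names are illustrative, and a single explicit-Euler step is θ ← θ − ηX_t:

```python
import numpy as np

class VectorField:
    """Fields X_t from Appendix A.2; theta is updated as theta <- theta - lr * X_t."""
    def __init__(self, kind, alpha=0.9, eps=1e-8):
        self.kind, self.alpha, self.eps = kind, alpha, eps
        self.Eg = None       # running average E[g]_t        (SGDM, ADAM)
        self.Eg2 = None      # running average E[g^2]_t      (RMSprop, Adadelta, ADAM)
        self.Ex2 = None      # running average E[X^2]_{t-1}  (Adadelta)
        self.sum_g2 = None   # sum of squared gradients      (Adagrad)

    def __call__(self, g):
        a, eps = self.alpha, self.eps
        if self.Eg is None:  # lazy state initialization
            self.Eg = np.zeros_like(g); self.Eg2 = np.zeros_like(g)
            self.Ex2 = np.zeros_like(g); self.sum_g2 = np.zeros_like(g)
        self.Eg = (1 - a) * g + a * self.Eg        # E[g]_t
        self.Eg2 = (1 - a) * g**2 + a * self.Eg2   # E[g^2]_t
        self.sum_g2 += g**2
        if self.kind == "sgd":       X = g
        elif self.kind == "sgdm":    X = self.Eg
        elif self.kind == "adagrad": X = g / np.sqrt(self.sum_g2 + eps)
        elif self.kind == "rmsprop": X = g / np.sqrt(self.Eg2 + eps)
        elif self.kind == "adadelta":
            X = np.sqrt(self.Ex2 + eps) / np.sqrt(self.Eg2 + eps) * g
            self.Ex2 = (1 - a) * X**2 + a * self.Ex2  # E[X^2]_t for the next step
        elif self.kind == "adam":    X = self.Eg / np.sqrt(self.Eg2 + eps)
        return X
```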
"}, {"section_index": "10", "section_name": "A.3 RUNGE KUTTA", "section_text": "Runge-Kutta methods (Butcher, 1963) are a broad class of numerical integrators categorized by their truncation error. Because the ordinary differential equations Runge-Kutta methods solve generalize gradient descent, our augmentation is quite straightforward. Although our method applies to all explicit Runge-Kutta methods, we will only describe second-order methods for simplicity.\nThe general form of second-order explicit Runge-Kutta on a time-independent vector field is\n$$\theta_{t+1} = \theta_t + (a_1 k_1 + a_2 k_2) h, \quad k_1 = X(\theta_t), \quad k_2 = X(\theta_t + q_1 h k_1),$$\nwhere a₁, a₂, and q₁ are parameters that define a given Runge-Kutta method. Table 3 lists the parameters used for the different Runge-Kutta variants we use in our experiments.\nTable 3: The coefficients of various second-order Runge-Kutta methods (Hairer et al., 1987).\nMethod Name | a1  | a2  | q1\nMidpoint    | 0   | 1   | 1/2\nHeun        | 1/2 | 1/2 | 1\nRalston     | 1/4 | 3/4 | 2/3\nDefining\n$$\tilde{g}_t = \frac{\mathrm{Advect}_{rk2}(\theta_t, h) - \theta_t}{h} = \frac{\theta_t + (a_1 k_1 + a_2 k_2) h - \theta_t}{h} = a_1 k_1 + a_2 k_2,$$\nif we simply substitute the gradient g_t with g̃_t, one obtains an RK2-augmented optimization technique.\nThe results in Figure 9 illustrate that, with the exception of the Midpoint method, stochastic Runge-Kutta methods outperform SGD. "SGD x2" is stochastic gradient descent with twice the learning rate of "SGD". From the figure, we observe that the Runge-Kutta methods perform even better with half the number of gradients computed by SGD. The reason is that SGD has an accumulated truncation error of O(h), while second-order Runge-Kutta methods have an accumulated truncation error of O(h²).\nUnfortunately, ADAM outperforms the ADAM+RK2 methods. We speculate that this is because ADAM's renormalization of input gradients, in conjunction with momentum, eliminates the value added by using our RK-based descent directions.\n[Figure 9 plots omitted. Panels: (a) NIN - SGD & RK2; (b) VGG - SGD & RK2; (c) NIN - ADAM & ADAM+RK2; (d) VGG - ADAM & ADAM+RK2.]\nFigure 9: Training accuracy curves.\n[Figure 10 plot omitted.]\nFigure 10: Interpolation between initial points and final points up to α = 2 in Equation 2.\nThe neural networks are typically initialized with very small parameter values (Glorot & Bengio, 2010; He et al., 2015). Instead, we trained NIN with exotic initializations, such as initial parameters drawn from N(-10.0, 0.01) or N(-1.0, 1.0), and observed the loss surface behaviours. The results are shown in Figure 11. We can see that NIN without BN does not train at all with any of these initializations. Swirszcz et al. (2016) mentioned that the bad performance of neural networks trained with these initializations is due to finding a bad local minimum. However, we see that the loss surface region around these initializations is a plateau¹ rather than a bad local minimum, as shown in Figure 11b. On the other hand, NIN with BN does train slowly over time and finds a local minimum. This implies that BN redeems the ill-posed loss surface (plateau region). Nevertheless, the local minimum it found was not as good as when the parameters were initialized with small values. However, it is not totally clear whether this is due to the difficulty of training or due to falling into a bad local minimum.\n¹ We used the same initializations as Swirszcz et al. (2016) but trained different neural networks with SGD on a different dataset. We used NIN and CIFAR10, while Swirszcz et al. (2016) used smaller neural networks and MNIST.\n[Figure 11 plots omitted. Panels: (a) NIN - learning curve; (b) NIN without batch normalization; (c) NIN with batch normalization.]\nFigure 11: NIN trained from different initializations.\n[Figure 12 plots omitted.]\nFigure 12: NIN - Learning curves when switching methods from SGD to ADAM and vice versa at epochs 50 and 100. The learning rate switched from SGD (ADAM) to ADAM (SGD) at (left) 0.001 (0.1) to 0.1 (0.001), (middle) 0.001 (0.1) to 0.05 (0.001), and (right) 0.001 (0.1) to 0.01 (0.001).
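As a worked illustration of the RK2-augmented update from Appendix A.3 above, the sketch below substitutes g̃_t = a₁k₁ + a₂k₂ (computed on the negative-gradient field) for the usual gradient; the toy quadratic loss at the end is purely hypothetical:

```python
import numpy as np

# RK2 coefficients (a1, a2, q1) from Table 3; Ralston shown here.
A1, A2, Q1 = 0.25, 0.75, 2.0 / 3.0

def rk2_step(theta, grad_fn, h):
    """One RK2-augmented descent step: the vector field is the negative
    stochastic gradient, X(theta) = -grad_fn(theta), and the usual gradient
    g_t is replaced by g~_t = -(a1*k1 + a2*k2)."""
    k1 = -grad_fn(theta)
    k2 = -grad_fn(theta + Q1 * h * k1)   # second gradient evaluation per step
    g_tilde = -(A1 * k1 + A2 * k2)       # plays the role of g_t
    return theta - h * g_tilde           # same as theta + h * (a1*k1 + a2*k2)

# Example on a toy quadratic loss U(theta) = 0.5 * ||theta||^2, so grad = theta
theta = np.ones(3)
for _ in range(100):
    theta = rk2_step(theta, lambda t: t, h=0.1)
print(theta)  # approaches the minimum at 0
```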
[Figure 13 plots omitted. Panel captions:]
(a) The learning rates are set to 0.001 and 0.05 for ADAM and SGD in the beginning, and then switched to 0.05 and 0.001 for SGD and ADAM.
(b) The learning rates are set to 0.001 and 0.05 for ADAM and SGD in the beginning, and then switched to 0.05 and 0.0005 for SGD and ADAM.
Figure 13: VGG - Switching methods from SGD to ADAM and ADAM to SGD at epochs 50 and 100. Zoomed-in accuracy curves (left); distance from the initial weights to the weights at each epoch (middle); the interpolation between different convergence parameters (right). Each figure shows the results of switching methods at a different learning rate. We label the switch of methods in terms of the ratio; for instance, S100-A100 was trained with SGD for the first 100 epochs and switched to ADAM for the rest of the epochs.
[Figure 14 plots omitted. Panel captions:]
(a) A learning rate is not required for Adadelta. The learning rate is set to 0.05 for SGD in the beginning, and then switched to 0.1.
(b) The learning rate is set to 0.05 for SGD.
(c) The learning rate is set to 0.05 for SGD in the beginning, and then switched to 0.01.
Figure 14: VGG - Switching methods from SGD to Adadelta and Adadelta to SGD at epochs 50 and 100. Zoomed-in accuracy curves (left); distance from the initial weights to the weights at each epoch (middle); the interpolation between different convergence parameters (right). Each figure shows the results of switching methods at a different learning rate. We label the switch of methods in terms of the ratio; for instance, S50-A150 was trained with SGD for the first 50 epochs and switched to Adadelta for the rest of the epochs.
[Figure 15 plots omitted. Panel caption:]
(a) A learning rate is not required for Adadelta. The learning rate is set to 0.05 for SGD in the beginning, and then switched to 0.1.
Figure 15: VGG - Switching methods from ADAM to Adadelta and Adadelta to ADAM at epochs 50 and 100. Zoomed-in accuracy curves (left); distance from the initial weights to the weights at each epoch (middle); the interpolation between different convergence parameters (right). Each figure shows the results of switching methods at a different learning rate.
We label the switch of methods in terms of the ratio; for instance, ADE50-A150 was trained with Adadelta for the first 50 epochs and switched to ADAM for the rest of the epochs."}]
HJpfMIFll
[{"section_index": "0", "section_name": "GEOMETRY OF POLYSEMY", "section_text": "Jiaqi Mu, Suma Bhat, Pramod Viswanath\nDepartment of Electrical and Computer Engineering University of Illinois at Urbana Champaign. Urbana. IL 61801. USA\n{jiaqimu2, spbhat2,pramodv}@illinois.edu\nVector representations of words have heralded a transformational approach to clas sical problems in NLP; the most popular example is word2vec. However, a sin- gle vector does not suffice to model the polysemous nature of many (frequent) words, i.e., words with multiple meanings. In this paper, we propose a three-fold approach for unsupervised polysemy modeling: (a) context representations, (b) sense induction and disambiguation and (c) lexeme (as a word and sense pair) representations. A key feature of our work is the finding that a sentence contain- ing a target word is well represented by a low rank subspace, instead of a point in a vector space. We then show that the subspaces associated with a particular sense of the target word tend to intersect over a line (one-dimensional subspace), which we use to disambiguate senses using a clustering algorithm that harnesses the Grassmannian geometry of the representations. The disambiguation algorithm which we call K-Grassmeans, leads to a procedure to label the different senses of the target word in the corpus - yielding lexeme vector representations, all in an unsupervised manner starting from a large (Wikipedia) corpus in English. Apart from several prototypical target (word,sense) examples and a host of empirical studies to intuit and justify the various geometric representations, we validate ou algorithms on standard sense induction and disambiguation datasets and present new state-of-the-art results."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Distributed representations are embeddings of words in a real vector space, achieved via an appro priate function that models the interaction between neighboring words in sentences (e.g.: neural networks (Bengio et al., 2003; Mikolov et al., 2010; Huang et al., 2012), log-bilinear models (Mnih & Hinton, 2007; Mikolov et al., 2013), co-occurrence statistics (Pennington et al., 2014; Levy & Goldberg, 2014)). Such an approach has been strikingly successful in capturing the (syntactic and semantic) similarity between words (and pairs of words), via simple linear algebraic relations be tween their corresponding vector representations. On the other hand, the polysemous nature of words, i.e., the phenomenon of the same surface form representing multiple senses, is a central fea ture of the creative process embodying natural languages. For example, a large, tall machine used for moving heavy objects and a tall, long-legged, long-necked bird both share the same surface form \"crane\". A vast majority of words, especially frequent ones, are polysemous, with each word taking on anywhere from two to a dozen different senses in many natural languages. For instance, Word Net collects 26,896 polysemous English words with an average of 4.77 senses each (Miller, 1995) Naturally, a single vector embedding does not appropriately represent a polysemous word.\nSince hand-crafted lexical resources sometimes do not reflect the actual meaning of a target word in a. given context (Veronis, 2004) and, more importantly, such resources are lacking in many languages. 
(and their creation draws upon intensive expert human resources), we focus on the second approach"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "There are currently two approaches to address the polysemy issue (a detailed discussion is in Ap-. pendix A): (a) sense specific representation learning (Chen et al., 2014; Rothe & Schutze, 2015),. usually aided by hand-crafted lexical resources such as WordNet (Miller, 1995); (b) unsupervised sense induction and sense/lexeme representation learning by inferring the senses directly from text (Huang et al., 2012; Neelakantan et al., 2015; Li & Jurafsky, 2015; Arora et al., 2016b)..\nFirth's hypothesis - a word is characterized by the company it keeps (Firth, 1957) - has motivated the development of single embeddings for words, but also suggests that multiple senses for a target word could be inferred from its contexts (neighboring words within the sentence). This task is. naturally broken into three related questions: (a) how to represent contexts (neighboring words of. the target word); (b) how to induce word senses (partition instances of contexts into groups where the target word is used in the same sense within each group) and (c) how to represent lexemes (word. and sense pairs) by vectors.\nExisting works address these questions by exploring the latent structure of contexts. In an inspired. work, Arora et al. (2016b) hypothesize that the global word representation is a linear combination. of its sense representations, models the contexts by a finite number of discourse atoms, and recov-. ers the sense representations via sparse coding of all the vectors of the vocabulary (a global fit).. Other works perform a local context-specific sense induction: Li & Jurafsky (2015) introduce a sense-based language model to disambiguate word senses and to learn lexeme representations by. incorporating the Chinese restaurant process, Reisinger & Mooney (2010) and Huang et al. (2012). label the word senses by clustering the contexts based on the average of the context word embed-. dings and learn lexeme representations using the labeled corpus. Neelakantan et al. (2015) retain the. representation of contexts by the average of the word vectors, but improves the previous approach. by jointly learning the lexeme vectors and the cluster centroids in one shot..\nGrassmannian Model: We depart from the linear latent models in these prior works by presenting a nonlinear (Grassmannian) geometric property of contexts. We empirically observe and hypothesize that the context word representations surrounding a target word reside roughly in a low dimensional subspace. Under this hypothesis, a specific sense representation for a target word should reside in all the subspaces of the contexts where the word means this sense. Note that these subspaces need not cluster at all: a word such as \"launch' in the sense of \"beginning or initiating a new endeavor' could be used in a large variety of contexts. Nevertheless, our hypothesis that large semantic units (such as sentences) reside in low dimensional subspaces implies that the subspaces of all contexts where the target word shares the same meaning should intersect non-trivially. This further implies that there exists a direction (one dimensional subspace) that is very close to all subspaces and we treat such an intersection vector as the representation of a group of subspaces. 
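As a minimal illustration of the subspace representation just described, the following sketch builds a context subspace from the top-N principal components of the context word vectors. It is only a sketch under assumed inputs (pretrained word vectors as a dict and a tokenized context), and all names here are illustrative:

```python
import numpy as np

def context_subspace(context_words, embeddings, N=3):
    """Represent a context (a multiset of words) by the subspace spanned by the
    top-N principal components of its word vectors. `embeddings` is assumed to
    map each word to a d-dimensional numpy vector (e.g. from word2vec).
    Returns an (N, d) array whose rows form an orthonormal basis of S(c)."""
    V = np.array([embeddings[w] for w in context_words if w in embeddings])
    # Whether to mean-center before PCA is an implementation detail not pinned
    # down in the text; we center here, as is conventional for PCA.
    V = V - V.mean(axis=0)
    _, _, vt = np.linalg.svd(V, full_matrices=False)
    return vt[:N]
```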
Following this intuition, we propose a three-fold approach to deal with the three central questions posed above:\nExperiments: The lexical aspect of our algorithm (i.e., senses can be induced and disambiguatec individually for each word) as well as the novel geometry (subspace intersection vectors) jointly allow us to capture subtle shades of senses. For instance, in \"Can you hear me? You're on the air. One of the great moments of live television, isn't it?\", our representation is able to capture the occurrence of \"air' to mean \"live event on camera\". In contrast, with a global fit such as that ir\nin this paper; such an approach is inherently scalable and potentially plausible with the right set of ideas. Indeed, a human expects the contexts to cue in on the particular sense of a specific word,. and successful unsupervised sense representation and sense extraction algorithms would represent progress in the broader area of representation of natural language. Such are the goals of this work.\nContext Representation: we define the context for a target word to be a set of left W anc. right W non-functional words of the target word (W ~ 10 in our experiments), includin. the target word itself, and represent it by a low-dimensional subspace spanned by its contex. word representations; Sense Induction and Disambiguation: we induce word senses from their contexts b. partitioning multiple context instances into groups, where the target word has the same. sense within each group. Each group is associated with a representation - the intersectiol direction - found via K -Grassmeans, a novel clustering method that harnesses the geometr of subspaces. Finally, we disambiguate word senses for new context instances using th respecive group representations; Lexeme Representation: the lexeme representations can be obtained by running an off. the-shelf word embedding algorithm on a labeled corpus. We label the corpus through har. decisions (involving erasure labels) and soft decisions (probabilistic labels), motivated b analogous successful approaches to decoding of turbo and low density parity check codes. (LDPC).\nAs a quantitative demonstration of the latent geometry captured by our methods, we evaluate the proposed induction algorithm on standard Word Sense Induction (WsI) tasks. Our algorithm out performs state-of-the-art on two datasets: (a) SemEval-2010 Task 14 (Manandhar et al., 2010) whose word senses are obtained from OntoNotes (Hovy et al., 2006); and (b) a custom-built dataset built by repurposing the polysemous dataset of (Arora et al., 2016b). In terms of lexeme vector embeddings our representations have evaluations comparable to state-of-the-art on standard tasks - the word sim ilarity task of SCwS (Huang et al., 2012) - and significantly better on a subset of the SCwS dataset which focuses on polysemous target words and the \"police lineup\"' task of (Arora et al., 2016b). A detailed study of the experiments and quantitative results can be found in Section 3.\nWe propose a three-fold approach that includes: (a) context representation, (b) word sense induction and disambiguation, and (c) lexeme representation.."}, {"section_index": "3", "section_name": "2.1 CONTEXT REPRESENTATION", "section_text": "Contexts refer to entire sentences or (long enough) consecutive blocks of words in sentences sur. rounding a target word. Efficient distributed vector representations for sentences and paragraphs. 
are active topics of research in the literature (Le & Mikolov, 2014; Tai et al., 2015), with much emphasis on appropriately relating the individual word embeddings with those of the sentences (and paragraphs) they reside in. The scenario of contexts studied here is similar in the sense that they constitute long semantic units similar to sentences, but different in that we are considering semantic units that all have a common target word residing inside them. Instead of a straightforward application of existing literature on sentence (and paragraph) vector embeddings to our setting, we deviate and propose a non-vector space representation; such a representation is central to the results of this paper and is best motivated by the following simple experiment.\nIn this paper, we represent a context c (as a multiset of words) by a point in the Grassmannian manifold - a subspace (denoted by S(c)) spanned by its top N principal components (denoted by {u_n(c)}_{n=1}^N). Such a representation is motivated by a subspace hypothesis (its empirical validation is given in Appendix B) that the context word representations surrounding a target word reside roughly in a low-dimensional subspace. A detailed algorithm chart for context representations is provided in Appendix J, for completeness."}, {"section_index": "4", "section_name": "2.2 SENSE INDUCTION AND DISAMBIGUATION", "section_text": "We now turn to sense induction, a basic task that explores polysemy: in this task, a set of sentences (each containing a common target word) have to be partitioned such that the target word is used in the same sense in all the sentences within each partition. The number of partitions relates to the number of senses being identified for the target word. The geometry of the subspace representations plays a key role in our algorithm and we start with this next.\nGeometry of Polysemy Consider a target monosemous word w and a context sentence c containing this word w. The empirical experiment from the previous section allows us to represent c by an N-dimensional subspace of the vectors of the words in c. Since N (3~5 in our experiments) is much smaller than the number of words in c (21, on average), one suspects that the representation associated with c wouldn't change very much if the target word w were expurgated from it, i.e., S(c) ≈ S(c\w). Putting these two observations together, one arrives at the following hypothesis, in the context of monosemous target words:\nIntersection Hypothesis: the target word vector v(w) should reside in the intersection of S(c\w), where the intersection is over all its contexts c.\nWe propose an algorithmic approach to robustly discover the intersection direction: it involves finding the direction vector that is "closest" to all subspaces, which we propose doing by solving the following optimization problem,\n$$u(w) = \arg\min_{\|u\|=1} \sum_{c \ni w} d(u, S(c \setminus w))^2, \quad d(u, S)^2 = \|u\|^2 - \sum_{n=1}^{N} (u^\top u_n)^2, \quad (1)$$\nwhere d(u, S) is the shortest ℓ2-distance between u and the subspace S, and u_1, ..., u_N are N orthonormal basis vectors for the subspace S. It is worth mentioning that we chose d(·,·) as an ℓ2-distance for simplicity, and there are alternative metrics. Specifically, this metric gives a closed-form solution,\n$$u(w) = \arg\max_{\|u\|=1} \sum_{c \ni w} \sum_{n=1}^{N} (u^\top u_n(c \setminus w))^2, \quad (2)$$\nwhich can be solved by taking the first principal component of {u_n(c\w)}.\nThe property that context subspaces of a monosemous word intersect at one direction naturally generalizes to polysemy:\nPolysemy Intersection Hypothesis: the context subspaces of a polysemous word intersect at different directions for different senses.\nA detailed study validating the hypotheses and the corresponding visualization can be found in Appendix C and Appendix D.\nSense Induction We can use the representation of senses by the intersection directions of context subspaces for unsupervised sense induction: supposing the target polysemous word has K senses (known ahead of time for now), the goal is to partition the contexts associated with this target word into K groups, within each of which the target polysemous word shares the same sense.
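The closed-form solution (2) admits a short implementation: stack the orthonormal basis vectors of all context subspaces and take their first principal component. The following is a minimal sketch with hypothetical helper names of our own choosing:

```python
import numpy as np

def intersection_direction(subspace_bases):
    """Eq. (2): the unit vector maximizing the sum of squared projections onto
    the context-subspace bases is the first principal component of the stacked
    basis vectors u_n(c \\ w)."""
    U = np.vstack(subspace_bases)            # rows: all u_n(c\w), shape (sum_N, dim)
    # first right singular vector of U = top eigenvector of U^T U
    _, _, vt = np.linalg.svd(U, full_matrices=False)
    return vt[0]                             # unit-norm intersection direction

def subspace_dist_sq(u, basis):
    """d(u, S)^2 = ||u||^2 - sum_n (u^T u_n)^2, for an orthonormal basis of S."""
    proj = basis @ u
    return float(u @ u - proj @ proj)
```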
The fact that two groups of context subspaces, corresponding to different senses, intersect at different directions motivates our geometric algorithm: we note that each one of the contexts belongs to a group associated with the nearest intersection direction, which serves as a prototype of the group. Part of the task is also to identify the most appropriate intersection direction vectors associated with each group. This task represents a form of unsupervised clustering, which can be formalized as the optimization problem below.\nGiven a target polysemous word w, M contexts c_1, ..., c_M containing w, and a number K indicating the number of senses w has, we would like to partition the M contexts into K sets S_1, ..., S_K so as to minimize the distance d(·,·) of each subspace to the intersection direction of its group,\n$$\min_{u_1, \ldots, u_K, S_1, \ldots, S_K} \sum_{k=1}^{K} \sum_{c \in S_k} d^2(u_k, S(c \setminus w)). \quad (3)$$\nThis problem (3) is analogous to the objective of K-means clustering for vectors, and solving it exactly in the worst case can be shown to be NP-hard. We propose a natural algorithm by repurposing traditional K-means clustering, built for vector spaces, to the Grassmannian space (a detailed algorithm chart is provided in Appendix K). The difference between K-Grassmeans and K-means lies in the maximization step:\n- Initialization: we randomly initialize K unit-length vectors u_1, ..., u_K.\n- Expectation: we group contexts based on the distance to each intersection direction: S_k ← {c_m : d(u_k, S(c_m \ w)) ≤ d(u_{k'}, S(c_m \ w)) ∀ k'}, ∀ k.\n- Maximization: we update the intersection direction for each group with the contexts in the group: u_k ← argmin_u Σ_{c ∈ S_k} d²(u, S(c \ w)).\nNote that our algorithm can be run for any one specific target word, and makes for efficient online sense induction; this is relevant in information retrieval applications where the sense of the query words may need to be found in real time.
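A minimal sketch of the K-Grassmeans loop just described, reusing the hypothetical intersection_direction and subspace_dist_sq helpers sketched earlier (an illustrative sketch rather than a tuned implementation):

```python
import numpy as np

def k_grassmeans(context_bases, K, n_iter=20, seed=0):
    """context_bases: list of (N, dim) orthonormal bases S(c_m \\ w).
    Returns K unit intersection directions and the group label of each context."""
    rng = np.random.default_rng(seed)
    dim = context_bases[0].shape[1]
    u = rng.normal(size=(K, dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit-length init
    labels = np.zeros(len(context_bases), dtype=int)
    for _ in range(n_iter):
        # Expectation: assign each context to its nearest intersection direction
        d = np.array([[subspace_dist_sq(u[k], B) for k in range(K)]
                      for B in context_bases])
        labels = d.argmin(axis=1)
        # Maximization: re-solve eq. (2) within each group
        for k in range(K):
            members = [B for B, l in zip(context_bases, labels) if l == k]
            if members:
                u[k] = intersection_direction(members)
    return u, labels
```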
We repeat the experiment over 1,00 trials with K varying from 2~8 and the accuracy of sense induction is reported in Fig ure 1(c). Compared to the baseline algorithm proposed in (Huang et al., 2012; Neelakantan et al., 2015), K-Grassmean on subspaces outperforms K-means on average word vectors by 5-6%. We also provide a qualitative study of this algorithm on real data in Appendix E, where we study the semantics of each group for an example target word \"columbia\"'. From these experiments we see that K-Grassmeans performs very well, qualitatively and quantitatively.\n80000 18000 1.0 monastery 16000 Xemployers 70000 kGrassmean+subspace 1 phd exiled 0.9 kMeans+average 14000 60000 grossed 12000 incredible 0.8 ureer 50000 rrenneeey 10000 unreleased ear 0.7 40000 accu ebu 8000 0.6 30000 6000 20000 0.5 4000 10000 2000 0.4 2 3 4 5 6 7 8 0 # senses group-0 group-1 group-0group-1 group-2 group-3 group-4 (a) K=2 (b) K=5 c) accuracy\nA quantitative experiment on large and standardized real datasets (which involves real polysemou target words as opposed to synthetic ones), with a comparison with other algorithms in the literature is detailed in Section 3, where we see that K-Grassmeans outperforms state-of-the-art.\nSense Disambiguation Having the intersection directions to represent the senses, we are ready tc disambiguate a target word sense in a given context using the learned intersection directions specific. to this target word: for a new context instance for a polysemous word, the goal is to identify which sense this word means in the context. Our approach is three-fold: represent the context by a low. dimensional subspace S(c \\ w) approximation of the linear span of the word embeddings of non. functional words in the context, find the orthogonal projection distance between the intersection. vector u(w) and the context subspace. and finally output k* that minimizes the distance, i.e...\nWe refer to (4) as a hard decoding of word senses since this outputs a deterministic label. At times. it makes sense to consider a soft decoding algorithm where the output is a probability distribution The probability that w takes k-th sense given the context c is defined via..\nexp(-d(ux(w),S(c\\w)) P(w,c, k) k, exp(-d(uk'(w),S(c\\w)))\nHere we calculate the probability as a monotonic function of the cosine distance between the in tersection vector ug(w) and the context subspace S(cw), inspired by similar heuristics in the literature (Huang et al., 2012). A qualitative study of (4) and (5) is again provided in Appendix E where we apply (4) and (5) on the target word \"columbia'' and five sentences as its contexts.\nFigure 1: A synthetic experiment to study the performances of K-Grassmeans: (a) monosemous words: \"monastery' and \"phd\"; (b) K = 5 monosemous words: \"employers\", \"exiled\", \"grossed\". 'incredible\" and \"unreleased\"'; (c) accuracy versus K..\nk* = argmind(ux(w), S(c\\w))\nInduction and disambiguation are important tasks by themselves, but several downstream applica tons can use a distributed vector representation of the multiple senses associated with a target word Just as with word representations, we expect the distributed lexeme representations to have semantic meanings - similar lexemes should be represented by similar vectors..\nIt seems natural to represent a lexeme sk(w) of a given word w by the intersection vector associated. with the k-th sense group of w, i.e., u(w). Such an idea is supported by an observation that the in-. 
intersection vector is close to the word representation vector for many monosemous words (a detailed study of this observation is provided in Appendix F). Despite this empirical evidence, somewhat surprisingly, lexeme representation using the intersection vectors turns out to be not such a good idea, and the reason is fairly subtle. It turns out that the intersection vectors are concentrated on a relatively small surface area on the sphere (magnitudes are not available in the intersection vectors) - the cosine similarity between two random intersection vectors among 10,000 intersection vectors (five intersection vectors each for 2,000 polysemous words) is 0.889 on average with standard deviation 0.068. This is quite in contrast to analogous statistics for (global) word embeddings from the word2vec algorithm: the cosine similarity between two random word vectors is 0.134 on average with standard deviation 0.072. Indeed, word vector representations are known to be approximately uniformly scattered on the unit sphere (the so-called isotropy property; see Arora et al. (2016a)). The intersection vectors cluster together far more and are quite far from being isotropic - yet they are still able to distinguish different senses, as shown by the empirical studies and qualitative experiments on prototypical examples above (and also on standard datasets, as seen in Section 3).\nDue to this geometric mismatch between word vectors and intersection directions, and the corresponding mismatch in the linear algebraic properties expected of these distributed representations, it is not appropriate to use the intersection direction as the lexeme vector representation. In this light, we propose to learn the lexeme representations by an alternate (and more direct) procedure: first label the polysemous words in the corpus using the proposed disambiguation algorithm from Section 2.2 and then run a standard word embedding algorithm (we use word2vec) on this labeled corpus, yielding lexeme embeddings. There are several possibilities regarding labeling, and these are discussed next.\nHard Decodings We label the corpus using the disambiguation algorithm as in (4). A special label "IDK" representing "I don't know" is introduced to avoid introducing too many errors during the labeling phase, since (a) our approach is based on the bag-of-words model and cannot guarantee to label every sense correctly (for example, "arm" in "the boat commander was also not allowed to resume his career in the Greek Navy due to his missing arm which was deemed a factor that could possibly raise enquiries regarding the mission which caused the trauma." will be labeled as "weapon"); and (b) we are not clear how such errors will affect existing word embedding algorithms.\nAn "IDK" label is introduced via checking the closest distance between the context subspace and the intersection directions, i.e., letting u_{k*}(w) be the closest intersection vector of w to context c, we will label this instance as k* if d(u_{k*}(w), S(c\w)) < θ and as "IDK" otherwise, where θ is a hyperparameter. A detailed algorithm chart for sense disambiguation and corpus labeling is provided in Appendix L. The "IDK" label includes instances of words that mean a rare sense (for example: "crane" as in stretching the neck), or a confusing sense which requires disambiguation of context words (for example: "book" and "ticket" in "book a flight ticket").
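A minimal sketch of the two labeling rules: hard decoding with the "IDK" erasure, using the hyperparameter θ (set to 0.6 in Section 3), together with the soft probabilities of (5) used by the soft decoding discussed next. The helper subspace_dist_sq is the hypothetical one sketched earlier:

```python
import numpy as np

IDK = -1  # erasure label for uncertain instances

def hard_decode(intersection_dirs, context_basis, theta=0.6):
    """Label a context with its nearest sense k* (eq. 4), or IDK when even the
    closest intersection direction is farther than the threshold theta."""
    d = np.array([subspace_dist_sq(u, context_basis) ** 0.5
                  for u in intersection_dirs])
    k_star = int(d.argmin())
    return k_star if d[k_star] < theta else IDK

def soft_decode(intersection_dirs, context_basis):
    """Eq. (5): a softmax of negative subspace distances over the K senses."""
    d = np.array([subspace_dist_sq(u, context_basis) ** 0.5
                  for u in intersection_dirs])
    p = np.exp(-d)
    return p / p.sum()
```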
The IDK labeling procedure is inspired by analogous scenarios in reliable communication over noisy channels, where coded bits whose log-likelihood ratio is close to zero are in practice better labeled as "erasures" than treated as informative for the overall decoding task (Cidon et al., 2012).\nSoft Decodings Another way of labeling is via the absolute scores of K-Grassmeans disambiguation for each sense of a target word in a specific context, cf. Equation (5). Soft decoding involves generating a random corpus by sampling one sense for every occurrence of a polysemous word according to its probability distribution from (5). Then lexeme representations are obtained via an application of a standard word embedding algorithm (we use word2vec) on this (random) labeled corpus. Since we only consider words that are frequent enough (i.e., whose occurrence count is larger than 10,000), each sense of a polysemous word is sampled enough times to allow a robust lexeme representation with high probability.\nSoft decoding benefits in two scenarios: (a) when a context has enough information for disambiguation (i.e., the probability distribution (5) concentrates on one sense), the random sampling will have a high chance of making a correct decision; (b) otherwise (i.e., the probability distribution has more than one peak), the random sampling will have a chance of not making a wrong (irreversible) decision."}, {"section_index": "5", "section_name": "3 EXPERIMENTS", "section_text": "Preliminaries All our algorithms are unsupervised and operate on a large corpus obtained from Wikipedia dated 09/15. We use WikiExtractor (http://medialab.di.unipi.it/wiki/Wikipedia_Extractor) to extract the plain text. We use the skip-gram model from word2vec (Mikolov et al., 2013) as the word embedding algorithm, with the default parameter setting. We set c = 10 as the context window size and set N = 3 as the rank of the PCA. We choose K = 2 and K = 5 in our experiments. For the disambiguation algorithm, we set θ = 0.6.\nBaselines Our main comparisons are with algorithms that conduct unsupervised polysemy disambiguation, specifically the sense clustering method of Huang et al. (2012), the multi-sense skip-gram model (MSSG) of Neelakantan et al. (2015) with different parameters, and the sparse coding method with a global dictionary of Arora et al. (2016b). We were able to download the word and sense representations of Huang et al. (2012) and Neelakantan et al. (2015) online, and trained the word and sense representations of Arora et al. (2016b) on the same corpus as that used by our algorithms."}, {"section_index": "6", "section_name": "3.1 SENSE INDUCTION AND DISAMBIGUATION", "section_text": "Word sense induction (WSI) tasks conduct the following test: given a set of context instances containing a target word, one is asked to partition the context instances into groups such that within each group the target word shares the same sense. We test K-Grassmeans on two datasets - a standard one from the test set of SemEval-2010 (Manandhar et al., 2010) and a custom-built Make-Sense-2016. Appendix G gives a detailed description of the two datasets.\nWe evaluate the performance of the algorithms on this (disambiguation) task according to standard measures in the literature: V-Measure and paired F-score; these two evaluation metrics also feature in the SemEval-2010 WSI task (Manandhar et al., 2010). V-Measure is an entropy-based external cluster evaluation metric. Paired F-score evaluates clustering performance by converting the clustering problem into a binary classification problem - given two instances, do they belong to the same cluster or not? Both metrics operate on a contingency table A = {a_tk}, where a_tk is the number of instances that are manually labeled as t and algorithmically labeled as k. A detailed description is given in Appendix M for completeness.
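As an illustration of how the two metrics operate on the contingency table A = {a_tk}, here is a minimal sketch implementing the standard definitions (our own code, not the official SemEval scorer):

```python
import numpy as np
from math import log

def v_measure(A):
    """V-Measure from a contingency table A[t, k] (gold label t, cluster k):
    the harmonic mean of homogeneity and completeness (entropy-based)."""
    A = np.asarray(A, dtype=float); n = A.sum()
    pt, pk = A.sum(1) / n, A.sum(0) / n
    Ht = -sum(p * log(p) for p in pt if p > 0)   # H(gold)
    Hk = -sum(p * log(p) for p in pk if p > 0)   # H(clusters)
    Htk = -sum(A[t, k] / n * log(A[t, k] / A[:, k].sum())     # H(gold | cluster)
               for t in range(A.shape[0]) for k in range(A.shape[1]) if A[t, k] > 0)
    Hkt = -sum(A[t, k] / n * log(A[t, k] / A[t, :].sum())     # H(cluster | gold)
               for t in range(A.shape[0]) for k in range(A.shape[1]) if A[t, k] > 0)
    h = 1 - Htk / Ht if Ht > 0 else 1.0          # homogeneity
    c = 1 - Hkt / Hk if Hk > 0 else 1.0          # completeness
    return 2 * h * c / (h + c) if h + c > 0 else 0.0

def paired_f_score(A):
    """Paired F-score: precision/recall over pairs of instances that share a
    cluster versus pairs that share a gold label."""
    A = np.asarray(A, dtype=float)
    pairs = lambda x: (x * (x - 1) / 2).sum()
    tp = pairs(A)                                # same cluster and same gold label
    p = tp / pairs(A.sum(0)) if pairs(A.sum(0)) else 0.0
    r = tp / pairs(A.sum(1)) if pairs(A.sum(1)) else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```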
Both metrics range from 0 to 1, and a perfect clustering gives a score of 1. Empirical statistics show that V-Measure favors solutions with a larger number of clusters and paired F-score favors those with a smaller number of clusters.\nThroughout this paper we have conducted multiple qualitative and empirical experiments to highlight and motivate the various geometric representations. In this section we evaluate our algorithms (the sense disambiguation method and the sense representations) empirically on (standardized) datasets from the literature, allowing us to get a quantitative feel for the performance on large datasets, as well as afford a comparison with other algorithms from the literature.\nTable 1: Performance (V-Measure and paired F-score, x100) on the word sense induction task.\n                    | SemEval-2010                  | Make-Sense-2016\nalgorithms          | V-Measure | F-score | # cluster | V-Measure | F-score | # cluster\nMSSG.300D.6K        | 6.90      | 48.43   | 2.45      | 14.40     | 57.91   | 2.35\nNP-MSSG.300D.6K     | 6.50      | 52.45   | 2.56      | 15.50     | 55.39   | 3.05\nHuang 2012          | 10.60     | 38.05   | 6.63      | 15.50     | 47.40   | 6.15\naverage-#cluster=2  | 5.30      | 53.39   | 1.91      | 15.90     | 59.33   | 1.92\naverage-#cluster=5  | 11.10     | 40.66   | 4.41      | 20.30     | 47.80   | 4.50\nsubspace-#cluster=2 | 7.10      | 57.25   | 1.86      | 28.80     | 64.66   | 1.98\nsubspace-#cluster=5 | 14.40     | 44.17   | 4.23      | 34.30     | 58.25   | 4.58\nTable 2: Performance (V-Measure, paired F-score, and supervised recall (SR), x100) on the word sense induction task from SemEval-2010.\nalgorithms  | V-Measure | F-score | SR (60%/40%) | SR (80%/20%) | # cluster\nUoY         | 15.7      | 49.8    | 62.4         | 62.0         | 11.54\nHermit      | 16.2      | 26.7    | 58.3         | 57.3         | 10.78\nDuluth-WSI  | 9.0       | 41.1    | 60.5         | 59.5         | 4.15\n#cluster=2  | 7.50      | 55.0    | 60.2         | 60.1         | 1.88\n#cluster=5  | 16.5      | 42.8    | 64.5         | 65.2         | 4.23\nTable 1 shows the detailed results of our experiments, where all algorithms use Wikipedia as the training set, and from which we see that K-Grassmeans strongly outperforms the others. For completeness, we adopt the same setting as SemEval-2010 by using the given training corpus and compare K-Grassmeans against the top reporting systems (i.e., UoY (Korkontzelos & Manandhar, 2010), Hermit (Jurgens & Stevens, 2010), and Duluth-WSI (Pedersen, 2010)) in (Manandhar et al., 2010) using all four metrics in Table 2. From Table 2 we can see that K-Grassmeans also strongly outperforms the participating systems."}, {"section_index": "7", "section_name": "3.2 LEXEME REPRESENTATION", "section_text": "The key requirement of lexeme representations is that they should have the same properties as word embeddings, i.e., similar lexemes (or monosemous words) should be represented by similar vectors. Hence we evaluate our lexeme representations on a standard word similarity task focusing on context-specific scenarios: the Stanford Contextual Word Similarities (SCWS) dataset (Huang et al., 2012). In addition to the word similarity task on SCWS, we also evaluate our lexeme representations on the "police lineup" task proposed in Arora et al. (2016b).\nWord Similarity on SCWS The task is as follows: given a pair of target words, the algorithm assigns a measure of similarity between this pair of words.
The algorithm is evaluated by checking the degree of agreement between the similarity measure given by the algorithm and the similarity measure given by humans, in terms of Spearman's rank correlation coefficient. Although this SCWS dataset is not meant specifically for polysemy, we repurpose it for our tasks since it asks for the similarity between two words in two given sentential contexts (the contexts presumably provide important clues to the human rater on the senses of the target words) and also because this is a large (involving 2,003 word pairs) and standard dataset in the literature, with 10 human-rated similarity scores, each rated on an integral scale from 1 to 10. We take the average of the 10 human scores to represent the human judgment.\nWe propose two measures between w and w' given their respective contexts c and c' - one (denoted by HardSim) is based on the hard decoding algorithm and the other (denoted by SoftSim) is based on the soft one. HardSim and SoftSim are defined via\n$$\mathrm{HardSim}(w, w', c, c') = d(v_k(w), v_{k'}(w')), \quad (6)$$\nwhere k and k' are the senses obtained via (4), v_k(w) and v_{k'}(w') are the lexeme representations for the two senses, and d(v, v') is the cosine similarity between two vectors v and v', i.e., d(v, v') = v^T v' / (\|v\| \|v'\|), and\n$$\mathrm{SoftSim}(w, w', c, c') = \sum_{k} \sum_{k'} P(w, c, k) P(w', c', k') d(v_k(w), v_{k'}(w')), \quad (7)$$\nwhere P(w, c, k) is the probability that w takes the k-th sense given the context c, defined in (5).\nTable 3 shows the detailed results on this task. Here we conclude that in general our lexeme representations have a similar performance to the state of the art with both soft and hard decisions. It is worth mentioning that the vanilla word2vec representation (which simply ignores the provided contexts) also has a comparable performance - this makes some sense since some of the words in SCWS are monosemous (and hence their meanings are context-independent).\nTo separate out the effect of the combination of monosemous and polysemous words in the SCWS dataset, we expurgate monosemous words from the corpus, creating a smaller version that we denote by SCWS-lite. In SCWS-lite, we only consider the pairs of sentences where both target words in one pair have the same surface form but different contexts (and hence potentially different senses). SCWS-lite now contains 241 pairs of words, which is roughly 12% of the original dataset. The performances of the various algorithms are detailed in Table 3, from which we see that our representations (and corresponding algorithms) outperform the state of the art, showcasing their superior representational performance when it comes to context-specific polysemy disambiguation.\nTable 3: Performance (Spearman's rank correlation, x100) on the SCWS task.\nPolice Lineup Task This task is proposed in Arora et al. (2016b) to evaluate the efficacy of their sense representations (via the vector representations of the discourse atoms). The testbed contains 200 polysemous words and their 704 senses, where each sense is defined by eight related words. For example, the "tool/weapon" sense of "axes" is represented by "handle", "harvest", "cutting", "split", "tool", "wood", "battle", and "chop". The task is the following: given a polysemous word, the algorithm needs to identify the true senses of this word from a group of m senses (which includes many distractors) by outputting k senses from the group.
The algorithm is evaluated by the corresponding precision and recall scores.\nThis task offers another opportunity to test our representations with the others in the literature, and also provide insight into some of those representations themselves. One possible algorithm is to simply use that proposed in Arora et al. (2016b) where we replace their sense representations with ours: Let sk(w) denote a sense of w, and let L denote the set of words representing a sense. We define a similarity score between a sense representation of w and a sense set from the m senses as,\nFigure 2: Precision and recall curve in the sense identification task where we let m = 20 and k vary from 1 to 6.\nFigure 2 shows the precision and recall curve in the polysemy test where we let m = 20 and let k vary from 1 to 6. First, we observe that our representations are uniformly better over the precision-recall curve than the state of the art, although by a relatively small margin. Soft decoding performs slightly better than hard decoding over this task. Second, the surprising finding is that the\n0.9 1.0 word2vec word2vec Arora 2016 0.9 0.8 Arora 2016 # cluster = 2 # cluster = 2 0.8 # cluster = 5 prrreeeon # cluster = 5 prrreeon 0.7 0.7 0.6 0.6 0.5 0.5 0.4 0.4 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.2 0.3 0.4 0.5 0.6 0.7 0.8 recall recall (a) hard (b) soft\nbaseline we create using vanilla word2vec representations (precise details of this baseline algorithm are provided in Appendix H for completeness) performs as well as the state-of-the-art described in Arora et al. (2016b). A careful look shows that all algorithm outputs (word2vec, ours and those in Arora et al. (2016b)) are highly correlated - they all make correct calls on obvious instances and all make mistakes for confusing instances. We believe that this is because of the following: (a) the algorithm 1 in Arora et al. (2016b) is highly correlated with word2vec since their overall similarity measure uses a linear combination of the similarity measures associated with the sense (discourse atom) vector and the word vector (see Step 6 of Algorithm 1 in Arora et al. (2016b)); (b) word2vec embeddings appear to have an ability to capture two different senses of a polysemous word (as discussed earlier ); (c) the instances where the errors occur all seem to be either genuinely subtle or rare in the domain where embeddings were trained (for instance \"bat' in the sense of fluttering eyelashes is rare in the Wikipedia corpus, and is one of the error instances)."}, {"section_index": "8", "section_name": "4 CONCLUSION", "section_text": "In this paper, we study the geometry of contexts and polysemy and propose a three-fold approach (entitled K-Grassmeans) to model target polysemous words in an unsupervised fashion: (a) we represent a context (non-function words surrounding the target word) by a low rank subspace, (b) induce word senses by clustering the subspaces in terms of a distance to an intersection vector anc (c) representing lexemes (as a word and sense pair) by labeling the corpus. Our representations are novel and involve nonlinear (Grassmannian) geometry of subspaces and the clustering algorithms are designed to harness this specific geometry. The overall performance of the method is evaluated quantitatively on standardized word sense induction and word similarity tasks and we present new state-of-the-art results. 
Several new avenues of research in natural language representations arise from the ideas in this work and we discuss a few items in Appendix I."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. A latent variable model approach to pmi-based word embeddings. Transactions of the Association for Computational Linguistics, 4:385-399, 2016a. ISSN 2307-387X. URL https://transacl.org/ojs/index.php/tacl/article/view/742.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Linear algebraic structure of word senses, with applications to polysemy. arXiv preprint arXiv:1601.03764, 2016b.

Asaf Cidon, Kanthi Nagaraj, Sachin Katti, and Pramod Viswanath. Flashback: decoupled lightweight wireless control. ACM SIGCOMM Computer Communication Review, 42(4):223-234, 2012.

John Rupert Firth. Papers in linguistics, 1934-1951. Oxford University Press, 1957.

David Jurgens and Keith Stevens. Hermit: Flexible clustering for the semeval-2 wsi task. In Proceedings of the 5th international workshop on semantic evaluation, pp. 359-362. Association for Computational Linguistics, 2010.

Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137-1155, 2003.

Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. A unified model for word sense representation and disambiguation. In EMNLP, pp. 1025-1035. Citeseer, 2014.

Ioannis Korkontzelos and Suresh Manandhar. Uoy: Graphs of unambiguous vertices for word sense induction and disambiguation. In Proceedings of the 5th international workshop on semantic evaluation, pp. 355-358. Association for Computational Linguistics, 2010.

Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pp. 2177-2185, 2014.

Suresh Manandhar, Ioannis P Klapaftis, Dmitriy Dligach, and Sameer S Pradhan. Semeval-2010 task 14: Word sense induction & disambiguation. In Proceedings of the 5th international workshop on semantic evaluation, pp. 63-68. Association for Computational Linguistics, 2010.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41, 1995.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-43, 2014.

Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.

Jean Veronis. Hyperlex: lexical cartography for information retrieval. Computer Speech & Language, 18(3):223-252, 2004.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Joseph Reisinger and Raymond J Mooney. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 109-117. Association for Computational Linguistics, 2010."}, {"section_index": "10", "section_name": "A RELATED WORK", "section_text": "There are two main approaches to model polysemy: one is supervised and uses linguistic resources
(Chen et al., 2014; Rothe & Schutze, 2015), and the other is unsupervised, inferring senses directly from a large text corpus (Huang et al., 2012; Neelakantan et al., 2015; Li & Jurafsky, 2015; Arora et al., 2016b). Our approach belongs to the latter category.

There are differing approaches to harness hand-crafted lexical resources (such as WordNet): Chen et al. (2014) leverages a "gloss" as a definition for each lexeme, and uses this to model and disambiguate word senses. Rothe & Schutze (2015) models sense and lexeme representations through the ontology of WordNet. While the approaches are natural and interesting, they are inherently limited due to (a) the coverage of WordNet: WordNet only covers 26k polysemous words, and the senses for polysemous words are not complete and are domain-agnostic (for example, the mathematical sense for "ring" is missing in WordNet, and a majority of occurrences of "ring" mean exactly this sense in the Wikipedia corpus); and (b) the fine-grained nature of WordNet: WordNet senses appear at times too pedantic to be useful in practical downstream applications (for example, "air" in "This show will air Saturdays at 2 P.M." and "air" in "We cannot air this X-rated song" are identified to have different meanings).

The unsupervised methods do not suffer from the idiosyncrasies of linguistic resources, but are inherently more challenging to pursue since they only rely on the latent structures of the word senses embedded inside their contexts. Existing unsupervised approaches can be divided into two categories, based on what aspects of the contexts of target words are used: (a) global structures of contexts and (b) local structures of contexts.

Global structure: Arora et al. (2016b) hypothesizes that the global word representation is a linear combination of its sense vectors. This linear algebraic hypothesis is validated by a surprising experiment wherein a single artificial polysemous word is created by merging two random words. The experiment is ingenious and the finding quite surprising, but was under a restricted setting: a single artificial polysemous word is created by merging only two random words. Upon enlarging these parameters (i.e., many artificial polysemous words are created by merging multiple random words) to better suit the landscape of polysemy in natural language, we find the linear-algebraic hypothesis to be fragile: Figure 3 plots the linearity fit as a function of the number of artificial polysemous words created, and also as a function of how many words were merged to create any polysemous word. We see that the linearity fit worsens fairly quickly as the number of polysemous words increases, a scenario that is typical of natural languages.

[Figure 3: a plot of the linearity fit versus the number of new words (1 to 5,000), with curves for # merged words = 2, 3, 4, 5.]

Figure 3: A synthetic experiment to study the linear algebraic structure of word senses.

The main reason for this effect appears to be that the linearity fit is quite sensitive to the interaction between the word vectors, caused by the polysemous nature of the words. The linear algebraic hypothesis is mathematically justified in Section 5 of Arora et al. (2016b) in terms of the RAND-WALK generative model of Arora et al. (2016a) with three extra simplifications.
If one were to generalize this proof to handle multiple artificial words at the same time, it appears particularly relevant that simplification 2 should continue to hold. This simplification step involved the assumption that, if w_1, ..., w_n are the random words being merged, then (a) w_i and w_j do not occur together in a context window for any i ≠ j, and (b) any other word k can only occur with a single one of the w_i's in all context windows. This simplification step clearly no longer holds when n increases, and especially so when n nears the size of the vocabulary. However, this latter scenario (of n being the size of the vocabulary) is the very basis of the sparse-coding algorithm proposed in (Arora et al., 2016b), where the latent structure of the multiple senses is modeled as a corpus of discourse atoms, where every atom interacts with all the others.

The experiment, whose results are depicted in Figure 3, is designed to mimic these underlying simplifications of the proof in Arora et al. (2016b): we train word vectors via the skip-gram version of word2vec using the following steps. (a) We initialize the newly generated artificial polysemous words by random vectors; (b) we initialize, and do not update, the (two sets of) vector representations of other words k by the existing word vectors. The embeddings are learnt on the 2016/06/01 Wikipedia dump, tokenized via WikiExtractor (http://medialab.di.unipi.it/wiki/Wikipedia_Extractor); words that occur less than 1,000 times are ignored; words being merged are chosen randomly in proportion to their frequencies. Due to computational constraints, each instance of mergers is subjected to a single run of the word2vec algorithm.

Local structure: Huang et al. (2012); Neelakantan et al. (2015) model a context by the average of its constituent word embeddings and use this average vector as a feature to induce word senses by partitioning context instances into groups and to disambiguate word senses for new context instances. Li & Jurafsky (2015) models the senses for a target word in a given context by a Chinese restaurant process, models the contexts also by averaging their constituent word embeddings, and then applies standard word embedding algorithms (continuous bag-of-words (CBOW) or skip-gram). Our approach is broadly similar in spirit to these approaches, in that a local lexical-level model is conceived, but we depart in several ways, the most prominent one being the modeling of the contexts as subspaces (and not as vectors, which is what an average of constituent word embeddings would entail).

[Figure 4: histograms of frequency versus variance ratio for N = 3, 4, 5.]

Figure 4: An experiment to study the low rank structure of context word embeddings. Histograms of the variance ratio are plotted for rank-N PCAs of the context embeddings.

Given a random word and a set of its contexts (culled from the set of all sentences where the target word appears), we use principal component analysis (PCA) to project the context word embeddings for every context into an N-dimensional subspace and measure the low-dimensional nature of context word embeddings. We randomly sampled 500 words whose occurrence (frequency) is larger than 10,000, extracted their contexts from Wikipedia, and plotted the histogram of the variance ratios captured by rank-N PCA in Figure 4 for N = 3, 4, 5 (a code sketch of the subspace construction appears below).
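A minimal sketch of this measurement follows. It assumes a dictionary `vecs` mapping words to their d-dimensional embeddings, and uses scikit-learn's PCA (which centers the samples) as a stand-in for the PCA routine; all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def context_subspace(context_words, vecs, rank=3):
    # Represent a context by the span of the top-N principal directions of
    # its word embeddings, and report the variance ratio they capture
    # (the quantity histogrammed in Figure 4).
    X = np.stack([vecs[w] for w in context_words])
    pca = PCA(n_components=rank).fit(X)
    basis = pca.components_            # (rank, d), orthonormal rows spanning S(c)
    return basis, float(pca.explained_variance_ratio_.sum())
```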
We make the following observations: even rank-3 PCA captures at least 45% of the energy (i.e., variance ratio) of the context word representations, and rank-4 PCA captures at least half of the energy almost surely. As a comparison, we note that the average number of context words is roughly 21, and a rank-4 PCA over a random collection of 21 words would be expected to capture only 20% of the energy (this calculation is justified because word vectors have been observed to possess a spatial isotropy property (Arora et al., 2016a)). All word vectors were trained on the Wikipedia corpus with dimension d = 300 using the skip-gram version of word2vec (Mikolov et al., 2013).

This experiment immediately suggests the low-dimensional nature of contexts, and that contexts should be represented in the space of subspaces, i.e., on the Grassmannian manifold.

We illustrate the intersection phenomenon via another experiment. Consider a monosemous word "typhoon" and consider all contexts in the Wikipedia corpus where this word appears (there are 14,829 contexts). We represent each context by the rank-N PCA subspace of all the vectors (with N = 3) associated with the words in the context and consider their intersection. Each of these subspaces is 3-dimensional (where d = 300 is the dimension of the word vectors). We find that the cosine similarity (normalized projection distance) between the vector associated with "typhoon" and each context subspace is consistently large: the average is 0.693 with standard deviation 0.211. For comparison, we randomly sample 14,829 contexts and find the average is 0.305 with standard deviation 0.041 (a detailed histogram is also provided in Figure 5(a)). This corroborates the hypothesis that the target word vector is in the intersection of the context subspaces.

[Figure 5: (a) histograms of the cosine similarity for Wikipedia contexts of "typhoon" versus random contexts; (b) a 3-dimensional visualization of the context subspaces.]

Figure 5: The geometry of contexts for monosemy.

Visualization of Intersection Hypothesis A visual representation of this geometric phenomenon is in Figure 5(b), where we have projected the d-dimensional word representations into 3-dimensional vectors and use these 3-dimensional word vectors to get the subspaces for the four contexts (we set N = 2 for visualization) in Table 4 of the target word "typhoon", and plot the subspaces as 2-dimensional planes. From Figure 5, we can see that all the context subspaces roughly intersect at a common direction, thus empirically justifying the intersection hypothesis.

- powerful typhoon that affected southern japan in july it was the sixth named storm and second typhoon of the pacific typhoon season originating from an area of low pressure near wake island on july the precursor to maon gradually developed
- typhoon ineng was a powerful typhoon that affected southern japan in july it was the sixth named storm and second typhoon of the pacific typhoon season originating from an area of low pressure near wake island on july the precursor
- crossing through a gulf of the south china sea patsy weakened to a mph kmh tropical storm before the joint typhoon warning center ceased following the system on the morning of august as it made landfall near the city of bay were completely wiped out while all airplanes at the local airports were damaged
- this is the first active pacific typhoon season on record typhoon barbara formed on march and moved west it strengthened briefly to a category three with

Table 4: Contexts containing a monosemous word "typhoon"
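The cosine similarity between a word vector and a context subspace used in the test above can be computed as follows; this is a small sketch under the convention that `basis` holds orthonormal rows spanning the subspace (e.g., the output of the PCA sketch earlier), and the helper name is illustrative.

```python
import numpy as np

def subspace_cosine(v, basis):
    # Cosine similarity between a word vector v and a subspace: the length
    # of the projection of the unit vector v / ||v|| onto span(basis).
    u = v / np.linalg.norm(v)
    return float(np.linalg.norm(basis @ u))
```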
"}, {"section_index": "12", "section_name": "EMPIRICAL VALIDATION OF THE POLYSEMY HYPOTHESIS", "section_text": "The intuition behind the polysemy hypothesis is validated by the following experiment, which continues on the same theme as the one done for the monosemous word "typhoon". Now we study the geometry of contexts for a polysemous word "crane", which can either mean a large, tall machine used for moving heavy objects or a tall, long-legged, long-necked bird. We list four contexts for each sense of "crane" in Table 5, repeat the experiment as conducted above for the monosemous word "typhoon", and visualize the context subspaces for the two senses in Figures 6(a) and 6(b) respectively. Figure 6(c) plots the directions of the two intersections. This immediately suggests that the contexts where "crane" stands for a bird intersect at one direction, and the contexts where "crane" stands for a machine intersect at a different direction, as visualized in 3 dimensions.

[Figure 6 panels: (a) crane: machine, (b) crane: bird, (c) intersection.]

Figure 6: Geometry of contexts for a polysemous word "crane": (a) all contexts where "crane" means a machine roughly intersect at one direction; (b) all contexts where "crane" means a bird roughly intersect at another direction; (c) two directions representing "crane" as a machine and as a bird.

Table 5: Contexts containing a polysemous word "crane""}, {"section_index": "13", "section_name": "E A QUALITATIVE STUDY OF K-GRASSMEANS", "section_text": "To get a qualitative feel of this algorithm on real data, we consider an example target word "columbia" with K = 5 senses. We considered 100K sentences, extracted from the Wikipedia corpus. The goal of sense induction is to partition the set of contexts into 5 groups, so that within each group the target word "columbia" has the same sense. We run K-Grassmeans for this target word and extract the intersection vectors u_1, ..., u_K for K = 5. One sample sentence for each group is given in Table 6 as an example, from which we can see that the first group corresponds to British Columbia in Canada, the second one corresponds to Columbia Records, the third one corresponds to Columbia University in New York, the fourth one corresponds to the District of Columbia, and the fifth one corresponds to Columbia River.

Group 1, context (a): research centres in canada it is located on patricia bay and the former british columbia highway a in sidney british columbia vancouver island just west of victoria international airport the institute is paired with a canadian coast guard base
Group 2, context (b): her big break performing on the arthur godfrey show and had since then released a series of successful singles through columbia records hechtlancaster productions first published the music from their film marty in april and june through cromwell music this
Group 3, context (c): fellow at the merrill school of journalism university of maryland college park in she was a visiting scholar at the columbia university center for the study of human rights in haddad completed a master of international policy
Group 4, context (d): signed into law by president benjamin harrison in march a site for the new national conservatory in the district of columbia was never selected much less built the school continued to function in new york city existing solely from philanthropy
Group 5, context (e): in cowlitz county washington the caples community is located west of woodland along caples road on the east shore of columbia river and across the river from columbia city oregon the caples community is part of the woodland school district

Table 6: Semantics of 5 groups for target word "columbia"

The performance of K-Grassmeans in the context of the target word "columbia" can also be understood via disambiguation. We apply our hard decoding algorithm (4) and our soft decoding algorithm (5) on the five sentences listed in Table 6; the optimal k*'s returned by the hard decoding algorithm and the probability distributions P(w, c, k) returned by the soft decoding algorithm are provided in Table 7. From Table 7 we can see that even though the hard decoding algorithm outputs the correct label, some information is missing if we return a single label k*. For example, since we take a bag-of-words model in K-Grassmeans, some words (e.g. "school" and "new york city" in context (d) provided in Table 6) suggest that the meaning of "columbia" in this instance might also be Columbia University. The effect of those words is reflected in the probability distribution returned by the soft decoding algorithm, where we can see the probability that "columbia" in this instance means Columbia University is around 0.13. The misleading result mainly comes from the bag-of-words model, and how to resolve it remains open.

Table 7: Hard decoding and soft decoding for disambiguation of "columbia" in the five sentences given in Table 6.

Context No. | hard decoding (k*) | soft decoding P(w, c, k): k = 1, k = 2, k = 3, k = 4, k = 5
(a) | 1 | 0.81, 0.01, 0.01, 0.05, 0.13
(b) | 2 | 0.02, 0.92, 0.01, 0.04, 0.01
(c) | 3 | 0.01, 0.00, 0.91, 0.06, 0.01
(d) | 4 | 0.07, 0.03, 0.13, 0.70, 0.07
(e) | 5 | 0.04, 0.01, 0.01, 0.05, 0.90
We observe another phenomenon that can support our intersection hypothesis - the intersection vector is close to the word representation vector for many monosemous words. We perform an experiment to directly confirm this phenomenon: we randomly sample 500 monosemous words which occur at least 10,000 times; for each word, we compute the intersection vector and check the cosine similarity between the intersection vector and the corresponding word representation. We find that on average the cosine similarity score is a very high 0.607, with a small standard deviation of 0.095 (a detailed histogram is provided in Figure 7).

[Figure 7: a histogram of frequency versus cosine similarity, over the range 0.3 to 1.0.]

Figure 7: A histogram of the cosine similarities between word representations and intersection vectors for monosemous words.

However, this phenomenon does not hold for polysemous words. It turns out that the intersection vectors for polysemous words are concentrated on a relatively small surface area on the sphere. The histogram of the cosine similarity between two random intersection vectors among 10,000 intersections (five intersections for 2,000 polysemous words) and the histogram of the cosine similarity between two random word vectors among 10,000 word embeddings are provided in Figure 8.

[Figure 8: two histograms of frequency versus cosine similarity.]

(a) A histogram of the cosine similarities between intersection directions. (b) A histogram of the cosine similarities between word representations.

Figure 8: Cosine similarities between intersection directions and word representations, respectively."}, {"section_index": "14", "section_name": "G A DETAILED DESCRIPTION OF WSI DATASETS", "section_text": "We test our induction algorithm, K-Grassmeans, on two datasets - one standard from SemEval-2010 and the other custom-built.

SemEval-2010: The test set of SemEval-2010 shared task 14 (Manandhar et al., 2010) contains 50 polysemous nouns and 50 polysemous verbs whose senses are extracted from OntoNotes (Hovy et al., 2006), and in total 8,915 instances are expected to be disambiguated. The context instances are extracted from various sources including CNN and ABC.

Makes-Sense-2016: Several word senses from SemEval-2010 are too fine-grained in our view (no performance results on tests with native human speakers are provided in the literature) - this creates "noise" that reduces the performance of all the algorithms, and the required senses are perhaps not that useful to downstream applications. For example, "guarantee" (as a noun) is labeled to have four different meanings in the following sentences:

- It has provided a legal guarantee to the development of China's Red Cross cause and connections with the International Red Cross movement, signifying that China's Red Cross cause has entered a new historical phase.
- Some hotels in the hurricane-stricken Caribbean promise money-back guarantees.
- Many agencies roll over their debt, paying off delinquent loans by issuing new loans, or converting defaulted loan guarantees into direct loans.

However, in general they all mean "a formal promise or pledge". Towards a more human-interpretable version of the WSI task, we custom-build a dataset whose senses are coarser and clearer.
We do this by repurposing a recent dataset created in (Arora et al., 2016b) as part of their "police lineup" task. Our dataset contains 50 polysemous words, together with their senses (on average 2.78 senses per word) borrowed from (Arora et al., 2016b). We generate the testing instances for a target word by extracting all occurrences of it in the Wikipedia corpus, analyzing its Wikipedia annotations (if any), grouping those which have the same annotations, and finally merging annotations where the target word shares the same sense. Since the senses are quite readily distinguishable from the perspective of native/fluent speakers of English, the disambiguation variability among the human raters we tested our dataset on is negligible (this effect is also seen in Figure 6 of (Arora et al., 2016b))."}, {"section_index": "15", "section_name": "ALGORITHMS FOR POLICE LINEUP TASK", "section_text": "We first introduce the baseline algorithm for word2vec and our algorithm, as given in Algorithms 1 and 2. Both algorithms are motivated by the algorithm in (Arora et al., 2016b). For the baseline, the similarity score between the target word w and a candidate sense set L_i is

score(w, L_i) = (1/|L_i|) Σ_{w'∈L_i} (v(w)^T v(w'))² / ( Σ_{w''∈V} (v(w'')^T v(w'))² ).

Algorithm 1: The baseline algorithm with word2vec for Princeton Police Lineup Task.

In our algorithm, the similarity score can be thought of as a mean value of the word similarities between a target word w and a definition word w' in the given sense L; we take the power mean with p = 2. Our algorithm can be adapted to a different choice of p, i.e.,

score_p(s_k(w), L) = ( (1/|L|) Σ_{w'∈L} [ (v_k(w)^T v(w'))² / Σ_{w''∈V} (v(w'')^T v(w'))² ]^{p/2} )^{1/p}.

Different choices of p lead to different preferences over the similarities between w and w' ∈ L; generally speaking, larger weight is put on the most relevant words with larger p:

If we take p = 1, score_1(s_k(w), L) is simply an average of the similarities;

If we take p = ∞, score_∞(s_k(w), L) measures the similarity between w and L by the similarity between w and the most relevant word w' ∈ L, i.e.,

score_∞(s_k(w), L) = max_{w'∈L} v_k(w)^T v(w').

In our case we take p = 2 to allow enough (but not too much) influence from the most relevant words, i.e.,

score(v_k(w), L) = ( (1/|L|) Σ_{w'∈L} (v_k(w)^T v(w'))² / Σ_{w''∈V} (v(w'')^T v(w'))² )^{1/2}.

Algorithm 2: Our algorithm for Princeton Police Lineup Task.

Both our algorithm and the algorithm in (Arora et al., 2016b) do not take into account that one atom represents one sense of the target word, and thus some atoms might generate two senses in the output k candidates. A more sophisticated algorithm is required to address this issue.
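A minimal sketch of the power-mean scoring is below. For simplicity it uses a plain cosine similarity per definition word in place of the vocabulary-normalized similarity above (an assumption made only to keep the sketch self-contained); names are illustrative.

```python
import numpy as np

def power_mean_score(vk, sense_words, vecs, p=2):
    # Power-mean similarity between a lexeme vector v_k(w) and a sense
    # definition set L; p = 2 in our experiments, p = inf takes only the
    # most relevant definition word.
    def cos(u, v):
        return abs(float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))
    sims = np.array([cos(vk, vecs[w]) for w in sense_words])
    if np.isinf(p):
        return float(sims.max())
    return float(np.mean(sims ** p) ** (1.0 / p))
```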
"}, {"section_index": "16", "section_name": "I FUTURE WORK", "section_text": "Several new avenues of research in natural language representations arise from the ideas in this work and we discuss a few items below.

1. Interactions between Polysemous Words: One of the findings of this work, via the experiments in Appendix A, is that polysemous words interact with each other in the corpus. One natural way to harness these interactions, and hence to sharpen K-Grassmeans, is to do an iterative labeling procedure. Both hard decoding and soft decoding (discussed in Section 2.3) can benefit from iterations. In hard decoding, the "IDK" labels may be resolved over multiple iterations since (a) the rare senses can become dominant senses once the major senses are already labeled, and (b) a confusing sense can be disambiguated once the polysemous words in its context are disambiguated. In soft decoding, the probability can be expected to concentrate on one sense since each iteration yields yet more precise context word embeddings. This hypothesis is inspired by the success of such procedures inherent in the message passing algorithms for turbo and LDPC codes in reliable wireless communication, which share a fair bit of commonality with the setting of polysemy disambiguation (Richardson & Urbanke, 2008). A quantitative tool to measure the disambiguation improvements from iterations and when to stop the iterative process (akin to the EXIT charts for message passing iterative decoders of LDPC codes (Ten Brink, 2001)) is an interesting research direction, as is a detailed algorithm design of the iterative decoding and its implementation; both of these are beyond the scope of this paper.

2. Low Dimensional Context Representation: A surprising finding of this work is that contexts (sentences) that contain a common target word tend to reside in a low-dimensional subspace, as justified via empirical observations in Figure 4. Understanding this geometrical phenomenon in the context of a generative model (for instance, RAND-WALK of (Arora et al., 2016a) is not able to explain this) is a basic problem of interest, with several relevant applications (including language modeling (Iyer et al., 1994; Chen & Goodman, 1996) and semantic parsing of sentences (textual entailment, for example (Dagan et al., 2006; Bar-Haim et al., 2006; Giampiccolo et al., 2007))). Such an understanding could also provide new ideas for the topical subject of representing sentences and paragraphs (Le & Mikolov, 2014; Wieting et al., 2015; Kenter et al., 2016; Kusner et al., 2015) and eventually combining with document/topic modeling methods such as Latent Dirichlet Allocation (Blei et al., 2003).

3. Combining Linguistic Resources: Presently the methods of lexeme representation are either exclusively external-resource-based or entirely unsupervised. The unsupervised method of K-Grassmeans reveals robust clusters of senses (and also provides a soft score measuring the robustness (in terms of how frequent the sense is and how sharply/crisply it is used) of the identified sense). On the other hand, WordNet lists a very detailed number of senses, some frequent and robust but many others very fine-grained; the lack of any accompanying metric that relates to the frequency and robustness of a sense (which could potentially be domain/corpus specific) makes this resource hard to make computational use of, at least within the context of polysemy representations. An interesting research direction would be to try to combine K-Grassmeans and existing linguistic resources to automatically define senses of multiple granularities, along with metrics relating to frequency and robustness of the identified senses.

The pseudocode for context representation (c.f. Section 2.1) is provided in Algorithm 3.

Algorithm 3: The algorithm for context representation.
Input: a context c, word embeddings v(·), and a PCA rank N.
1  Compute the first N principal components of the samples {v(w'), w' ∈ c}: u_1, ..., u_N ← PCA({v(w'), w' ∈ c});
2  S ← { Σ_{n=1}^N α_n u_n : α_n ∈ ℝ }.
Output: the N orthonormal basis vectors u_1, ..., u_N and the subspace S.

The pseudocode for word sense induction (c.f. Section 2.2) is provided in Algorithm 4.

Algorithm 4: Word sense induction algorithm with a given number of senses.
repeat
   (assignment) S_k ← { c_m : d(u_k, S(c_m\w)) ≤ d(u_{k'}, S(c_m\w)) ∀k' };
   (update) u_k ← argmin_{u: ‖u‖=1} Σ_{c∈S_k} d²(u, S(c\w));
   L ← Σ_{k=1}^K Σ_{c∈S_k} d²(u_k, S(c\w));
until L converges.

To ensure good performance, we randomize the intersection directions with multiple different seeds and output the best one in terms of the objective function L; this step is analogous to the random initialization conducted in kmeans++ in the classical clustering literature (Ostrovsky et al., 2006; Arthur & Vassilvitskii, 2007).
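To make the alternating minimization in Algorithm 4 concrete, here is a minimal NumPy sketch. It uses the fact that, for a unit vector u and a subspace with orthonormal basis B, d²(u, S) = 1 − ‖Bu‖², so the update step has a closed form as the top eigenvector of the summed projection matrices; the simple initialization (a single random seed rather than multiple restarts) and all names are illustrative assumptions.

```python
import numpy as np

def k_grassmeans(bases, K, iters=20, seed=0):
    # bases: list of (N, d) arrays with orthonormal rows, one per context
    # subspace S(c_m \ w), e.g. produced by the PCA sketch in Appendix B.
    rng = np.random.default_rng(seed)
    d = bases[0].shape[1]
    U = rng.normal(size=(K, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # random unit directions
    assign = np.zeros(len(bases), dtype=int)
    for _ in range(iters):
        # assignment step: each context joins its closest intersection direction
        dist2 = np.array([[1.0 - np.sum((B @ u) ** 2) for B in bases] for u in U])
        assign = dist2.argmin(axis=0)
        # update step: minimizing sum of squared distances over unit u equals
        # taking the top eigenvector of the summed projection matrices
        for k in range(K):
            members = [bases[m] for m in np.where(assign == k)[0]]
            if members:
                M = sum(B.T @ B for B in members)
                _, vecs = np.linalg.eigh(M)
                U[k] = vecs[:, -1]
    return U, assign
```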
The pseudocode for the word sense disambiguation (c.f. Section 2.3) is provided in Algorithm 5.

Algorithm 5: The algorithm for sense disambiguation.
Input: a context c, word embeddings v(·), a PCA rank N, a set of intersection directions u_k(w), and a threshold θ.
1  Compute the denoised context subspace, S ← S(c\w);
2  Compute the distances between S(c\w) and the intersections, d_k ← d(u_k, S);
3  if hard decoding then
4     get the closest cluster, k* ← argmin_k d_k;
5     check the threshold:
6     if d_{k*} ≤ θ then
7        return k*;
8     end
9     return IDK;
10 end
11 if soft decoding then
12    compute the probabilities, P(w, c, k) ← exp(−10 d(u_k(w), S(c\w))) / Σ_{k'} exp(−10 d(u_{k'}(w), S(c\w)));
13 end
14 return P(w, c, k) for k = 1, ..., K;
Output: a label (or a distribution over labels) indicating which sense this word takes in this context."}, {"section_index": "17", "section_name": "M V-MEASURE AND PAIRED F-SCORE", "section_text": "Clustering algorithms partition N data points into a set of clusters K = {1, 2, ..., K}. Given the ground truth, i.e., another partition of the data into a set of classes T = {1, 2, ..., T}, the performance of a clustering algorithm is evaluated based on a contingency table A = [a_{tk}], where a_{tk} is the number of data points whose ground-truth label is t and whose algorithm label is k. There are two intrinsic properties of all desirable evaluation measures:

The measure should be permutation-invariant, i.e., the measure should be the same if we permute the labels in K or T.

The measure should encourage intra-cluster similarity and penalize inter-cluster similarity.

V-Measure (Rosenberg & Hirschberg, 2007) and paired F-score (Artiles et al., 2009) are two standard measures, the definitions of which are given below."}, {"section_index": "18", "section_name": "M.1 V-MEASURE", "section_text": "V-Measure is an entropy-based metric, defined as the harmonic mean of homogeneity and completeness. Homogeneity is defined as

h = 1 if H(T) = 0, and h = 1 − H(T|K)/H(T) otherwise, where

H(T|K) = − Σ_{k=1}^K Σ_{t=1}^T (a_{tk}/N) log( a_{tk} / Σ_{t'=1}^T a_{t'k} ),

H(T) = − Σ_{t=1}^T ( Σ_{k=1}^K a_{tk} / N ) log( Σ_{k=1}^K a_{tk} / N ).

Completeness is analogous to homogeneity. Formally, it is defined as

c = 1 if H(K) = 0, and c = 1 − H(K|T)/H(K) otherwise, where

H(K|T) = − Σ_{t=1}^T Σ_{k=1}^K (a_{tk}/N) log( a_{tk} / Σ_{k'=1}^K a_{tk'} ),

H(K) = − Σ_{k=1}^K ( Σ_{t=1}^T a_{tk} / N ) log( Σ_{t=1}^T a_{tk} / N ).

Given h and c, the V-Measure is their harmonic mean, i.e.,

V = 2hc / (h + c).

Paired F-score evaluates clustering performance by converting the clustering into a binary classification problem - given two instances, do they belong to the same cluster or not?

For each cluster k identified by the algorithm, we generate C(Σ_{t=1}^T a_{tk}, 2) instance pairs, and for each ground-truth class t, we generate C(Σ_{k=1}^K a_{tk}, 2) instance pairs, where C(x, 2) denotes "x choose 2". Let F(K) be the set of instance pairs from the algorithm clusters and let F(T) be the set of instance pairs from the ground-truth classes. Precision and recall are defined accordingly:

P = |F(K) ∩ F(T)| / |F(K)|,   R = |F(K) ∩ F(T)| / |F(T)|,

where |F(K)|, |F(T)|, and |F(K) ∩ F(T)| can be computed using the matrix A as below:

|F(K)| = Σ_{k=1}^K C( Σ_{t=1}^T a_{tk}, 2 ),

|F(T)| = Σ_{t=1}^T C( Σ_{k=1}^K a_{tk}, 2 ),

|F(K) ∩ F(T)| = Σ_{t=1}^T Σ_{k=1}^K C( a_{tk}, 2 ).
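Both measures are short computations over the contingency table; a minimal NumPy sketch (function names are illustrative) follows.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def v_measure(A):
    # A[t, k]: number of instances with gold class t and induced cluster k.
    A = np.asarray(A, dtype=float)
    N = A.sum()
    H_T, H_K = entropy(A.sum(1) / N), entropy(A.sum(0) / N)
    H_T_given_K = sum(entropy(A[:, k] / A[:, k].sum()) * A[:, k].sum() / N
                      for k in range(A.shape[1]) if A[:, k].sum() > 0)
    H_K_given_T = sum(entropy(A[t] / A[t].sum()) * A[t].sum() / N
                      for t in range(A.shape[0]) if A[t].sum() > 0)
    h = 1.0 if H_T == 0 else 1.0 - H_T_given_K / H_T   # homogeneity
    c = 1.0 if H_K == 0 else 1.0 - H_K_given_T / H_K   # completeness
    return 2 * h * c / (h + c)

def paired_precision_recall(A):
    A = np.asarray(A, dtype=float)
    n2 = lambda x: x * (x - 1) / 2.0                   # "x choose 2", elementwise
    both = n2(A).sum()                                 # |F(K) ∩ F(T)|
    FK, FT = n2(A.sum(0)).sum(), n2(A.sum(1)).sum()    # |F(K)|, |F(T)|
    return both / FK, both / FT
```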
"}, {"section_index": "19", "section_name": "REFERENCES", "section_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. A latent variable model approach to pmi-based word embeddings. Transactions of the Association for Computational Linguistics, 4:385-399, 2016a. ISSN 2307-387X. URL https://transacl.org/ojs/index.php/tacl/article/view/742.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Linear algebraic structure of word senses, with applications to polysemy. arXiv preprint arXiv:1601.03764, 2016b.

David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, pp. 1027-1035. Society for Industrial and Applied Mathematics, 2007.

Javier Artiles, Enrique Amigo, and Julio Gonzalo. The role of named entities in web people search. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2, pp. 534-542. Association for Computational Linguistics, 2009.

Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine learning challenges: evaluating predictive uncertainty, visual object classification, and recognising textual entailment, pp. 177-190. Springer, 2006.

Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. OntoNotes: the 90% solution. In Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers, pp. 57-60. Association for Computational Linguistics, 2006.

Tom Kenter, Alexey Borisov, and Maarten de Rijke. Siamese cbow: Optimizing word embeddings for sentence representations. arXiv preprint arXiv:1606.04640, 2016.

Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second pascal recognising textual entailment challenge. In Proceedings of the second PASCAL challenges workshop on recognising textual entailment, volume 6, pp. 6-4, 2006.

Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. A unified model for word sense representation and disambiguation. In EMNLP, pp. 1025-1035. Citeseer, 2014.

Suresh Manandhar, Ioannis P Klapaftis, Dmitriy Dligach, and Sameer S Pradhan. Semeval-2010 task 14: Word sense induction & disambiguation. In Proceedings of the 5th international workshop on semantic evaluation, pp. 63-68. Association for Computational Linguistics, 2010.

Tom Richardson and Ruediger Urbanke. Modern coding theory. Cambridge University Press, 2008.

Andrew Rosenberg and Julia Hirschberg. V-measure: A conditional entropy-based external cluster evaluation measure. In EMNLP-CoNLL, volume 7, pp. 410-420, 2007.

Sascha Rothe and Hinrich Schutze. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. arXiv preprint arXiv:1507.01127, 2015.

John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Towards universal paraphrastic sentence embeddings. arXiv preprint arXiv:1511.08198, 2015."}]
Bk8aOm9xl
[{"section_index": "0", "section_name": "SURPRISE-BASED INTRINSIC MOTIVATION FOR DEEP REINFORCEMENT LEARNING", "section_text": "Joshua Achiam & Shankar Sastry

Department of Electrical Engineering and Computer Science, UC Berkeley

jachiam@berkeley.edu, sastry@coe.berkeley.edu

Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as ε-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent's surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A reinforcement learning agent uses experiences obtained from interacting with an unknown environment to learn behavior that maximizes a reward signal. The optimality of the learned behavior is strongly dependent on how the agent approaches the exploration/exploitation trade-off in that environment. If it explores poorly or too little, it may never find rewards from which to learn, and its behavior will always remain suboptimal; if it does find rewards but exploits them too intensely, it may wind up prematurely converging to suboptimal behaviors, and fail to discover more rewarding opportunities. Although substantial theoretical work has been done on optimal exploration strategies for environments with finite state and action spaces, we are here concerned with problems that have continuous state and/or action spaces, where algorithms with theoretical guarantees admit no obvious generalization or are prohibitively impractical to implement.

Simple heuristic methods of exploring, such as ε-greedy action selection and Gaussian control noise, have been successful on a wide range of tasks, but are inadequate when rewards are especially sparse. For example, the Deep Q-Network approach of Mnih et al. [13] used ε-greedy exploration in training deep neural networks to play Atari games directly from raw pixels. On many games, the algorithm resulted in superhuman play; however, on games like Montezuma's Revenge, where rewards are extremely sparse, DQN (and its variants [25], [26], [15], [12]) with ε-greedy exploration failed to achieve scores even at the level of a novice human.
Similarly, in benchmarking deep reinforcement learning for continuous control, Duan et al. [5] found that policy optimization algorithms that explored by acting according to the current stochastic policy, including REINFORCE and Trust Region Policy Optimization (TRPO), could succeed across a diverse slate of simulated robotics control tasks with well-defined, non-sparse reward signals (like rewards proportional to the forward velocity of the robot). Yet, when tested in environments with sparse rewards - where the agent would only be able to attain rewards after first figuring out complex motion primitives without reinforcement - every algorithm failed to attain scores better than random agents. The failure modes in all of these cases pertained to the nature of the exploration: the agents encountered reward signals so infrequently that they were never able to learn reward-seeking behavior."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "One approach to encourage better exploration is via intrinsic motivation, where an agent has a task-independent, often information-theoretic intrinsic reward function which it seeks to maximize in addition to the reward from the environment. Examples of intrinsic motivation include empowerment, where the agent enjoys the level of control it has about its future; surprise, where the agent is excited to see outcomes that run contrary to its understanding of the world; and novelty, where the agent is excited to see new states (which is tightly connected to surprise, as shown in [2]). For in-depth reviews of the different types of intrinsic motivation, we direct the reader to [1] and [17].

Recently, several applications of intrinsic motivation to the deep reinforcement learning setting (such as [2], [7], [22]) have found promising success. In this work, we build on that success by exploring scalable measures of surprise for intrinsic motivation in deep reinforcement learning. We formulate surprise as the KL-divergence of the true transition probability distribution from a transition model which is learned concurrently with the policy, and consider two approximations to this divergence which are easy to compute in practice. One of these approximations results in using the surprisal of a transition as an intrinsic reward; the other results in using a measure of learning progress which is closer to a Bayesian concept of surprise. Our contributions are as follows:

1. we investigate surprisal and learning progress as intrinsic rewards across a wide range of environments in the deep reinforcement learning setting, and demonstrate empirically that the incentives (especially surprisal) result in efficient exploration,

2. we evaluate the difficulty of the slate of sparse reward continuous control tasks introduced by Houthooft et al. [7] to benchmark exploration incentives, and introduce a new task to complement the slate,

3. and we present an efficient method for learning the dynamics model (transition probabilities) concurrently with a policy.

We distinguish our work from prior work in a number of implementation details: unlike Bellemare et al. [2], we learn a transition model as opposed to a state-action occupancy density; unlike Stadie et al. [22], our formulation naturally encompasses environments with stochastic dynamics; unlike Houthooft et al. [7], we avoid the overhead of maintaining a distribution over possible dynamics models, and learn a single deep dynamics model.

In our empirical evaluations, we compare the performance of our proposed intrinsic rewards with other heuristic intrinsic reward schemes and to recent results from the literature. In particular, we compare to Variational Information Maximizing Exploration (VIME) [7], a method which approximately maximizes Bayesian surprise and currently achieves state-of-the-art performance on continuous control with sparse rewards. We show that our incentives can perform on the level of VIME at a lower computational cost.

We begin by introducing notation which we will use throughout the paper. A Markov decision
process (MDP) is a tuple, (S, A, R, P, μ), where S is the set of states, A is the set of actions, R : S × A × S → ℝ is the reward function, P : S × A × S → [0, 1] is the transition probability function (where P(s'|s, a) is the probability of transitioning to state s' given that the previous state was s and the agent took action a in s), and μ : S → [0, 1] is the starting state distribution. A policy π : S × A → [0, 1] is a distribution over actions per state, with π(a|s) the probability of selecting a in state s. We aim to select a policy π which maximizes a performance measure, L(π), which usually takes the form of expected finite-horizon total return (sum of rewards in a fixed time period) or expected infinite-horizon discounted total return (discounted sum of all rewards forever). In this paper, we use the finite-horizon total return formulation.

To train an agent with surprise-based exploration, we alternate between making an update step to a dynamics model (an approximator of the MDP's transition probability function), and making a policy update step that maximizes a trade-off between policy performance and a surprise measure.

The dynamics model step makes progress on the optimization problem

min_φ (1/|D|) Σ_{(s,a,s')∈D} −log P_φ(s'|s, a) + α f(φ),   (1)

where D is a dataset of transition tuples from the environment, P_φ is the model we are learning, f is a regularization function, and α > 0 is a regularization trade-off coefficient. The policy update step makes progress on an approximation to the optimization problem

max_π L(π) + η E_{s,a∼π}[ D_KL(P ‖ P_φ)[s, a] ],   (2)

where η > 0 is an explore-exploit trade-off coefficient. The exploration incentive in (2), which we select to be the on-policy average KL-divergence of P_φ from P, is intended to capture the agent's surprise about its experience. The dynamics model P_φ should only be close to P on regions of the transition state space that the agent has already visited (because those transitions will appear in D and thus the model will be fit to them), and as a result, the KL-divergence of P_φ and P will be higher in unfamiliar places. Essentially, this exploits the generalization in the model to encourage the agent to go where it has not gone before. The surprise incentive in (2) gives the net effect of performing a reward shaping of the form

r'(s, a, s') = r(s, a, s') + η ( log P(s'|s, a) − log P_φ(s'|s, a) ),   (3)

where r(s, a, s') is the original reward and r'(s, a, s') is the transformed reward, so ideally we could solve (2) by applying any reinforcement learning algorithm with these reshaped rewards. In practice, we cannot directly implement this reward reshaping because P is unknown.
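Before turning to the approximations, the model-fitting step (1) is straightforward to express in code. Below is a minimal sketch for the fully-factored Gaussian models used in our experiments; `predict` and `phi_l2` are placeholders (assumptions for illustration) standing for the network's forward pass at (s, a) and the squared norm of its parameters.

```python
import numpy as np

def gaussian_nll(s_next, mu, log_std):
    # -log P_phi(s'|s,a) for a fully-factored Gaussian dynamics model whose
    # mean mu and log standard deviation log_std are network outputs at (s, a).
    var = np.exp(2.0 * log_std)
    return float(np.sum((s_next - mu) ** 2 / (2.0 * var)
                        + log_std + 0.5 * np.log(2.0 * np.pi)))

def model_objective(batch, predict, phi_l2, alpha=1e-2):
    # Empirical version of (1): average NLL over the dataset plus an L2
    # regularizer f(phi) = ||phi||^2 with coefficient alpha.
    nll = np.mean([gaussian_nll(sp, *predict(s, a)) for s, a, sp in batch])
    return nll + alpha * phi_l2
```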
Instead, we consider two ways of finding an approximate solution to (2).

In one method, we approximate the KL-divergence by the cross-entropy, which is reasonable when H(P)[s, a] is finite (and small) and P_φ is sufficiently far from P¹; that is, denoting the cross-entropy by H(P, P_φ)[s, a] = E_{s'∼P(·|s,a)}[−log P_φ(s'|s, a)], we assume

D_KL(P ‖ P_φ)[s, a] = H(P, P_φ)[s, a] − H(P)[s, a] ≈ H(P, P_φ)[s, a].   (4)

Dropping the H(P) term, the reshaped reward (3) then uses the surprisal of a transition as the intrinsic reward:

r'(s, a, s') = r(s, a, s') − η log P_φ(s'|s, a).   (5)

In the other method, we maximize a lower bound on the objective in (2) by lower bounding the surprise term:

D_KL(P ‖ P_φ)[s, a] = D_KL(P ‖ P_{φ'})[s, a] + E_{s'∼P(·|s,a)}[ log ( P_{φ'}(s'|s, a) / P_φ(s'|s, a) ) ]
                    ≥ E_{s'∼P(·|s,a)}[ log P_{φ'}(s'|s, a) − log P_φ(s'|s, a) ],   (6)

which holds for any φ' since the dropped term D_KL(P ‖ P_{φ'})[s, a] is nonnegative. This gives the reward shaping

r'(s, a, s') = r(s, a, s') + η ( log P_{φ'}(s'|s, a) − log P_φ(s'|s, a) );   (7)

in particular, taking φ' = φ_t (the parameters after the most recent update) and φ = φ_{t−k} (the parameters from k updates before) gives

r'(s, a, s') = r(s, a, s') + η ( log P_{φ_t}(s'|s, a) − log P_{φ_{t−k}}(s'|s, a) ),   (8)

the k-step learning progress.

¹ On the other hand, if H(P)[s, a] is non-finite everywhere - for instance if the MDP has continuous states and deterministic transitions - then as long as it has the same sign everywhere, E_{s,a∼π}[H(P)[s, a]] is a constant with respect to π and we can drop it from the optimization problem anyway."}, {"section_index": "3", "section_name": "3.1 DISCUSSION", "section_text": "Ideally, we would like the intrinsic rewards to vanish in the limit as P_φ → P, because in this case the agent should have sufficiently explored the state space, and should primarily learn from extrinsic rewards. For the proposed intrinsic reward in (5), this is not the case, and it may result in poor performance in that limit. The thinking goes that when P_φ = P, the agent will be incentivized to seek out states with the noisiest transitions. However, we argue that this may not be an issue, because the intrinsic motivation seems mostly useful long before the dynamics model is fully learned. As long as the agent is able to find the extrinsic rewards before the intrinsic reward is reduced to just the entropy in P, the pathological noise-seeking behavior should not happen. On the other hand, the intrinsic reward in (8) should not suffer from this pathology, because in the limit, as the dynamics model converges, we should have P_{φ_t} ≈ P_{φ_{t−k}}. Then the intrinsic reward will vanish as desired.

It is also worth comparing our incentives to Bayesian surprise, the divergence of the prior over dynamics models from the posterior after observing a transition:

D_KL( P(φ|h_t, a_t, s_{t+1}) ‖ P(φ|h_t) ).   (9)

Here, P(φ|h_t) is meant to represent a distribution over possible dynamics models parametrized by φ given the preceding history of observed states and actions h_t (so h_t includes s_t), and P(φ|h_t, a_t, s_{t+1}) is the posterior distribution over dynamics models after observing (a_t, s_{t+1}). By Bayes' rule, the dynamics prior and posterior are related to the model-based transition probabilities by

P(φ|h_t, a_t, s_{t+1}) = P(φ|h_t) P(s_{t+1}|h_t, a_t, φ) / E_{φ∼P(·|h_t)}[ P(s_{t+1}|h_t, a_t, φ) ],   (10)

so the Bayesian surprise can be written as

E_{φ∼P_{t+1}}[ log P(s_{t+1}|h_t, a_t, φ) ] − log E_{φ∼P_t}[ P(s_{t+1}|h_t, a_t, φ) ],

where P_{t+1} = P(·|h_t, a_t, s_{t+1}) is the posterior and P_t = P(·|h_t) is the prior. In this form, the resemblance between (9) and (8) is clarified. Although the update from φ_{t−k} to φ_t is not Bayesian - and is performed in batch, instead of per transition sample - we can imagine (8) might contain similar information to (9).

Our implementation uses L2 regularization in the dynamics model fitting, and we impose an additional constraint to keep model iterates close in the KL-divergence sense. Denoting the average divergence by

D̄_KL(P_{φ'} ‖ P_φ) = (1/|D|) Σ_{(s,a)∈D} D_KL(P_{φ'} ‖ P_φ)[s, a],

the model update at iteration i is

φ_{i+1} = argmin_φ (1/|D|) Σ_{(s,a,s')∈D} −log P_φ(s'|s, a) + α‖φ‖²   s.t.   D̄_KL(P_φ ‖ P_{φ_i}) ≤ κ.   (11)

The constraint value κ is a hyperparameter of the algorithm.
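Put into code, the two intrinsic rewards ((5) and (8)) are one-liners; this minimal sketch assumes a `log_prob` callable for each model snapshot (an illustrative placeholder, e.g. built from the Gaussian log-density above).

```python
def surprisal_bonus(s, a, s_next, log_prob):
    # Intrinsic reward for (5): the surprisal -log P_phi(s'|s,a)
    # under the current model.
    return -log_prob(s, a, s_next)

def learning_progress_bonus(s, a, s_next, log_prob_t, log_prob_t_minus_k):
    # Intrinsic reward for (8): k-step learning progress, the gain in
    # log-likelihood of the transition between two model snapshots.
    return log_prob_t(s, a, s_next) - log_prob_t_minus_k(s, a, s_next)
```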
We solve this optimization problem approximately using a single second-order step with a line search, as described by [20]; full details are given in the supplementary material. D is a FIFO replay memory, and at each iteration, instead of using the entirety of D for the update step, we sub-sample a batch d ⊂ D. Also, similarly to [7], we adjust the bonus coefficient η at each iteration, to keep the average bonus magnitude upper-bounded (and usually fixed). Let η₀ denote the desired average bonus, and r+(s, a, s') denote the intrinsic reward; then, at each iteration, we set

η = η₀ / max( 1, (1/|B|) Σ_{(s,a,s')∈B} |r+(s, a, s')| ),

where B is the batch of data used for the policy update step. This normalization improves the stability of the algorithm by keeping the scale of the bonuses fixed with respect to the scale of the extrinsic rewards. Also, in environments where the agent can die, we avoid the possibility of the intrinsic rewards becoming a living cost by translating all bonuses so that the mean is nonnegative. The basic outline of the algorithm is given as Algorithm 1. In all experiments, we use fully-factored Gaussian distributions for the dynamics models, where the means and variances are the outputs of neural networks.

Algorithm 1 Reinforcement Learning with Surprise Incentive
Input: initial policy π₀, dynamics model P_{φ₀}
repeat
    collect rollouts on the current policy π_i
    add rollout (s, a, s') tuples to the replay memory D
    compute reshaped rewards using (5) or (8) with the dynamics model P_{φ_i}
    normalize η by the average intrinsic reward of the current batch of data
    update the policy to π_{i+1} using any RL algorithm with the reshaped rewards
    update the dynamics model to P_{φ_{i+1}} according to (11)
until training is completed
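The outer loop of Algorithm 1, including the η normalization and the nonnegative-mean translation, can be sketched as follows; every callable here (`collect`, `bonus_fn`, `rl_update`, `fit_model`) is an injected placeholder for the corresponding component (rollout collection, one of the bonuses above, a TRPO-style policy step, and the constrained model update (11)).

```python
def train(policy, model, eta0, n_iters, collect, bonus_fn, rl_update, fit_model):
    # Skeleton of Algorithm 1 under the stated placeholder assumptions.
    replay = []                                      # FIFO replay memory D
    for _ in range(n_iters):
        batch = collect(policy)                      # list of (s, a, r, s')
        replay.extend((s, a, sp) for s, a, _, sp in batch)
        bonuses = [bonus_fn(s, a, sp, model) for s, a, _, sp in batch]
        mean_b = sum(bonuses) / len(bonuses)
        eta = eta0 / max(1.0, abs(mean_b))           # keep average bonus ~ eta0
        shift = max(0.0, -mean_b)                    # mean bonus stays nonnegative
        shaped = [(s, a, r + eta * (b + shift), sp)
                  for (s, a, r, sp), b in zip(batch, bonuses)]
        policy = rl_update(policy, shaped)
        model = fit_model(model, replay)             # sub-samples a batch of D
    return policy, model
```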
"}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate our proposed surprise incentives on a wide range of benchmarks that are challenging for naive exploration methods, including continuous control and discrete control tasks. Our continuous control tasks include the slate of sparse reward tasks introduced by Houthooft et al. [7]: sparse MountainCar, sparse CartpoleSwingup, and sparse HalfCheetah, as well as a new sparse reward task that we introduce here: sparse Swimmer. (We refer to these environments with the prefix 'sparse' to differentiate them from other versions which appear in the literature, where agents receive non-sparse reward signals.) Additionally, we evaluate performance on a highly-challenging hierarchical sparse reward task introduced by Duan et al. [5], SwimmerGather. The discrete action tasks are several games from the Atari RAM domain of the OpenAI Gym [4]: Pong, BankHeist, Freeway, and Venture.

Environments with deterministic and stochastic dynamics are represented in our benchmarks: the continuous control domains have deterministic dynamics, while the Gym Atari RAM games have stochastic dynamics. (In the Atari games, actions are repeated for a random number of frames.)

We use Trust Region Policy Optimization (TRPO) [20], a state-of-the-art policy gradient method, as our base reinforcement learning algorithm throughout our experiments, and we use the rllab implementations of TRPO and the continuous control tasks [5]. Full details for the experimental set-up are included in the appendix.

On all tasks, we compare against TRPO without intrinsic rewards, which we refer to as using naive exploration (in contrast to intrinsically motivated exploration). For the continuous control tasks, we also compare against intrinsic motivation using the L2 model prediction error,

r+(s, a, s') = ‖s' − μ_φ(s, a)‖₂,   (12)

where μ_φ is the mean of the learned Gaussian distribution P_φ. The model prediction error was investigated as intrinsic motivation for deep reinforcement learning by Stadie et al. [22], although they used a different method for learning the model μ_φ. This comparison helps us verify whether or not our proposed form of surprise, as a KL-divergence from the true dynamics model, is useful. Additionally, we compare our performance against the performance reported by Houthooft et al. for Variational Information Maximizing Exploration (VIME), a method where the intrinsic reward associated with a transition approximates its Bayesian surprise using variational methods. Currently, VIME has achieved state-of-the-art results on intrinsic motivation for continuous control."}, {"section_index": "5", "section_name": "4.1 CONTINUOUS CONTROL RESULTS", "section_text": "Median performance curves are shown in Figure 1 with interquartile ranges shown in shaded areas. Note that TRPO without intrinsic motivation failed on all tasks: the median score and upper quartile range for naive exploration were zero everywhere. Also note that TRPO with random exploration bonuses failed on most tasks, as shown separately in Figure 2. We found that surprise was not needed to solve MountainCar, but was necessary to perform well on the other tasks.

As a final check for the continuous control tasks, we benchmark the tasks themselves, by measuring the performance of the surprisal bonus without any dynamics learning: r+(s, a, s') = −log P_{φ₀}(s'|s, a), where φ₀ are the original random parameters of P_φ. This allows us to verify whether our benchmark tasks actually require surprise to solve at all, or if random exploration strategies successfully solve them.
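The two baseline bonuses just described - the prediction-error bonus (12) and the no-learning surprisal check - are a few lines each. In this sketch, `log_prob_phi0` is an illustrative placeholder for the log-density of the model frozen at its random initialization.

```python
import numpy as np

def prediction_error_bonus(s_next, mu):
    # (12): the L2 error of the learned mean prediction, the baseline
    # in the spirit of Stadie et al. [22].
    return float(np.linalg.norm(s_next - mu))

def random_surprisal_bonus(s, a, s_next, log_prob_phi0):
    # "Benchmarking the benchmarks": surprisal under a dynamics model whose
    # parameters are frozen at their random initialization phi_0.
    return -log_prob_phi0(s, a, s_next)
```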
[Figure 1 panels: (a) MountainCar, (b) CartpoleSwingup, (c) HalfCheetah, (d) Swimmer, (e) SwimmerGather; curves: TRPO+AKL-1, TRPO+AKL-10, TRPO+NLL, TRPO+PRED, TRPO, TRPO+VIME.]

Figure 1: Median performance for the continuous control tasks over 10 runs with a fixed set of seeds, with interquartile ranges shown in shaded areas. The x-axis is iterations of training; the y-axis is average undiscounted return. AKL-k refers to learning progress (8), NLL to surprisal (5), and PRED to (12). For the first four tasks, η₀ = 0.001; for SwimmerGather, η₀ = 0.0001. Results for VIME are from Houthooft et al. [7], reproduced here with permission. We note that the performance curve for VIME in the SwimmerGather environment represents only 2 random seeds, not 10.

[Figure 2 panels: (a) MountainCar, (b) CartpoleSwingup, (c) HalfCheetah, (d) Swimmer, (e) SwimmerGather; curve: TRPO+RAN.]

Figure 2: Benchmarking the benchmarks: median performance for the continuous control tasks over 10 runs with a fixed set of seeds, with interquartile ranges shown in shaded areas, using the surprisal-without-learning bonus. RAN refers to the fact that this is essentially a random exploration bonus.

The surprisal bonus was especially robust across tasks, achieving good results in all domains and substantially exceeding the other baselines on the more challenging ones. The learning progress bonus for k = 1 was successful on CartpoleSwingup and HalfCheetah, but it faltered in the others. Its weak performance in MountainCar was due to premature convergence of the dynamics model, which resulted in the agent receiving intrinsic rewards that were identically zero. (Given the simplicity of the environment, it is not surprising that the dynamics model converged so quickly.) In Swimmer, however, it seems that the learning progress bonuses did not inspire sufficient exploration. Because the Swimmer environment is effectively a stepping stone to the harder SwimmerGather, where the agent has to learn a motion primitive and collect target pellets, on SwimmerGather we only evaluate the intrinsic rewards that had been successful on Swimmer.

Both surprisal and learning progress (with k = 1) exceeded the reported performance of VIME on HalfCheetah by learning to solve the task more quickly. On CartpoleSwingup, however, both were more susceptible to getting stuck in locally optimal policies, resulting in lower median scores than VIME.
Our results suggest that surprisal is a viable alternative to VIME in terms of performance, and is highly favorable in terms of computational cost. In VIME, a backwards pass through the dynamics model must be computed for every transition tuple separately to compute the intrinsic rewards, whereas our surprisal bonus only requires forward passes through the dynamics model for intrinsic reward computation. (Limitations of current deep learning tool kits make it difficult to efficiently compute separate backwards passes, whereas almost all of them support highly parallel forward computations.) Furthermore, our dynamics model is substantially simpler than the Bayesian neural network dynamics model of VIME. To illustrate this point, in Figure 3 we show the results of a speed comparison making use of the open-source VIME code [6], with the settings described in the VIME paper. In our speed test, our bonus had a per-iteration speedup of a factor of 3 over VIME;² we give a full analysis of the potential speedup in Appendix C.

[Figure 4: median performance curves for (a) Pong-RAM, (b) BankHeist-RAM, (c) Freeway-RAM, and (d) Venture-RAM, comparing TRPO+AKL-1, TRPO+AKL-10, TRPO+NLL, and TRPO.]

Figure 4: Median performance for the Atari RAM tasks over 10 runs with a fixed set of seeds, with interquartile ranges shown in shaded areas. The x-axis is iterations of training; the y-axis is average undiscounted return. AKL-k refers to learning progress (8), and NLL to surprisal (5).

"}, {"section_index": "6", "section_name": "4.2 ATARI RAM DOMAIN RESULTS", "section_text": "In Pong, naive exploration naturally succeeds, so we are not surprised to see that intrinsic motivation does not improve performance. However, this serves as a sanity check to verify that our intrinsic rewards do not degrade performance. (As an aside, we note that the performance here falls short of the standard score of 20 for this domain because we truncate play at 5000 timesteps.)

In BankHeist, we find that intrinsic motivation accelerates the learning significantly. The agents with surprisal incentives reached high levels of performance (scores > 1000) 10% sooner than naive exploration, while agents with learning progress incentives reached high levels almost 20% sooner.

In Freeway, the median performance for TRPO without intrinsic motivation was adequate, but the lower quartile range was quite poor: only 6 out of 10 runs ever found rewards. With the learning progress incentives, 8 out of 10 runs found rewards; with the surprisal incentive, all 10 did. Freeway is a game with very sparse rewards, where the agent effectively has to cross a long hallway before it can score a point, so naive exploration tends to exhibit random walk behavior and only rarely reaches the reward state. The intrinsic motivation helps the agent explore more purposefully.
Speed test results (averages over 5 random runs):

                              VIME           Surprisal (TRPO+NLL)   No Bonus (TRPO)
Avg. Initialization Time      3 min, 52 s    0 min, 30 s            0 min, 13 s
Avg. Time to 15 Iterations    6 min, 21 s    3 min, 23 s            1 min, 51 s

Figure 3: Speed test: comparing the performance of VIME against our proposed intrinsic reward schemes, average compute time over 5 random runs. Tests were run on a Thinkpad T440p with four physical Intel i7-4700MQ cores, in the sparse HalfCheetah environment. VIME's greater initialization time, which is primarily spent in computation graph compilation, reflects the complexity of the Bayesian neural network model.

²We compute this by comparing the marginal time cost incurred just by the bonus in each case: that is, if $T_{vime}$, $T_{surprisal}$, and $T_{nobonus}$ denote the times to 15 iterations, we obtain the speedup as $(T_{vime} - T_{nobonus}) / (T_{surprisal} - T_{nobonus})$.

In Venture, we obtain our strongest results in the Atari domain. Venture is extremely difficult because the agent has to navigate a large map to find very sparse rewards, and the agent can be killed by enemies interspersed throughout. We found that our intrinsic rewards were able to substantially improve performance over naive exploration in this challenging environment. Here, the best performance was again obtained by the surprisal incentive, which usually inspired the agent to reach scores greater than 500.

"}, {"section_index": "7", "section_name": "4.3 COMPARING INCENTIVES", "section_text": "Among our proposed incentives, we found that surprisal worked the best overall, achieving the most consistent performance across tasks. The learning progress-based incentives worked well on some domains, but generally not as well as surprisal. Interestingly, learning progress with k = 10 performed much worse on the continuous control tasks than with k = 1, but we observed virtually no difference in their performance on the Atari games; it is unclear why this should be the case.

Surprisal strongly outperformed the L2 error based incentive on the harder continuous control tasks, learning to solve them more quickly and without forgetting. Because we used fully-factored Gaussians for all of our dynamics models, the surprisal had the form

$$-\log P_\phi(s'|s,a) = \sum_{i=1}^{n} \left[ \frac{\big(s'_i - \mu_{\phi,i}(s,a)\big)^2}{2\sigma_{\phi,i}^2(s,a)} + \log \sigma_{\phi,i}(s,a) \right] + \text{const},$$

which essentially includes the L2-squared error norm as a sub-expression. The relative difference in performance suggests that the variance terms confer additional useful information about the novelty of a state-action pair.

Recently, several intrinsic motivation strategies that deal specifically with deep reinforcement learning have been proposed. Stadie et al. [22] learn deterministic dynamics models by minimizing Euclidean loss (whereas in our work, we learn stochastic dynamics with cross entropy loss) and use L2 prediction errors for intrinsic motivation. Houthooft et al. [7] train Bayesian neural networks to approximate posterior distributions over dynamics models given observed data, by maximizing a variational lower bound; they then use second-order approximations of the Bayesian surprise as intrinsic motivation. Bellemare et al. [2] derived pseudo-counts from CTS density models over states and used those to form intrinsic rewards, notably resulting in dramatic performance improvement on Montezuma's Revenge, one of the hardest games in the Atari domain. Mohamed and Rezende [14] developed a scalable method of approximating empowerment, the mutual information between an agent's actions and the future state of the environment, using variational methods.
Oh et al. [16] estimated state visit frequency using Gaussian kernels to compare against a replay memory, and used these estimates for directed exploration.

Substantial theoretical work has been done on optimal exploration in finite MDPs, resulting in algorithms such as E3 [10], R-max [3], and UCRL [9], which scale polynomially with MDP size. However, these works do not permit obvious generalizations to MDPs with continuous state and action spaces. C-PACE [18] provides a theoretical foundation for PAC-optimal exploration in MDPs with continuous state spaces, but it requires a metric on state spaces. Lopes et al. [11] investigated exploration driven by learning progress and proved theoretical guarantees for their approach in the finite MDP case, but they did not address the question of scaling their approach to continuous or high-dimensional MDPs. Also, although they formulated learning progress in the same way as (8), they formed intrinsic rewards differently. Conceptually and mathematically, our work is closest to prior work on curiosity and surprise [8, 19, 23, 24], although these works focus mainly on small finite MDPs.

In this work, we formulated surprise for intrinsic motivation as the KL-divergence of the true transition probabilities from learned model probabilities, and derived two approximations, surprisal and k-step learning progress, that are scalable, computationally inexpensive, and suitable for application to high-dimensional and continuous control tasks. We showed that empirically, motivation by surprisal and 1-step learning progress resulted in efficient exploration on several hard deep reinforcement learning benchmarks. In particular, we found that surprisal was a robust and effective intrinsic motivator, outperforming other heuristics on a wide range of tasks, and competitive with the current state-of-the-art for intrinsic motivation in continuous control.

"}, {"section_index": "8", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Rein Houthooft for interesting discussions and for sharing data from the original VIME experiments. We also thank Rocky Duan, Carlos Florensa, Vicenc Rubies-Royo, Dexter Scobee, and Eric Mazumdar for insightful discussions and reviews of the preliminary manuscript.

This work is supported by TRUST (Team for Research in Ubiquitous Secure Technology), which receives support from NSF (award number CCF-0424422).

"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "[1] Andrew Barto, Marco Mirolli, and Gianluca Baldassarre. Novelty or Surprise? Frontiers in Psychology, 4(DEC), 2013.
[2] Marc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying Count-Based Exploration and Intrinsic Motivation. arXiv, 2016.
[3] Ronen I. Brafman and Moshe Tennenholtz. R-max - A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning. Journal of Machine Learning Research, 3:213-231, 2002.
[4] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. 2016.
[5] Yan Duan, Xi Chen, John Schulman, and Pieter Abbeel. Benchmarking Deep Reinforcement Learning for Continuous Control. In The 33rd International Conference on Machine Learning (ICML 2016), 2016.
[6] Rein Houthooft. VIME Open-Source Code. https://github.com/openai/vime, 2016.
[7] Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. Variational Information Maximizing Exploration. In Advances in Neural Information Processing Systems (NIPS 2016), 2016.
[8] Laurent Itti and Pierre Baldi. Bayesian surprise attracts human attention. Vision Research, 49(10):1295-1306, 2009.
[9] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal Regret Bounds for Reinforcement Learning. Journal of Machine Learning Research, 11(1):1563-1600, 2010.
[10] Michael Kearns and Satinder Singh. Near Optimal Reinforcement Learning in Polynomial Time. In Proceedings of the 15th International Conference on Machine Learning, pages 260-268, 1998.
[11] Manuel Lopes, Tobias Lang, Marc Toussaint, and Pierre-Yves Oudeyer. Exploration in model-based reinforcement learning by empirically estimating learning progress. In Advances in Neural Information Processing Systems (NIPS 2012), 2012.
[12] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. In ICML, 2016.
[13] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[14] Shakir Mohamed and Danilo J. Rezende. Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS 2015), 2015.
[15] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, and David Silver. Massively Parallel Methods for Deep Reinforcement Learning. ICML Deep Learning Workshop 2015, 2015.
[16] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, and Satinder Singh. Action-Conditional Video Prediction using Deep Networks in Atari Games. In NIPS 2015, 2015.
[17] Pierre-Yves Oudeyer and Frederic Kaplan. How can we define intrinsic motivation? In 8th International Conference on Epigenetic Robotics, 2008.
[18] Jason Pazis and Ronald Parr. PAC Optimal Exploration in Continuous Space Markov Decision Processes. pages 774-781, 2013.
[19] Jurgen Schmidhuber. Curious Model-Building Control Systems. International Joint Conference on Neural Networks, 2:1458-1463, 1991.
[20] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust Region Policy Optimization. In ICML, 2015.
[21] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-Dimensional Continuous Control Using Generalized Advantage Estimation. In ICLR, 2016.
[22] Bradly C. Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models. arXiv, 2015.
[23] Jan Storck, Sepp Hochreiter, and Jurgen Schmidhuber. Reinforcement driven information acquisition in non-deterministic environments. In Proceedings of the International ..., volume 2, pages 159-164, 1995.
[24] Yi Sun, Faustino Gomez, and Jurgen Schmidhuber. Planning to be surprised: Optimal Bayesian exploration in dynamic environments. In International Conference on Artificial General Intelligence, volume 6830 LNAI, pages 41-51, 2011.
[25] Hado van Hasselt, Arthur Guez, and David Silver. Deep Reinforcement Learning with Double Q-learning. In AAAI, 2016.
[26] Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling Network Architectures for Deep Reinforcement Learning. arXiv, 2016.

"}, {"section_index": "10", "section_name": "A SINGLE STEP SECOND-ORDER OPTIMIZATION", "section_text": "In our experiments, we approximately solve several optimization problems by using a single second-order step with a line search. This section will describe the exact methodology, which was originally given by Schulman et al. [20].

We consider the optimization problem

$$p^* = \max_\theta L(\theta) \;:\; D(\theta) \le \delta, \tag{13}$$

where $\theta \in \mathbb{R}^n$, and for some $\theta_{old}$ we have $D(\theta_{old}) = 0$, $\nabla_\theta D(\theta_{old}) = 0$, and $\nabla^2_\theta D(\theta_{old}) \succeq 0$; also, $\forall \theta$, $D(\theta) \ge 0$.

We suppose that $\delta$ is small, so the optimal point will be close to $\theta_{old}$. We also suppose that the curvature of the constraint is much greater than the curvature of the objective. As a result, we feel justified in approximating the objective to linear order and the constraint to quadratic order:

$$L(\theta) \approx L(\theta_{old}) + g^T (\theta - \theta_{old}), \quad g = \nabla_\theta L(\theta_{old}),$$
$$D(\theta) \approx \tfrac{1}{2} (\theta - \theta_{old})^T A (\theta - \theta_{old}), \quad A = \nabla^2_\theta D(\theta_{old}).$$

We now consider the approximate optimization problem

$$\theta^* = \arg\max_\theta \; g^T (\theta - \theta_{old}) \;:\; \tfrac{1}{2} (\theta - \theta_{old})^T A (\theta - \theta_{old}) \le \delta.$$

This optimization problem is convex as long as $A \succeq 0$, which is an assumption that we make. (If this assumption seems to be empirically invalid, then we repair the issue by using the substitution $A \to A + \epsilon I$, where $I$ is the identity matrix, and $\epsilon > 0$ is a small constant chosen so that we usually have $A + \epsilon I \succeq 0$.) This problem can be solved analytically by applying methods of duality, and its optimal point is

$$\theta^* = \theta_{old} + \sqrt{\frac{2\delta}{g^T A^{-1} g}} \, A^{-1} g. \tag{14}$$

Because the optimization problems we solve with this method tend to involve thousands of parameters, inverting $A$ is prohibitively computationally expensive. Thus in the implementation of this algorithm that we use, the search direction $x = A^{-1} g$ is found by using the conjugate gradient method to solve $Ax = g$; this avoids the need to invert $A$.

When $A$ and $g$ are sample averages meant to stand in for expectations, we employ an additional trick to reduce the total number of computations necessary to solve $Ax = g$. The computation of $A$ is more expensive than $g$, and so we use a smaller fraction of the population to estimate it quickly. Concretely, suppose that the original optimization problem's objective is $\mathbb{E}_{z \sim P}[L(\theta, z)]$ and the constraint is $\mathbb{E}_{z \sim P}[D(\theta, z)] \le \delta$, where $z$ is some random variable and $P$ is its distribution; furthermore, suppose that we have a dataset of samples $\mathcal{D} = \{z_i\}_{i=1,\dots,N}$ drawn from $P$, and we form an approximate optimization problem using these samples. Defining $g(z) = \nabla_\theta L(\theta_{old}, z)$ and $A(z) = \nabla^2_\theta D(\theta_{old}, z)$, we would need to solve

$$\left( \frac{1}{|\mathcal{D}|} \sum_{z \in \mathcal{D}} A(z) \right) x = \frac{1}{|\mathcal{D}|} \sum_{z \in \mathcal{D}} g(z)$$

to obtain the search direction $x$. However, because the computation of the average Hessian is expensive, we sub-sample a batch $b \subseteq \mathcal{D}$ to form it. As long as $b$ is a large enough set, then the approximation

$$\frac{1}{|b|} \sum_{z \in b} A(z) \approx \frac{1}{|\mathcal{D}|} \sum_{z \in \mathcal{D}} A(z)$$

is good, and the search direction we obtain by solving

$$\left( \frac{1}{|b|} \sum_{z \in b} A(z) \right) x = \frac{1}{|\mathcal{D}|} \sum_{z \in \mathcal{D}} g(z)$$

is reasonable. The sub-sample ratio $|b|/|\mathcal{D}|$ is a hyperparameter of the algorithm.

It is possible that the parameter update step given by (14) may not exactly solve the original optimization problem (13); in fact, it may not even satisfy the constraint. So we perform a line search between $\theta_{old}$ and $\theta^*$. Our update with the line search included is given by

$$\theta = \theta_{old} + s^k \sqrt{\frac{2\delta}{x^T A x}} \, x,$$

where $s \in (0, 1)$ is a backtracking coefficient, and $k$ is the smallest integer for which $L(\theta) \ge L(\theta_{old})$ and $D(\theta) \le \delta$. We select $k$ by checking each of $k = 1, 2, \dots, K$, where $K$ is the maximum number of backtracks. If there is no value of $k$ in that range which satisfies the conditions, no update is performed.
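A minimal NumPy sketch of this procedure is given below; `Avp` stands in for Hessian-vector products so that A never has to be formed explicitly, and the Hessian sub-sampling trick is elided. All names are illustrative rather than taken from the rllab implementation.

```python
import numpy as np

def conjugate_gradient(Avp, g, iters=10, tol=1e-10):
    """Solve A x = g using only Hessian-vector products Avp(v) = A v."""
    x = np.zeros_like(g)
    r, p = g.copy(), g.copy()   # residual r = g - A x with x = 0
    rs = r @ r
    for _ in range(iters):
        Ap = Avp(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def second_order_step(theta_old, g, Avp, L, D, delta, s=0.8, K=10):
    """Single second-order step with backtracking line search.
    L and D evaluate the objective and the constraint at a parameter vector."""
    x = conjugate_gradient(Avp, g)                     # direction A^{-1} g
    full_step = np.sqrt(2.0 * delta / (x @ Avp(x))) * x
    L_old = L(theta_old)
    for k in range(1, K + 1):                          # try s^1, ..., s^K
        theta = theta_old + s**k * full_step
        if L(theta) >= L_old and D(theta) <= delta:
            return theta
    return theta_old                                   # no acceptable step
```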
"}, {"section_index": "11", "section_name": "B.1 ENVIRONMENTS", "section_text": "The environments have the following state and action spaces: for the sparse MountainCar environment, S ⊆ R², A ⊆ R¹; for the sparse CartpoleSwingup task, S ⊆ R⁴, A ⊆ R¹; for the sparse HalfCheetah

For the sparse MountainCar task, the agent receives a reward of 1 only when it escapes the valley. For the sparse CartpoleSwingup task, the agent receives a reward of 1 only when cos(β) > 0.8, with β the pole angle. For the sparse HalfCheetah task, the agent receives a reward of 1 when x_body ≥ 5. For the sparse Swimmer task, the agent receives a reward of 1 + |v_body| when |x_body| ≥ 2.

For all continuous control tasks we used fully-factored Gaussian policies, where the means of the action distributions were the outputs of neural networks, and the variances were separate trainable parameters. For the sparse MountainCar and sparse CartpoleSwingup tasks, the policy mean networks had a single hidden layer of 32 units. For sparse HalfCheetah, sparse Swimmer, and SwimmerGather, the policy mean networks were of size (64, 32). For the Atari RAM tasks, we used categorical distributions over actions, produced by neural networks of size (64, 32).

The value functions used for the sparse MountainCar and sparse CartpoleSwingup tasks were neural networks with a single hidden layer of 32 units. For sparse HalfCheetah, sparse Swimmer, and SwimmerGather, time-varying linear value functions were used, as described by Duan et al. [5]. For the Atari RAM tasks, the value functions were neural networks of size (64, 32). The neural network value functions were learned via single second-order step optimization; the linear baselines were obtained by least-squares fit at each iteration.

All neural networks were feed-forward, fully-connected networks with tanh activation units.

"}, {"section_index": "13", "section_name": "B.3 TRPO HYPERPARAMETERS", "section_text": "For all tasks, the MDP discount factor γ was fixed to 0.995, and generalized advantage estimators (GAE) [21] were used, with the GAE λ parameter fixed to 0.95.

In the table below, we show several other TRPO hyperparameters. Batch size refers to steps of experience collected at each iteration. The sub-sample factor is for the second-order optimization step, as detailed in Appendix A.

Environments                   Batch Size   Sub-Sample   Max Rollout Length   δ_KL
MountainCar, CartpoleSwingup   5,000        1            500                  0.01
HalfCheetah, Swimmer           5,000        1            500                  0.05
SwimmerGather                  50,000       0.1          500                  0.01
Pong                           10,000       1            5,000                0.01
BankHeist, Freeway             13,500       1            5,000                0.01
Venture                        50,000       0.2          7,000                0.01

Table 1: TRPO hyperparameters for our experiments.

"}, {"section_index": "14", "section_name": "B.4 EXPLORATION HYPERPARAMETERS", "section_text": "For all tasks, fully-factored Gaussian distributions were used as dynamics models, where the means and variances of the distributions were the outputs of neural networks.

For the sparse MountainCar and sparse CartpoleSwingup tasks, the means and variances were parametrized by single hidden layer neural networks with 32 units. For all other tasks, the means and variances were parametrized by neural networks with two hidden layers of size 64 units each. All networks used tanh activation functions.

Atari RAM states, by default, take on values from 0 to 255 in integer intervals. We use a simple preprocessing step to map them onto values in (−1/3, 1/3). Let x denote the raw RAM state, and s the preprocessed RAM state: s = (x − 128)/384.
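As a one-line sketch of this preprocessing (the scaling constant 384 = 3 · 128 is our reconstruction of the garbled formula in the source; it reproduces the stated (−1/3, 1/3) range):

```python
import numpy as np

def preprocess_ram(x):
    """Map raw Atari RAM bytes in [0, 255] into (-1/3, 1/3): s = (x - 128) / 384."""
    return (np.asarray(x, dtype=np.float64) - 128.0) / 384.0
```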
For all continuous control tasks, the L2 penalty coefficient was set to α = 1. For the Atari RAM tasks except for Venture, it was set to α = 0.01. For Venture, it was set to α = 0.1.

For all continuous control tasks except SwimmerGather, η₀ = 0.001. For SwimmerGather, η₀ = 0.0001. For the Atari RAM tasks, η₀ = 0.005.

For all continuous control tasks except SwimmerGather, we used replay memories of size 5,000,000 and a KL-divergence step size of κ = 0.001. For SwimmerGather, the replay memory was the same size, but we set the KL-divergence step size to κ = 0.005. For the Atari RAM domain tasks, we used replay memories of size 1,000,000 and a KL-divergence step size of κ = 0.01.

For all tasks except SwimmerGather and Venture, 5000 time steps of experience were sampled from the replay memory at each iteration of dynamics model learning to take a stochastic step on (11), and a sub-sample factor of 1 was used in the second-order step optimizer. For SwimmerGather and Venture, 10,000 time steps of experience were sampled at each iteration, and a sub-sample factor of 0.5 was used in the optimizer.

In this section, we provide an analysis of the time cost incurred by using VIME or our bonuses, and derive the potential magnitude of speedup attained by our bonuses versus VIME.

At each iteration, bonuses based on learned dynamics models incur two primary costs: the time cost of fitting the dynamics model, and the time cost of computing the rewards.

We denote the dynamics fitting costs for VIME and our methods as $T^{fit}_{vime}$ and $T^{fit}_{ours}$. Although the Bayesian neural network dynamics model for VIME is more complex than our model, the fit times can work out to be similar depending on the choice of fitting algorithm. In our speed test, the fit times were nearly equivalent, but used different algorithms.

For the time cost of computing rewards, we first introduce the following quantities:
- n: the number of CPU threads available,
- t_f: time for a forward pass through the model,
- t_b: time for a backward pass through the model,
- N: batch size (number of samples per iteration),
- k: the number of forward passes that can be performed simultaneously.

For our method, the time cost of computing rewards is

$$T^{rew}_{ours} = \frac{N t_f}{k n}.$$

For VIME, things are more complex. Each reward requires the computation of a gradient through its model, which necessitates a forward and a backward pass. Because gradient calculations cannot be efficiently parallelized by any deep learning toolkits currently available³, each (s, a, s') tuple requires its own forward/backward pass. As a result, the time cost of computing rewards for VIME is

$$T^{rew}_{vime} = \frac{N (t_f + t_b)}{n}.$$

In the limit of large N, and with the approximation that t_f ≈ t_b, the speedup is a factor of ~2k.

³If this is not correct, please contact the authors so that we can issue a correction! But to the best of our knowledge, this is currently true, at time of publication.
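The cost model above can be checked with a few lines of arithmetic; the timing numbers below are purely illustrative.

```python
def reward_time_ours(N, t_f, k, n):
    """T_rew = N * t_f / (k * n): forward passes batched k at a time on n threads."""
    return N * t_f / (k * n)

def reward_time_vime(N, t_f, t_b, n):
    """T_rew = N * (t_f + t_b) / n: one forward and one backward pass per tuple."""
    return N * (t_f + t_b) / n

# With t_f ~= t_b, the ratio approaches 2k:
N, t_f, t_b, n, k = 50_000, 1e-3, 1e-3, 4, 256
print(reward_time_vime(N, t_f, t_b, n) / reward_time_ours(N, t_f, k, n))  # ~512 = 2k
```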
[{"section_index": "0", "section_name": "POINTER SENTINEL MIXTURE E MODELS", "section_text": "Stephen Merity, Caiming Xiong, James Bradbury & Richard Soche MetaMind - A Salesforce Company\nStephen Merity.. James Bradbury & Richard Socher\nsmerity,cxiong,. iames.bradbury rsocher}@salesforce.com\nRecent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and. large vocabularies. Even then they struggle to predict rare or unseen words even if. the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard. softmax classifier. We explore applying the pointer sentinel mixture model to the LSTM, a standard recurrent neural network building block. Utilizing an LSTM that achieves 80.6 perplexity on the Penn Treebank, the pointer sentinel-LSTM. model pushes perplexity down to 70.9 while using far fewer parameters than an. LSTM that achieves similar results. In order to evaluate how well language models. can exploit longer contexts and deal with more realistic vocabularies and corpora we also introduce the freely available WikiText corpus.1."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Models with soft attention or memory components have been proposed to help deal with this chal lenge, aiming to allow for the retrieval and use of relevant previous hidden states, in effect increasin hidden state capacity and providing a path for gradients not tied to timesteps. Even with attention the standard softmax classifier that is being used in these models often struggles to correctly predic rare or previously unknown words.\nPointer networks (Vinyals et al., 2015) provide one potential solution for rare and out of vocabulary (OoV) words as a pointer network uses attention to select an element from the input as output. This allows it to produce previously unseen input tokens. While pointer networks improve performance on rare words and long-term dependencies they are unable to select words that do not exist in the input.\nWe introduce a mixture model, illustrated in Fig. 1, that combines the advantages of standar. softmax classifiers with those of a pointer component for effective and efficient language mod eling. Rather than relying on the RNN hidden state to decide when to use the pointer, as in the recent work of Gulcehre et al. (2016), we allow the pointer component itself to decide when to use the softmax vocabulary through a sentinel. The model improves the state of the art perplexity on the. Penn Treebank. Since this commonly used dataset is small and no other freely available alternative. exists that allows for learning long range dependencies, we also introduce a new benchmark datase for language modeling called WikiText..\nAvailable for download at the WikiText dataset site"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "A major difficulty in language modeling is learning when to predict specific words from the imme- diate context. For instance, imagine a new person is introduced and two paragraphs later the context would allow one to very accurately predict this person's name as the next word. For standard neural sequence models to predict this name, they would have to encode the name, store it for many time steps in their hidden state, and then decode it when appropriate. 
As the hidden state is limited in capacity and the optimization of such models suffers from the vanishing gradient problem, this is a lossy operation when performed over many timesteps. This is especially true for rare words.

[Figure 1 diagram: for the context "Fed Chair Janet Yellen raised rates. Ms. ???", the pointer distribution p_ptr(Yellen) over the recent context and the RNN softmax distribution p_vocab(Yellen) over the vocabulary (aardvark, ..., Bernanke, ..., Rosenthal, ..., Yellen, ..., zebra) are combined through the sentinel to give p(Yellen) = g p_vocab(Yellen) + (1 − g) p_ptr(Yellen).]

Figure 1: Illustration of the pointer sentinel-RNN mixture model. g is the mixture gate which uses the sentinel to dictate how much probability mass to give to the vocabulary.

Given a sequence of words w_1, ..., w_{N−1}, our task is to predict the next word w_N.

Recurrent neural networks (RNNs) have seen widespread use for language modeling (Mikolov et al., 2010) due to their ability to, at least in theory, retain long term dependencies. RNNs employ the chain rule to factorize the joint probabilities over a sequence of tokens: $p(w_1, \dots, w_N) = \prod_{i=1}^{N} p(w_i | w_1, \dots, w_{i-1})$. More precisely, at each time step i, we compute the RNN hidden state h_i according to the previous hidden state h_{i−1} and the input x_i such that h_i = RNN(x_i, h_{i−1}). When all the N − 1 words have been processed by the RNN, the final state h_{N−1} is fed into a softmax layer which computes the probability over a vocabulary of possible words: p_vocab(w) = softmax(U h_{N−1}), where p_vocab ∈ R^V, U ∈ R^{V×H}, H is the hidden size, and V the vocabulary size. RNNs can suffer from the vanishing gradient problem. The LSTM (Hochreiter & Schmidhuber, 1997) architecture has been proposed to deal with this by updating the hidden state according to a set of gates. Our work focuses on the LSTM but can be applied to any RNN architecture that ends in a vocabulary softmax.

"}, {"section_index": "3", "section_name": "2.2 THE POINTER NETWORK COMPONENT", "section_text": "In this section, we propose a modification to pointer networks for language modeling. To predict the next word in the sequence, a pointer network would select the member of the input sequence (w_1, ..., w_{N−1}) with the maximal attention score as the output.

The simplest way to compute an attention score for a specific hidden state is an inner product with all the past hidden states h_i, with each hidden state h_i ∈ R^H. However, if we want to compute such a score for the most recent word (since this word may be repeated), we need to include the last hidden state itself in this inner product. Taking the inner product of a vector with itself results in the vector's magnitude squared, meaning the attention scores would be strongly biased towards the most recent word. Hence we project the current hidden state to a query vector q first. To produce the query q we compute q = tanh(W h_{N−1} + b), where W ∈ R^{H×H}, b ∈ R^H, and q ∈ R^H. To generate the pointer attention scores, we compute the match between the previous RNN output states h_i and the query q by taking the inner product, followed by a softmax activation function to obtain a probability distribution:

$$z_i = q^T h_i, \tag{1}$$
$$a = \mathrm{softmax}(z), \tag{2}$$

where z ∈ R^L, a ∈ R^L, and L is the total number of hidden states. The probability mass assigned to a given word is the sum of the probability mass given to all token positions where the given word appears:

$$p_{ptr}(w) = \sum_{i \in I(w,x)} a_i, \tag{3}$$

where I(w, x) results in all positions of the word w in the input x and p_ptr ∈ R^V.

[Figure 2 diagram: the pointer distribution p_ptr(y_N | w_1, ..., w_{N−1}), the mixture gate g derived from the sentinel, and the RNN softmax distribution p_vocab(y_N | w_1, ..., w_{N−1}) are combined into the output distribution p(y_N | w_1, ..., w_{N−1}).]

Figure 2: Visualization of the pointer sentinel-RNN mixture model. The query, produced from applying an MLP to the last output of the RNN, is used by the pointer network to identify likely matching words from the past. The ⊗ nodes are inner products between the query and the RNN hidden states. If the pointer component is not confident, probability mass can be directed to the RNN by increasing the value of the mixture gate g via the sentinel, seen in grey. If g = 1 then only the RNN is used. If g = 0 then only the pointer is used.
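The following NumPy sketch shows Eqs. 1-3 end to end for a single prediction step; the helper and argument names are our own, not taken from a released implementation.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def pointer_sum_attention(hiddens, h_last, word_ids, W, b, vocab_size):
    """Pointer attention over a window of L past hidden states.
    hiddens:  (L, H) RNN outputs for the window
    word_ids: (L,)   vocabulary index of the word at each window position"""
    q = np.tanh(W @ h_last + b)     # query vector q
    z = hiddens @ q                 # scores z_i = q^T h_i  (Eq. 1)
    a = softmax(z)                  # attention distribution (Eq. 2)
    p_ptr = np.zeros(vocab_size)
    np.add.at(p_ptr, word_ids, a)   # pointer sum attention (Eq. 3)
    return p_ptr
```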
This technique, referred to as pointer sum attention, has been used for question answering (Kadlec et al., 2016).

Given the length of the documents used in language modeling, it may not be feasible for the pointer network to evaluate an attention score for all the words back to the beginning of the dataset. Instead, we may elect to maintain only a window of the L most recent words for the pointer to match against. The length L of the window is a hyperparameter that can be tuned on a held out dataset or by empirically analyzing how frequently a word at position t appears within the last L words.

To illustrate the advantages of this approach, consider a long article featuring two sentences President Obama discussed the economy and President Obama then flew to Prague. If the query was Which President is the article about?, probability mass could be applied to Obama in either sentence. If the question was instead Who flew to Prague?, only the latter occurrence of Obama provides the proper context. The attention sum model ensures that, as long as the entire attention probability mass is distributed on the occurrences of Obama, the pointer network can achieve zero loss. This flexibility provides supervision without forcing the model to put mass on supervision signals that may be incorrect or lack proper context.

While pointer networks have proven to be effective, they cannot predict output words that are not present in the input, a common scenario in language modeling. We propose to resolve this by using a mixture model that combines a standard softmax with a pointer.

Our mixture model has two base distributions: the softmax vocabulary of the RNN output and the positional vocabulary of the pointer model. We refer to these as the RNN component and the pointer component respectively. To combine the two base distributions, we use a gating function g = p(z_i = k | x_i) where z_i is the latent variable stating which base distribution the data point belongs to. As we only have two base distributions, g can produce a scalar in the range [0, 1]. A value of 0 implies that only the pointer is used and 1 means only the softmax-RNN is used.

$$p(y_i | x_i) = g \, p_{vocab}(y_i | x_i) + (1 - g) \, p_{ptr}(y_i | x_i). \tag{4}$$

While the models could be entirely separate, we re-use many of the parameters for the softmax-RNN and pointer components. This sharing minimizes the total number of parameters in the model and capitalizes on the pointer network's supervision for the RNN component.

To compute the new pointer sentinel gate g, we modify the pointer component. In particular, we add an additional element to z, the vector of attention scores as defined in Eq. 1. This element is computed using an inner product between the query and the sentinel² vector s ∈ R^H. This change can be summarized by changing Eq. 2 to a = softmax([z; q^T s]).
We define a E RV+1 to be the attention distribution over both the words in the pointer window as well as the sentinel state. We interpret the last element of this vector to be the gate value: q = a[V + 1].\nAny probability mass assigned to g is given to the standard softmax vocabulary of the RNN. The final updated, normalized pointer probability over the vocabulary in the window then becomes\n1 Pptr(Yi|xi) a|1: V\nwhere we denoted 1 : V|to mean the first V elements of the vector. The final mixture model is the same as Eq. 4 but with the updated Eq. 5 for the pointer probability.\nThis setup encourages the model to have both components compete: use pointers whenever possible and back-off to the standard softmax otherwise. By integrating the gating function into the pointer computation, it is influenced by both the RNN hidden state and the pointer window's hidden states"}, {"section_index": "4", "section_name": "2.5 MOTIVATION FOR THE SENTINEL AS GATING FUNCTION", "section_text": "To make the best decision possible regarding which component to use the gating function must have as much context as possible. As we increase the window of words for the pointer component to. consider, the RNN hidden state by itself isn't guaranteed to accurately recall the identity or order o words it has recently seen (Adi et al., 2016). This is an obvious limitation of encoding a variable. length sequence into a fixed dimensionality vector..\nIf we want a pointer window where the length L is in the hundreds, accurately modeling all of thi information within the RNN hidden state is impractical. The position of specific words is also a vita feature as relevant words eventually fall out of the pointer component's window. To correctly mode this would require the RNN hidden state to store both the identity and position of each word in the pointer window. This is far beyond the capability of the fixed dimensionality RNN hidden state.\nFor this reason, we integrate the gating function directly into the pointer network by use of the sentinel. The decision to back-off to the softmax vocabulary is then informed by both the query. q, generated using the RNN hidden state hv-1, and from the contents of the hidden states in the. pointer window itself. This allows the model to accurately query what hidden states are contained in. the pointer window and avoid maintaining state for words that may have fallen out of the window.."}, {"section_index": "5", "section_name": "2.6 POINTER SENTINEL LOSS FUNCTION", "section_text": "We minimize the cross-entropy loss of , yij log p(yij|;), where y, is a one hot encoding oj the correct output. During training, as yi is one hot, only a single mixed probability p(yi ) must b computed for calculating the loss. This can result in a far more efficient GPU implementation. A prediction time, when we want all values for p(y|x), a maximum of L word probabilities must be mixed, as there is a maximum of L unique words in the pointer window of length L. This mixing can occur on the CPU where random access indexing is more efficient than the GPU.\n2A sentinel value is inserted at the end of a search space in order to ensure a search algorithm terminates if no matching item is found. Our sentinel value terminates the pointer search space and distributes the rest of the probability mass to the RNN vocabulary.\nFollowing the pointer sum attention network, the aim is to place probability mass from the attention mechanism on the correct output yi if it exists in the input. 
Following the pointer sum attention network, the aim is to place probability mass from the attention mechanism on the correct output y_i if it exists in the input. In the case of our mixture model the pointer loss instead becomes $-\log \big( g + \sum_{i \in I(y,x)} a_i \big)$, where I(y, x) results in all positions of the correct output y in the input x. The gate g may be assigned all probability mass if, for instance, the correct output y_i exists only in the softmax-RNN vocabulary. There is no penalty if the model places the entire probability mass on any of the instances of the correct word in the input window. If the pointer component places the entirety of the probability mass on the gate g, the pointer network incurs no penalty and the loss is entirely determined by the loss of the softmax-RNN component.

             Penn Treebank          WikiText-2              WikiText-103
             Train   Valid  Test    Train    Valid  Test    Train       Valid  Test
Articles     -       -      -       600      60     60      28,475      60     60
Tokens       929k    73k    82k     2,088k   217k   245k    103,227k    217k   245k
Vocab size   10,000                 33,278                  267,735
OoV rate     4.8%                   2.6%                    0.4%

Table 1: Statistics of the Penn Treebank, WikiText-2, and WikiText-103. The out of vocabulary (OoV) rate notes what percentage of tokens have been replaced by an (unk) token. The token count includes newlines which add to the structure of the WikiText datasets.

"}, {"section_index": "6", "section_name": "2.7 PARAMETERS AND COMPUTATION TIME", "section_text": "The pointer sentinel-LSTM mixture model results in a relatively minor increase in parameters and computation time, especially when compared to the model size required to achieve similar performance using a standard LSTM. The only two additional parameters required by the model are those required for computing q, specifically W ∈ R^{H×H} and b ∈ R^H, and the sentinel vector embedding s ∈ R^H. This is independent of the depth of the RNN as the pointer component only interacts with the output of the final RNN layer. The additional H² + 2H parameters are minor compared to a single LSTM layer's 8H² + 4H parameters. Most models also use multiple LSTM layers.

In terms of additional computation, a pointer sentinel-LSTM of window size L only requires computing the query q (a linear layer with tanh activation), a total of L parallelizable inner product calculations, and the attention scores for the L resulting scalars via the softmax function.

"}, {"section_index": "7", "section_name": "3 RELATED WORK", "section_text": "Mixture models composed of various knowledge sources have been proposed in the past for language modeling. Rosenfeld (1996) uses a maximum entropy model to combine a variety of information sources to improve language modeling on news text and speech. These information sources include complex overlapping n-gram distributions and n-gram caches that aim to capture rare words.

Beyond n-grams, neural sequence models such as recurrent neural networks have been shown to achieve state of the art results (Mikolov et al., 2010). A variety of RNN regularization methods have been explored, including a number of dropout variations (Zaremba et al., 2014; Gal, 2015) which prevent overfitting of complex LSTM language models. Other work has modified the RNN architecture to better handle increased recurrence depth (Zilly et al., 2016).

In order to increase capacity and minimize the impact of vanishing gradients, some language and translation models have also added a soft attention or memory component (Bahdanau et al., 2015; Sukhbaatar et al., 2015; Cheng et al., 2016; Kumar et al., 2016; Xiong et al., 2016; Ahn et al., 2016). These mechanisms allow for the retrieval and use of relevant previous hidden states.
Soft attention mechanisms need to first encode the relevant word into a state vector and then decode it again, even if the output word is identical to the input word used to compute that hidden state or memory. A drawback to soft attention is that if, for instance, January and March are both equally attended candidates, the attention mechanism may blend the two vectors, resulting in a context vector closest to February (Kadlec et al., 2016). Even with attention, the standard softmax classifier being used in these models often struggles to correctly predict rare or previously unknown words.

Attention-based pointer mechanisms were introduced in Vinyals et al. (2015) where the pointer network is able to select elements from the input as output. In the above example, only January or March would be available as options, as February does not appear in the input. The use of pointer networks has been shown to help with geometric problems (Vinyals et al., 2015), code generation (Ling et al., 2016), summarization (Gu et al., 2016; Gulcehre et al., 2016), and question answering (Kadlec et al., 2016). While pointer networks improve performance on rare words and long-term dependencies, they are unable to select words that do not exist in the input.

Gulcehre et al. (2016) introduce a pointer softmax model that can generate output from either the vocabulary softmax of an RNN or the location softmax of the pointer network. Not only does this allow for producing OoV words which are not in the input, the pointer softmax model is able to better deal with rare and unknown words than a model only featuring an RNN softmax. Rather than constructing a mixture model as in our work, they use a switching network to decide which component to use. For neural machine translation, the switching network is conditioned on the representation of the context of the source text and the hidden state of the decoder. The pointer network is not used as a source of information for the switching network as in our model. The pointer and RNN softmax are scaled according to the switching network and the word or location with the highest final attention score is selected for output. Although this approach uses both a pointer and RNN component, it is not a mixture model and does not combine the probabilities for a word if it occurs in both the pointer location softmax and the RNN vocabulary softmax. In our model the word probability is a mix of both the RNN and pointer components, allowing for better predictions when the context may be ambiguous.

Extending this concept further, the latent predictor network (Ling et al., 2016) generates an output sequence conditioned on an arbitrary number of base models where each base model may have differing granularity. In their task of code generation, the output could be produced one character at a time using a standard softmax or instead copy entire words from referenced text fields using a pointer network. As opposed to Gulcehre et al. (2016), all states which produce the same output are merged by summing their probabilities.
The model requires a complex training process involving the forward-backward algorithm for Semi-Markov models to prevent an exponential path explosion.

"}, {"section_index": "8", "section_name": "WIKITEXT - A BENCHMARK FOR LANGUAGE MODELING", "section_text": "We first describe the most commonly used language modeling dataset and its pre-processing in order to then motivate the need for a new benchmark dataset.

"}, {"section_index": "9", "section_name": "4.1 PENN TREEBANK", "section_text": "In order to compare our model to the many recent neural language models, we conduct word-level prediction experiments on the Penn Treebank (PTB) dataset (Marcus et al., 1993), pre-processed by Mikolov et al. (2010). The dataset consists of 929k training, 73k validation, and 82k test words. As part of the pre-processing performed by Mikolov et al. (2010), words were lower-cased, numbers were replaced with N, newlines were replaced with (eos), and all other punctuation was removed. The vocabulary is the most frequent 10k words with OoV tokens replaced by an (unk) token. For full statistics, refer to Table 1.

"}, {"section_index": "10", "section_name": "4.2 REASONS FOR A NEW DATASET", "section_text": "While the processed version of the PTB above has been frequently used for language modeling, it has many limitations. The tokens in PTB are all lower case, stripped of any punctuation, and limited to a vocabulary of only 10k words. These limitations mean that the PTB is unrealistic for real language use, especially when far larger vocabularies with many rare words are involved. The appendix contains a graph illustrating this using a Zipfian plot over the training partition of the PTB, with the curve stopping abruptly at the 10k limit. Given that accurately predicting rare words, such as named entities, is an important task for many applications, the lack of a long tail is problematic.

Other larger scale language modeling datasets exist. Unfortunately, they either have restrictive licensing which prevents widespread use or have randomized sentence ordering (Chelba et al., 2013) which is unrealistic for most language use and prevents the effective learning and evaluation of longer term dependencies. Hence, we constructed a language modeling dataset using text extracted from Wikipedia and have made this available to the community.

"}, {"section_index": "11", "section_name": "4.3 CONSTRUCTION AND PRE-PROCESSING", "section_text": "We selected articles only fitting the Good or Featured article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting text from Wikipedia mark-up is nontrivial due to the large number of macros in use, used for metric conversions, abbreviations, language notation, and date handling.

Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and LaTeX code were replaced with (formula) tokens. Normalization and tokenization were performed using the Moses tokenizer (Koehn et al., 2007), slightly augmented to further split numbers (8,600 -> 8 @,@ 600) and with some additional minor fixes; a sketch of this number-splitting step appears below. Following Chelba et al. (2013), a vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the (unk) token, also a part of the vocabulary.
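The exact pattern applied on top of the Moses tokenizer is not given in the paper, so the regex below (and its handling of decimal points) is an assumption that merely reproduces the stated 8,600 -> 8 @,@ 600 example.

```python
import re

def split_numbers(text):
    """Split digit-separator-digit sequences, e.g. "8,600" -> "8 @,@ 600"."""
    return re.sub(r"(?<=\d)([,.])(?=\d)", r" @\1@ ", text)

print(split_numbers("It cost 8,600 dollars."))  # It cost 8 @,@ 600 dollars.
```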
"}, {"section_index": "12", "section_name": "4.4 STATISTICS", "section_text": "The full WikiText dataset is over 103 million words in size, a hundred times larger than the PTB. It is also a tenth the size of the One Billion Word Benchmark (Chelba et al., 2013), one of the largest publicly available language modeling benchmarks, whilst consisting of articles that allow for the capture and usage of longer term dependencies as might be found in many real world tasks.

The dataset is available in two different sizes: WikiText-2 and WikiText-103. Both feature punctuation, original casing, a larger vocabulary, and numbers. WikiText-2 is two times the size of the Penn Treebank dataset. WikiText-103 features all extracted articles. Both datasets use the same articles for validation and testing, only differing in the vocabularies. For full statistics, refer to Table 1.

"}, {"section_index": "13", "section_name": "5.1 TRAINING DETAILS", "section_text": "As the pointer sentinel mixture model uses the outputs of the RNN from up to L timesteps back, this presents a challenge for training. If we do not regenerate the stale historical outputs of the RNN when we update the gradients, backpropagation through these stale outputs may result in incorrect gradient updates. If we do regenerate all stale outputs of the RNN, the training process is far slower. As we can make no theoretical guarantees on the impact of stale outputs on gradient updates, we opt to regenerate the window of RNN outputs used by the pointer component after each gradient update.

We also use truncated backpropagation through time (BPTT) in a different manner to many other RNN language models. Truncated BPTT allows for practical time-efficient training of RNN models but has fundamental trade-offs that are rarely discussed. For running truncated BPTT, BPTT is run for k₂ timesteps once every k₁ timesteps. For many RNN language modeling training schemes, k₁ = k₂, meaning that every k timesteps truncated BPTT is performed for the k previous timesteps. This results in only a single RNN output receiving backpropagation for k timesteps, with the other extreme being that the first token receives backpropagation for 0 timesteps. As such, most words in the training data will never experience a full backpropagation for k timesteps.

In our task, the pointer component always looks L timesteps into the past if L past timesteps are available. We select k₁ = 1 and k₂ = L such that for each timestep we perform backpropagation for L timesteps and advance one timestep at a time. Only the loss for the final predicted word is used for backpropagation through the window.

"}, {"section_index": "14", "section_name": "5.2 MODEL DETAILS", "section_text": "Our experimental setup reflects that of Zaremba et al. (2014) and Gal (2015). We increased the number of timesteps used during training from 35 to 100, matching the length of the window L. Batch size was increased to 32 from 20. We also halve the learning rate when validation perplexity is worse than the previous iteration, stopping training when validation perplexity fails to improve for three epochs or when 64 epochs are reached. The gradients are rescaled if their global norm exceeds 1 (Pascanu et al., 2013b).³ We evaluate the medium model configuration, which features a two layer LSTM with a hidden size of 650.
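The following PyTorch-style sketch illustrates the k₁ = 1, k₂ = L training scheme of Section 5.1, in which the window of RNN outputs is regenerated at every step and only the final prediction's loss is backpropagated. The model, criterion, and optimizer objects are assumed, and batching is ignored for clarity.

```python
import torch

def train_sliding_window(model, criterion, optimizer, tokens, L=100):
    """k1 = 1, k2 = L: for each position t, rerun the RNN over the previous
    L tokens so the pointer window is fresh, then backpropagate only the
    final-step loss through all L timesteps."""
    for t in range(L, len(tokens)):
        window = tokens[t - L:t]                   # the L most recent tokens
        target = tokens[t]
        optimizer.zero_grad()
        logits = model(window)                     # regenerated RNN outputs, (L, V)
        loss = criterion(logits[-1].unsqueeze(0),  # loss at the final step only
                         target.unsqueeze(0))
        loss.backward()                            # BPTT through L timesteps
        optimizer.step()
```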
³The highly aggressive clipping is likely due to the increased BPTT length. Even with such clipping, early batches may experience excessively high perplexity, though this settles rapidly.

Model                                                Parameters   Validation   Test
Mikolov & Zweig (2012) - KN-5                        2M+          -            141.2
Mikolov & Zweig (2012) - KN5 + cache                 2M+          -            125.7
Mikolov & Zweig (2012) - RNN                         6M+          -            124.7
Mikolov & Zweig (2012) - RNN-LDA                     7M+          -            113.7
Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache      9M+          -            92.0
Pascanu et al. (2013a) - Deep RNN                    6M           -            107.5
Cheng et al. (2014) - Sum-Prod Net                   5M+          -            100.0
Zaremba et al. (2014) - LSTM (medium)                20M          86.2         82.7
Zaremba et al. (2014) - LSTM (large)                 66M          82.2         78.4
Gal (2015) - Variational LSTM (medium, untied)       20M          81.9 ± 0.2   79.7 ± 0.1
Gal (2015) - Variational LSTM (medium, untied, MC)   20M          -            78.6 ± 0.1
Gal (2015) - Variational LSTM (large, untied)        66M          77.9 ± 0.3   75.2 ± 0.2
Gal (2015) - Variational LSTM (large, untied, MC)    66M          -            73.4 ± 0.0
Kim et al. (2016) - CharCNN                          19M          -            78.9
Zilly et al. (2016) - Variational RHN                32M          72.8         71.3
Zoneout + Variational LSTM (medium)                  20M          84.4         80.6
Pointer Sentinel-LSTM (medium)                       21M          72.4         70.9

Table 2: Single model perplexity on validation and test sets for the Penn Treebank language modeling task. For our models and the models of Zaremba et al. (2014) and Gal (2015), medium and large refer to a 650 and 1500 unit two layer LSTM respectively. Parameter numbers with + are estimates based upon our understanding of the model and with reference to Kim et al. (2016).

We produce results for two model types, an LSTM model that uses dropout regularization and the pointer sentinel-LSTM model. The variants of dropout used were zoneout (Krueger et al., 2016) and variational inference based dropout (Gal, 2015). Zoneout, which stochastically forces some recurrent units to maintain their previous values, was used for the recurrent connections within the LSTM. Variational inference based dropout, where the dropout mask for a layer is locked across timesteps, was used on the input to each RNN layer and also on the output of the final RNN layer. We used a value of 0.5 for both dropout connections.

"}, {"section_index": "15", "section_name": "5.3 COMPARISON OVER PENN TREEBANK", "section_text": "Table 2 compares the pointer sentinel-LSTM to a variety of other models on the Penn Treebank dataset. The pointer sentinel-LSTM achieves the lowest perplexity, followed by the recent Recurrent Highway Networks (Zilly et al., 2016). The medium pointer sentinel-LSTM model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive Monte Carlo (MC) dropout averaging. Monte Carlo dropout averaging is a general improvement for any sequence model that uses dropout but comes at a greatly increased test time cost. In Gal (2015) it requires rerunning the test model with 1000 different dropout masks. The pointer sentinel-LSTM is able to achieve these results with far fewer parameters than other models with comparable performance, specifically with less than a third the parameters used in the large variational LSTM models.

"}, {"section_index": "16", "section_name": "5.4 COMPARISON OVER WIKITEXT-2", "section_text": "As WikiText-2 is being introduced in this paper, there are no existing baselines. We provide two baselines to compare the pointer sentinel-LSTM against: our variational LSTM using zoneout and the medium variational LSTM used in Gal (2015).⁴
Attempts to run the Gal (2015) large model variant, a two layer LSTM with hidden size 1500, resulted in out of memory errors on a 12GB K80 GPU, likely due to the increased vocabulary size. We chose the best hyperparameters from PTB experiments for all models. Table 3 shows a similar gain made by the pointer sentinel-LSTM over the variational LSTM models. The variational LSTM from Gal (2015) again beats out the variational LSTM used as a base for our experiments.

We also test a variational LSTM that uses zoneout, which serves as the RNN component of our pointer sentinel-LSTM mixture. This variational LSTM model performs BPTT for the same length L as the pointer sentinel-LSTM, where L = 100 timesteps. The results for this model ablation are worse than that of Gal (2015)'s variational LSTM without Monte Carlo dropout averaging.

Model                                             Parameters   Validation   Test
Variational LSTM implementation from Gal (2015)   20M          101.7        96.3
Zoneout + Variational LSTM                        20M          108.7        100.9
Pointer Sentinel-LSTM                             21M          84.8         80.8

Table 3: Single model perplexity on validation and test sets for the WikiText-2 language modeling task. All compared models use a two layer LSTM with a hidden size of 650.

A hypothesis as to why the pointer sentinel-LSTM can outperform an LSTM is that the pointer component allows the model to effectively reproduce rare words. The RNN may better use hidden state capacity by relying on the pointer component. The pointer component may also allow for a sharper selection of a single word than may be possible using only the softmax.

The appendix contains a graph which shows the improvement of perplexity when comparing the LSTM to the pointer sentinel-LSTM. Words are split across buckets according to frequency. As the words become rarer, the pointer sentinel-LSTM has stronger improvements in perplexity. Even on the Penn Treebank, where there is a relative absence of rare words due to only selecting the most frequent 10k words, we can see the pointer sentinel-LSTM mixture model provides a direct benefit.

While the improvements are largest on rare words, we can see the pointer sentinel-LSTM is still helpful on relatively frequent words. This may be the pointer component directly selecting the word or through the pointer supervision signal improving the RNN by allowing gradients to flow directly to other occurrences of the word in that window.

In a qualitative analysis, we visualized the gate use and pointer attention for a variety of examples in the validation set, focusing on predictions where the gate primarily used the pointer component. These visualizations are available in the appendix.

As expected, the pointer component is heavily used for rare names such as Seidman (23 times in training), Iverson (7 times in training), and Rosenthal (3 times in training). The pointer component was also heavily used when it came to other named entity names such as companies like Honeywell (8 times in training) and Integrated (41 times in training, though due to lowercasing of words this includes integrated circuits, fully integrated, and other generic usage). Surprisingly, the pointer component was also used for many frequent tokens. For selecting units of measurement (tons, kilograms, ...) or the short scale of numbers (thousands, millions, billions, ...), the pointer would refer to recent usage. This is to be expected, especially when phrases are of the form increased from N tons to N tons. The model can even be found relying on a mixture of the softmax and the pointer for predicting frequent verbs such as said.
The model can even be found relying on a mixture of the softmax and the pointer. for predicting frequent verbs such as said..\nFinally, the pointer component can be seen pointing to words at the very end of the 100 worc. window (position 97), a far longer horizon than the 35 steps that most language models truncate. their backpropagation training to. This illustrates why the gating function must be integrated intc. the pointer component. If the gating function could only use the RNN hidden state, it would neec. to be wary of words that were near the tail of the pointer, especially if it was not able to accurately\n'https://github.com/yaringal/BayesianRNI\ntrack exactly how long it was since seeing a word. By integrating the gating function into the pointe. component, we avoid the RNN hidden state having to maintain this intensive bookkeeping.."}, {"section_index": "17", "section_name": "7 CONCLUSION", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Joint Learning to Align and Translate. In ICLR, 2015..\nJianpeng Cheng, Li Dong, and Mirella Lapata. Long Short-Term Memory-Networks for Machine Reading. CoRR, abs/1601.06733, 2016.\nJiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. CoRR, abs/1603.06393, 2016.\nRudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text Understanding with the Attention Sum Reader Network. arXiv preprint arXiv:1603.01547, 2016\nYoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural languag models. CoRR, abs/1508.06615, 2016\nPhilipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondej Bojar Alexandra Constantin, and Evan Herbst. Moses: Open Source Toolkit for Statistical Machine Translation. In ACL, 2007.\nWe introduced the pointer sentinel mixture model and the WikiText language modeling dataset. The. pointer sentinel mixture model can be applied to any classifier that ends in a softmax, including. various recurrent neural network building blocks. When applied to a standard LSTM, the pointer. sentinel-LSTM achieves state of the art results in language modeling over the Penn Treebank while. using few additional parameters and little additional computational complexity at prediction time.\nWe have also motivated the need to move from Penn Treebank to a new language modeling dataset for long range dependencies, providing WikiText-2 and WikiText-103 as potential options. We hope these new datasets can serve as a platform to improve handling of rare words and the usage of long term dependencies in language modeling.\nWei-Chen Cheng, Stanley Kok, Hoai Vu Pham, Hai Leong Chieu, and Kian Ming Adam Chai Language Modeling with Sum-Product Networks. In INTERSPEECH, 2014.\nMitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics. 19:313-330. 1993..\nTomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurren neural network based language model. In INTERSPEECH, 2010.\nRazvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to Construct Deep Recurrent Neural Networks. CoRR, abs/1312.6026, 2013a..\nRazvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013b.\nRoni Rosenfeld. 
A Maximum Entropy Approach to Adaptive Statistical Language Modeling. 1996.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory Networks. In NIPS, 2015.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692-2700, 2015.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic Memory Networks for Visual and Textual Question Answering. In ICML, 2016.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent Highway Networks. arXiv preprint arXiv:1607.03474, 2016.

David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations. arXiv preprint arXiv:1606.01305, 2016.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent Predictor Networks for Code Generation. CoRR, abs/1603.06744, 2016.

Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, 2012."}, {"section_index": "18", "section_name": "APPENDIX", "section_text": "PERPLEXITY IMPROVEMENTS FOR POINTER SENTINEL MIXTURE MODEL

Figure A1: Mean difference in log perplexity on PTB when using the pointer sentinel-LSTM compared to the LSTM model. Words were sorted by frequency and split into equal sized buckets (frequent words on the left).

For a qualitative analysis, we visualize how the pointer component is used within the pointer sentinel mixture model. The gate refers to the result of the gating function, with 1 indicating the RNN component is exclusively used whilst 0 indicates the pointer component is exclusively used. We begin with predictions that are using the RNN component primarily and move to ones that use the pointer component primarily.

Predicting retailers using 100 words of history (gate = 0.81)

Figure A2: In predicting the fall season has been a good one especially for those retailers, the pointer component suggests many words from the historical window that would fit - retailers, investments, chains, and institutions. The gate is still primarily weighted towards the RNN component, however.

Predicting mortality using 100 words of history (gate = 0.59)

Figure A3: In predicting the national cancer institute also projected that overall u.s. mortality, the pointer component is focused on mortality and rates, both of which would fit. The gate is still primarily weighted towards the RNN component.

Predicting said using 100 words of history (gate = 0.55)

Figure A4: In predicting people do n't seem to be unhappy with it he said, the pointer component correctly selects said and is almost equally weighted with the RNN component. This is surprising given how frequently the word said is used within the Penn Treebank.
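To make the gate values shown in these figures concrete, the following numpy sketch shows one way the pointer sentinel mixture can combine the RNN's softmax with a pointer over the last L words. It is our own simplified illustration (single example, precomputed logits), not the paper's implementation: the sentinel competes with the window positions in one joint softmax, so the gate g is simply the probability mass assigned to the sentinel.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_sentinel_mixture(p_vocab, attn_logits, sentinel_logit, window_ids):
    """Mix the softmax vocabulary distribution with a pointer over the window.

    p_vocab:        (vocab_size,) RNN softmax distribution
    attn_logits:    (L,) unnormalized pointer scores for the last L positions
    sentinel_logit: scalar sentinel score; its probability mass is the gate g
    window_ids:     (L,) vocabulary ids of the words in the window
    """
    # Joint softmax over [window positions; sentinel].
    joint = softmax(np.append(attn_logits, sentinel_logit))
    attn, g = joint[:-1], joint[-1]
    p = g * p_vocab
    # Scatter-add the remaining (1 - g) mass onto the pointed-at vocab entries.
    np.add.at(p, window_ids, attn)
    return p  # sums to g + (1 - g) = 1

# Toy usage: a 5-word vocab, 3-word history.
p = pointer_sentinel_mixture(np.full(5, 0.2), np.array([2.0, 0.5, 0.1]),
                             1.0, np.array([3, 1, 3]))
print(p, p.sum())
```

A gate near 0, as in Figures A8 and A9 below, means nearly all probability mass is copied from the window, while a gate near 1 defers entirely to the softmax.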
Predicting billion using 100 words of history (gate = 0.44)

Figure A5: For predicting the federal government has had to pump in $ N billion, the pointer component focuses on the recent usage of billion with highly similar context. The pointer component is also relied upon more heavily than the RNN component - surprising given the frequency of billion within the Penn Treebank and that the usage was quite recent.

Predicting noriega using 100 words of history (gate = 0.12)

Figure A6: For predicting (unk) 's ghost sometimes runs through the e ring dressed like gen. noriega, the pointer component reaches 97 timesteps back to retrieve gen. douglas. Unfortunately this prediction is incorrect, but without additional context a human would have guessed the same word. This additionally illustrates why the gating function must be integrated into the pointer component. The named entity gen. douglas would have fallen out of the window in only four more timesteps, a fact that the RNN hidden state would not be able to accurately retain for almost 100 timesteps.

Predicting iverson using 100 words of history (gate = 0.03)

Figure A7: For predicting mr. iverson, the pointer component has learned the ability to point to the last name of the most recent named entity. The named entity also occurs 45 timesteps ago, which is longer than the 35 steps that most language models truncate their backpropagation to.

Predicting rosenthal using 100 words of history (gate = 0.00)

Figure A8: For predicting mr. rosenthal, the pointer is almost exclusively used and reaches back 6 timesteps to identify bruce rosenthal as the person speaking, correctly only selecting the last name.

Predicting integrated using 100 words of history (gate = 0.00)

Figure A9: For predicting in composite trading on the new york stock exchange yesterday integrated, the company Integrated and the (unk) token are primarily attended to by the pointer component, with nearly the full prediction being determined by the pointer component.

ZIPFIAN PLOT OVER PTB AND WIKITEXT-2

Figure A10: Zipfian plot over the training partition in Penn Treebank and WikiText-2 datasets. Notice the severe drop on the Penn Treebank when the vocabulary hits 10^4. Two thirds of the vocabulary in WikiText-2 are past the vocabulary cut-off of the Penn Treebank.

Figure A11: Zipfian plot over the training partition in the WikiText-103 dataset. With the dataset containing over 100 million tokens, there is reasonable coverage of the long tail of the vocabulary."}]
BydrOIcle
[{"section_index": "0", "section_name": "UNROLLED GENERATIVE ADVERSARIAL NETWORKS", "section_text": "Ben Poole\nLuke Metz\nStanford University\njaschasd@google.com\nWe introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal dis- criminator in the generator's objective, which is ideal but infeasible in practice and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators,. and increases diversity and coverage of the data distribution by the generator.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The use of deep neural networks as generative models for complex data has made great advances in recent years. This success has been achieved through a surprising diversity of training losses and model architectures, including denoising autoencoders (Vincent et al 2010). variational au toencoders (Kingma & Welling]2013] Rezende et al.]2014} Gregor et al. 2015} Kulkarni et al. 2015 Burda et al.[2015Kingma et al.2016), generative stochastic networks (Alain et al.2015) diffusion probabilistic models (Sohl-Dickstein et al.||2015), autoregressive models (Theis & Bethge 2015)van den Oord et al.]2016a b), real non-volume preserving transformations (Dinh et al.|2014 2016), Helmholtz machines (Dayan et al.1995 Bornschein et al.2015), and Generative Adversar- ial Networks (GANs) (Goodfellow et al.|2014).\nWhile most deep generative models are trained by maximizing log likelihood or a lower bound on. log likelihood, GANs take a radically different approach that does not require inference or explicil calculation of the data likelihood. Instead, two models are used to solve a minimax game: a genera. tor which samples data, and a discriminator which classifies the data as real or generated. In theory these models are capable of modeling an arbitrarily complex probability distribution. When using. the optimal discriminator for a given class of generators, the original GAN proposed by Goodfellou et al. minimizes the Jensen-Shannon divergence between the data distribution and the generator and extensions generalize this to a wider class of divergences (Nowozin et al.|2016]Sonderby et al.. 2016 Poole et al.]2016).\nThe ability to train extremely flexible generating functions, without explicitly computing likeli. hoods or performing inference, and while targeting more mode-seeking divergences as made GANs. extremely successful in image generation (Odena et al.. 2016, Salimans et al. 2016, :Radford et al. 2015), and image super resolution (Ledig et al.|2016). The flexibility of the GAN framework has. also enabled a number of successful extensions of the technique, for instance for structured predic tion (Reed et al.[2016a b [Odena et al.]2016), training energy based models (Zhao et al.2016), and combining the GAN loss with a mutual information loss (Chen et al.]2016).\n*Work done as a member of the Google Brain Residency program (g.co/brainresidency tWork completed as part of a Google Brain internship\nGoogle DeepMind"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In practice, however, GANs suffer from many issues, particularly during training. 
One common failure mode involves the generator collapsing to produce only a single sample or a small family of. very similar samples. Another involves the generator and discriminator oscillating during training. rather than converging to a fixed point. In addition, if one agent becomes much more powerful than. the other, the learning signal to the other agent becomes useless, and the system does not learn.. To train GANs many tricks must be employed, such as careful selection of architectures (Radford. et al.] 2015), minibatch discrimination (Salimans et al.]2016), and noise injection (Salimans et al. 2016 Sonderby et al.[|2016). Even with these tricks the set of hyperparameters for which training is. successful is generally very small in practice.\nOnce converged, the generative models produced by the GAN training procedure normally do not cover the whole distribution (Dumoulin et al.]2016f Che et al.]2016), even when targeting a mode covering divergence such as KL. Additionally, because it is intractable to compute the GAN training loss, and because approximate measures of performance such as Parzen window estimates suffer from major flaws (Theis et al.2016), evaluation of GAN performance is challenging. Currently human judgement of sample quality is one of the leading metrics for evaluating GANs. In practice this metric does not take into account mode dropping if the number of modes is greater than the number of samples one is visualizing. In fact, the mode dropping problem generally helps visual sample quality as the model can choose to focus on only the most common modes. These common modes correspond, by definition, to more typical samples. Additionally, the generative model is able to allocate more expressive power to the modes it does cover than it would if it attempted to cover all modes.\nMany optimization schemes, including SGD, RMSProp (Tieleman & Hinton2012), and Adai (Kingma & Ba]2014), consist of a sequence of differentiable updates to parameters. Gradients ca be backpropagated through unrolled optimization updates in a similar fashion to backpropagatic through a recurrent neural network. The parameters output by the optimizer can thus be includec in a differentiable way, in another objective (Maclaurin et al.]2015). This idea was first suggeste for minimax problems in (Pearlmutter & Siskind] 2008), while (Zhang & Lesser2010) provide a theoretical analysis and experimental results on differentiating through a single step of gradie ascent for simple matrix games. Differentiating through unrolled optimization was first scaled t deep networks in (Maclaurin et al.||2015), where it was used for hyperparameter optimization. 
More recently, (Belanger & McCallum 2015; Han et al. 2016; Andrychowicz et al. 2016) backpropagate through optimization procedures in contexts unrelated to GANs or minimax games.

In this work we address the challenges of unstable optimization and mode collapse in GANs by unrolling optimization of the discriminator objective during training.

f(θG, θD) = E_{x∼p_data}[log(D(x; θD))] + E_{z∼N(0,I)}[log(1 − D(G(z; θG); θD))]    (1)

Here x ∈ X is the data variable, z ∈ Z is the latent variable, p_data is the data distribution, the discriminator D(·; θD): X → [0, 1] outputs the estimated probability that a sample x comes from the data distribution, θD and θG are the discriminator and generator parameters, and the generator function G(·; θG): Z → X transforms a sample in the latent space into a sample in the data space.

θG* = argmin_{θG} max_{θD} f(θG, θD)    (2)
    = argmin_{θG} f(θG, θD*(θG)),    (3)
θD*(θG) = argmax_{θD} f(θG, θD)    (4)

For a fixed generator, the discriminator that maximizes Eq. 4 assigns each point the probability that it came from the data rather than the generator (Goodfellow et al., 2014),

D*(x) = p_data(x) / (p_data(x) + p_G(x)).    (5)

When the generator loss in Eq. 2 is rewritten directly in terms of p_G(x) and Eq. 5, rather than θG and θD*(θG), then it is similarly a smooth function of p_G(x). These smoothness guarantees are typically lost when D(x; θD) and G(z; θG) are drawn from parametric families. They nonetheless suggest that the true generator objective in Eq. 2 will often be well behaved, and is a desirable target for direct optimization.

Explicitly solving for the optimal discriminator parameters θD*(θG) for every update step of the generator G is computationally infeasible for discriminators based on neural networks. Therefore this minimax optimization problem is typically solved by alternating gradient descent on θG and ascent on θD.

The optimal solution θ* = {θG*, θD*} is a fixed point of these iterative learning dynamics. Additionally, if f(θG, θD) is convex in θG and concave in θD, then alternating gradient descent (ascent) trust region updates are guaranteed to converge to the fixed point, under certain additional weak assumptions (Juditsky et al., 2011). However in practice f(θG, θD) is typically very far from convex in θG and concave in θD, and updates are not constrained in an appropriate way. As a result GAN training suffers from mode collapse, undamped oscillations, and other problems detailed in Section 1.1. In order to address these difficulties, we will introduce a surrogate objective function f_K(θG, θD) for training the generator which more closely resembles the true generator objective f(θG, θD*(θG))."}, {"section_index": "3", "section_name": "2.2 UNROLLING GANS", "section_text": "A local optimum of the discriminator parameters θD*(θG) can be expressed as the fixed point of an iterative optimization procedure,

θD^0 = θD    (6)
θD^{k+1} = θD^k + η^k · df(θG, θD^k)/dθD^k    (7)
θD*(θG) = lim_{k→∞} θD^k    (8)

where η^k is the learning rate schedule. For clarity, we have expressed Eq. 7 as a full batch steepest gradient ascent equation. More sophisticated optimizers can be similarly unrolled. In our experiments we unroll Adam (Kingma & Ba, 2014).
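In an autodiff framework, the unrolled objective of Eqs. 6-9 can be prototyped in a few lines. The following PyTorch sketch is an illustration under our own simplifications (plain SGD rather than Adam is unrolled, and a tiny functional discriminator is used so its parameters can be swapped for the graph-carrying unrolled copies); it is not the authors' implementation.

```python
import torch

def D_forward(x, w):
    # Tiny one-hidden-layer discriminator written functionally, so that its
    # parameters can be replaced by unrolled (graph-carrying) copies.
    h = torch.tanh(x @ w[0] + w[1])
    return torch.sigmoid(h @ w[2] + w[3])

def f(g_out, x_real, w):
    # Minibatch estimate of f(theta_G, theta_D) from Eq. 1.
    eps = 1e-6
    return (torch.log(D_forward(x_real, w) + eps).mean()
            + torch.log(1 - D_forward(g_out, w) + eps).mean())

def unrolled_loss(g_out, x_real, w, K=5, eta=1e-2):
    # f_K(theta_G, theta_D) = f(theta_G, theta_D^K), Eqs. 6-9, unrolling SGD.
    w_k = [p.clone() for p in w]                         # theta_D^0 = theta_D
    for _ in range(K):
        grads = torch.autograd.grad(f(g_out, x_real, w_k), w_k,
                                    create_graph=True)
        w_k = [p + eta * g for p, g in zip(w_k, grads)]  # ascent step, Eq. 7
    return f(g_out, x_real, w_k)

# Toy usage on 1-D data; the "generator output" depends on a parameter theta_g.
torch.manual_seed(0)
w = [torch.randn(1, 16, requires_grad=True), torch.zeros(16, requires_grad=True),
     torch.randn(16, 1, requires_grad=True), torch.zeros(1, requires_grad=True)]
theta_g = torch.zeros(64, 1, requires_grad=True)
g_out = theta_g + 0.1 * torch.randn(64, 1)
loss_g = unrolled_loss(g_out, torch.randn(64, 1) + 2.0, w)
loss_g.backward()   # d f_K / d theta_G flows through all K unrolled D steps
print(theta_g.grad.norm())
```

Because each ascent step is built with create_graph=True, gradients of the final loss with respect to the generator flow back through all K discriminator updates - this is what distinguishes unrolling from simply taking extra discriminator steps.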
By unrolling for K steps, we create a surrogate objective for the update of the generator,

f_K(θG, θD) = f(θG, θD^K(θG, θD)).    (9)

When K = 0 this objective corresponds exactly to the standard GAN objective, while as K → ∞ it corresponds to the true generator objective function f(θG, θD*(θG)). By adjusting the number of unrolling steps K, we are thus able to interpolate between standard GAN training dynamics with their associated pathologies, and more costly gradient descent on the true generator loss.

The generator and discriminator parameter updates using this surrogate loss are

θG ← θG − η · df_K(θG, θD)/dθG    (10)
θD ← θD + η · df(θG, θD)/dθD.    (11)

For clarity we use full batch steepest gradient descent (ascent) with stepsize η above, while in experiments we instead use minibatch Adam for both updates. The gradient in Eq. 10 requires backpropagating through the optimization process in Eq. 7. A clear description of differentiation through gradient descent is given as Algorithm 2 in (Maclaurin et al., 2015), though in practice the use of an automatic differentiation package means this step does not need to be programmed explicitly. A pictorial representation of these updates is provided in Figure 1.

Figure 1: An illustration of the computation graph for an unrolled GAN with 3 unrolling steps. The generator update in Equation 10 involves backpropagating the generator gradient (blue arrows) through the unrolled optimization. Each step k in the unrolled optimization uses the gradients of f_k with respect to θD^k, as described in Equation 7 and indicated by the green arrows. The discriminator update in Equation 11 does not depend on the unrolled optimization (red arrow).

It is important to distinguish this from an approach suggested in (Goodfellow et al., 2014), that several update steps of the discriminator parameters should be run before each single update step for the generator. In that approach, the update steps for both models are still gradient descent (ascent) with respect to fixed values of the other model parameters, rather than the surrogate loss we describe in Eq. 9. Performing K steps of discriminator update between each single step of generator update corresponds to updating the generator parameters θG using only the first term in Eq. 12 below."}, {"section_index": "4", "section_name": "2.4 THE MISSING GRADIENT TERM", "section_text": "To better understand the behavior of the surrogate loss f_K(θG, θD), we examine its gradient with respect to the generator parameters θG,

df_K(θG, θD)/dθG = ∂f(θG, θD^K(θG, θD))/∂θG + ∂f(θG, θD^K(θG, θD))/∂θD^K(θG, θD) · dθD^K(θG, θD)/dθG.    (12)

Standard GAN training corresponds exactly to updating the generator parameters using only the first term in this gradient, with θD^K(θG, θD) being the parameters resulting from the discriminator update step. An optimal generator for any fixed discriminator is a delta function at the x to which the discriminator assigns highest data probability. Therefore, in standard GAN training, each generator update step is a partial collapse towards a delta function.

The second term captures how the discriminator would react to a change in the generator. It reduces the tendency of the generator to engage in mode collapse. For instance, the second term reflects that as the generator collapses towards a delta function, the discriminator reacts and assigns lower probability to that state, increasing the generator loss. It therefore discourages the generator from collapsing, and may improve stability.

As K → ∞, θD^K goes to a local optimum of f, where df/dθD^K = 0, and therefore the second term in Eq. 12 goes to 0 (Danskin, 1967). The gradient of the unrolled surrogate loss f_K(θG, θD) with respect to θG is thus identical to the gradient of the standard GAN loss f(θG, θD) both when K = 0 and when K → ∞, where we take K → ∞ to imply that in the standard GAN the discriminator is also fully optimized between each generator update. Between these two extremes, f_K(θG, θD) captures additional information about the response of the discriminator to changes in the generator.
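The effect of this second term can be sanity-checked on the smallest possible game. The following numpy experiment is our own toy illustration (not from the paper) of the bilinear game f(x, y) = x·y, a matching-pennies-style setting like those referenced below: plain alternating gradient steps orbit the equilibrium indefinitely, while a single unrolled step adds a damping term that contracts toward it.

```python
import numpy as np

# f(x, y) = x * y: x (the "generator") descends, y (the "discriminator") ascends.
eta = 0.1

def alternating(x, y, steps=500):
    for _ in range(steps):
        x = x - eta * y          # df/dx = y
        y = y + eta * x          # df/dy = x
    return x, y

def one_step_unrolled(x, y, steps=500):
    for _ in range(steps):
        # Surrogate f_1(x, y) = f(x, y + eta * df/dy) = x * (y + eta * x), so the
        # total derivative is y + 2*eta*x; the extra eta*x beyond the standard
        # y + eta*x is exactly the second term of Eq. 12.
        x = x - eta * (y + 2 * eta * x)
        y = y + eta * x
    return x, y

print(np.hypot(*alternating(1.0, 1.0)))        # stays O(1): a persistent orbit
print(np.hypot(*one_step_unrolled(1.0, 1.0)))  # shrinks toward the equilibrium (0, 0)
```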
GANs can be thought of as a game between the discriminator (D) and the generator (G). The agents take turns taking actions and updating their parameters until a Nash equilibrium is reached. The optimal action for D is to evaluate the probability ratio p_data(x) / (p_G(x) + p_data(x)) for the generator's move x (Eq. 5). The optimal generator action is to move its mass to maximize this ratio.

The initial move for G will be to move as much mass as its parametric family and update step permits to the single point that maximizes the ratio of probability densities. The action D will then take is quite simple. It will track that point, and to the extent allowed by its own parametric family and update step assign low data probability to it, and uniform probability everywhere else. This cycle of G moving and D following will repeat forever or converge depending on the rate of change of the two agents. This is similar to the situation in simple matrix games like rock-paper-scissors and matching pennies, where alternating gradient descent (ascent) with a fixed learning rate is known not to converge (Singh et al., 2000; Bowling & Veloso, 2002).

In the unrolled case, however, this undesirable behavior no longer occurs. Now G's actions take into account how D will respond. In particular, G will try to make steps that D will have a hard time responding to. This extra information helps the generator spread its mass to make the next D step less effective instead of collapsing to a point.

In principle, a surrogate loss function could be used for both D and G. In the case of 1-step unrolled optimization this is known to lead to convergence for games in which gradient descent (ascent) fails (Zhang & Lesser, 2010). However, the motivation for using the surrogate generator loss in Section 2.2, of unrolling the inner of two nested min and max functions, does not apply to using a surrogate discriminator loss. Additionally, it is more common for the discriminator to overpower the generator than vice-versa when training a GAN. Giving more information to G by allowing it to 'see into the future' may thus help the two models be more balanced.

In this section we demonstrate improved mode coverage and stability by applying this technique to five datasets of increasing complexity. Evaluation of generative models is a notoriously hard problem (Theis et al., 2016). As such the de facto standard in GAN literature has become sample quality as evaluated by a human and/or evaluated by a heuristic (Inception score, for example (Salimans et al., 2016)). While these evaluation metrics do a reasonable job capturing sample quality, they fail to capture sample diversity. In our first 2 experiments diversity is easily evaluated via visual inspection. In our later experiments this is not the case, and we will use a variety of methods to quantify coverage of samples. Our measures are individually strongly suggestive of unrolling reducing mode-collapse and improving stability, but none of them alone are conclusive. We believe that taken together, however, they provide extremely compelling evidence for the advantages of unrolling.
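For reference, the 2D dataset used in the first experiment below can be generated in a few lines. This sketch follows the description in Appendix A (8 Gaussians of standard deviation 0.02, means equally spaced on a circle of radius 2); the function name and defaults are ours.

```python
import numpy as np

def sample_ring_of_gaussians(n, modes=8, radius=2.0, std=0.02, rng=None):
    """2D mixture of `modes` Gaussians equally spaced on a circle (Appendix A)."""
    rng = rng or np.random.default_rng(0)
    angles = 2 * np.pi * rng.integers(0, modes, size=n) / modes
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return centers + std * rng.standard_normal((n, 2))

batch = sample_ring_of_gaussians(512)
print(batch.shape, batch.mean(axis=0))  # (512, 2), roughly centered at the origin
```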
When doing stochastic optimization, we must choose which minibatches to use in the unrolling updates in Eq. 7. We experimented with both a fixed minibatch and re-sampled minibatches for each unrolling step, and found it did not significantly impact the result. We use fixed minibatches for all experiments in this section.

We provide a reference implementation of this technique at github.com/poolio/unrolled_gan."}, {"section_index": "5", "section_name": "3.1 MIXTURE OF GAUSSIANS DATASET", "section_text": "To illustrate the impact of discriminator unrolling, we train a simple GAN architecture on a 2D mixture of 8 Gaussians arranged in a circle. For a detailed list of architecture and hyperparameters see Appendix A. Figure 2 shows the dynamics of this model through time. Without unrolling the generator rotates around the valid modes of the data distribution but is never able to spread out mass. When adding in unrolling steps G quickly learns to spread probability mass and the system converges to the data distribution.

In Appendix B we perform further experiments on this toy dataset. We explore how unrolling compares to historical averaging, and compares to using the unrolled discriminator to update the generator, but without backpropagating through the generator. In both cases we find that the unrolled objective performs better.

Figure 2: Unrolling the discriminator stabilizes GAN training on a toy 2D mixture of Gaussians dataset. Columns show a heatmap of the generator distribution after increasing numbers of training steps (0, 5k, 10k, 15k, 20k, 25k). The final column shows the data distribution. The top row shows training for a GAN with 10 unrolling steps. Its generator quickly spreads out and converges to the target distribution. The bottom row shows standard GAN training. The generator rotates through the modes of the data distribution. It never converges to a fixed distribution, and only ever assigns significant probability mass to a single data mode at once.

Figure 3: Unrolled GAN training increases stability for an RNN generator and convolutional discriminator trained on MNIST. The top row was run with 20 unrolling steps. The bottom row is a standard GAN, with 0 unrolling steps. Images are samples from the generator after the indicated number of training steps (10k, 20k, 50k, 100k).

To evaluate the ability of this approach to improve trainability, we look to a traditionally challenging family of models to train - recurrent neural networks (RNNs). In this experiment we try to generate MNIST samples using an LSTM (Hochreiter & Schmidhuber, 1997). MNIST digits are 28x28 pixel images. At each timestep of the generator LSTM, it outputs one column of this image, so that after 28 timesteps it has output the entire sample. We use a convolutional neural network as the discriminator. See Appendix C for the full model and training details. Unlike in all previously successful GAN models, there is no symmetry between the generator and the discriminator in this task, resulting in a more complex power balance. Results can be seen in Figure 3. Once again, without unrolling the model quickly collapses, and rotates through a sequence of single modes. Instead of rotating spatially, it cycles through proto-digit like blobs. When running with unrolling steps the generator disperses and appears to cover the whole data distribution, as in the 2D example.
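The discrete mode-collapse experiment reported in Table 1 below scores models by the number of label modes covered and the reverse KL divergence to a uniform distribution over the 1,000 classes. A sketch of that bookkeeping is given here; the upstream step that produces the labels (a pretrained MNIST classifier run on each color channel) is assumed to exist and is not shown.

```python
import numpy as np

def mode_stats(labels, n_modes=1000):
    """Modes covered and KL(model || uniform) from 3-digit class labels in [0, 1000)."""
    counts = np.bincount(labels, minlength=n_modes)
    covered = int((counts > 0).sum())
    p = counts / counts.sum()
    nz = p > 0
    kl = float(np.sum(p[nz] * np.log(p[nz] * n_modes)))  # KL to uniform 1/n_modes
    return covered, kl

labels = np.random.default_rng(0).integers(0, 1000, size=25600)  # stand-in labels
print(mode_stats(labels))  # near-uniform stand-in: ~1000 modes, KL near 0
```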
Table 1: Unrolled GANs cover more discrete modes when modeling a dataset with 1,000 data modes, corresponding to all combinations of three MNIST digits (10^3 digit combinations). The number of modes covered is given for different numbers of unrolling steps, and for two different architectures. The reverse KL divergence between model and data is also given. Standard error is provided for both measures.

GANs suffer from two different types of model collapse - collapse to a subset of data modes, and collapse to a sub-manifold within the data distribution. In these experiments we isolate both effects using artificially constructed datasets, and demonstrate that unrolling can largely rescue both types of collapse."}, {"section_index": "6", "section_name": "3.3.1 DISCRETE MODE COLLAPSE", "section_text": "To explore the degree to which GANs drop discrete modes in a dataset, we use a technique similar to one from (Che et al., 2016). We construct a dataset by stacking three randomly chosen MNIST digits, so as to construct an RGB image with a different MNIST digit in each color channel. This new dataset has 1,000 distinct modes, corresponding to each combination of the ten MNIST classes in the three channels.

We train a GAN on this dataset, and generate samples from the trained model (25,600 samples for all experiments). We then compute the predicted class label of each color channel using a pre-trained MNIST classifier. To evaluate performance, we use two metrics: the number of modes for which the generator produced at least one sample, and the KL divergence between the model and the expected data distribution. Within this discrete label space, a KL divergence can be estimated tractably between the generated samples and the data distribution over classes, where the data distribution is a uniform distribution over all 1,000 classes.

As presented in Table 1, as the number of unrolling steps is increased, both mode coverage and reverse KL divergence improve. Contrary to (Che et al., 2016), we found that reasonably sized models (such as the one used in Section 3.4) covered all 1,000 modes even without unrolling. As such we use smaller convolutional GAN models. Details on the models used are provided in Appendix E.

We observe an additional interesting effect in this experiment. The benefits of unrolling increase as the discriminator size is reduced. We believe unrolling effectively increases the capacity of the discriminator. The unrolled discriminator can better react to any specific way in which the generator is producing non-data-like samples. When the discriminator is weak, the positive impact of unrolling is thus larger."}, {"section_index": "7", "section_name": "3.3.2 MANIFOLD COLLAPSE", "section_text": "In addition to discrete modes, we examine the effect of unrolling when modeling continuous manifolds. To get at this quantity, we constructed a dataset consisting of colored MNIST digits. Unlike in the previous experiment, a single MNIST digit was chosen, and then assigned a single monochromatic color. With a perfect generator, one should be able to recover the distribution of colors used to generate the digits. We use colored MNIST digits so that the generator also has to model the digits, which makes the task sufficiently complex that the generator is unable to perfectly solve it.
The color of each digit is sampled from a 3D normal distribution. Details of this dataset are provided in Appendix F. We will examine the distribution of colors in the samples generated by the trained GAN. As will also be true in the CIFAR10 example in Section 3.4, the lack of diversity in generated colors is almost invisible using only visual inspection of the samples. Samples can be found in Appendix F.

Table 2: Unrolled GANs better model a continuous distribution. GANs are trained to model randomly colored MNIST digits, where the color is drawn from a Gaussian distribution. The JS divergence between the data and model distributions over digit colors is then reported, along with standard error in the JS divergence. More unrolling steps, and larger models, lead to better JS divergence.

Figure 4: Visual perception of sample quality and diversity is very similar for models trained with different numbers of unrolling steps. Actual sample diversity is higher with more unrolling steps. Each pane shows samples generated after training a model on CIFAR10 with 0, 1, 5, and 10 steps of unrolling."}, {"section_index": "8", "section_name": "3.4 IMAGE MODELING OF CIFAR10", "section_text": "By training with an unrolled discriminator, we expect to generate more diverse samples which more closely resemble the underlying data distribution. We introduce two techniques to examine sample diversity: inference via optimization, and pairwise distance distributions.

In order to recover the color the GAN assigned to the digit, we used k-means with 2 clusters, to pick out the foreground color from the background. We then performed this transformation for both the training data and the generated images. Next we fit a Gaussian kernel density estimator to both distributions over digit colors. Finally, we computed the JS divergence between the model and data distributions over colors. Results can be found in Table 2 for several model sizes. Details of the models are provided in Appendix F.

In general, the best performing models are unrolled for 5-10 steps, and larger models perform better than smaller models. Counter-intuitively, taking 1 unrolling step seems to hurt this measure of diversity. We suspect that this is due to it introducing oscillatory dynamics into training. Taking more unrolling steps however leads to improved performance with unrolling.

Here we test our technique on a more traditional convolutional GAN architecture and task, similar to those used in (Radford et al., 2015; Salimans et al., 2016). In the previous experiments we tested models where the standard GAN training algorithm would not converge. In this section we improve a standard model by reducing its tendency to engage in mode collapse. We ran 4 configurations of this model, varying the number of unrolling steps to be 0, 1, 5, or 10. Each configuration was run 5 times with different random seeds. For full training details see Appendix D. Samples from each of the 4 configurations can be found in Figure 4. There is no obvious difference in visual quality across these model configurations. Visual inspection however provides only a poor measure of sample diversity.

Table 3: GANs trained with unrolling are better able to match images in the training set than standard GANs, likely due to mode dropping by the standard GAN.
Results show the MSE between training images and the best reconstruction for a model with the given number of unrolling steps. The fraction of training images best reconstructed by a given model is given in the final column. The best reconstruction is found by optimizing the latent representation z to produce the closest matching pixel output G(z; θG). Results are averaged over all 5 runs of each model with different random seeds."}, {"section_index": "9", "section_name": "3.4.1 INFERENCE VIA OPTIMIZATION", "section_text": "Since likelihood cannot be tractably computed, over-fitting of GANs is typically tested by taking samples and computing the nearest-neighbor images in pixel space from the training data (Goodfellow et al., 2014). We will do the reverse, and measure the ability of the generative model to generate images that look like specific samples from the training data. If we did this by generating random samples from the model, we would need an exponentially large number of samples. We instead treat finding the nearest neighbor x_nearest to a target image x_target as an optimization task,

z_nearest = argmin_z ||G(z; θG) − x_target||²    (13)
x_nearest = G(z_nearest; θG).    (14)

We apply this technique to each of the models trained. We optimize with 3 random starts using LBFGS, which is the optimizer typically used in similar settings such as style transfer (Johnson et al., 2016; Champandard, 2016). Results comparing average mean squared errors between x_nearest and x_target in pixel space can be found in Table 3. In addition we compute the percent of images for which a certain configuration achieves the lowest loss when compared to the other configurations.

In the zero step case, there is poor reconstruction and less than 1% of the time does it obtain the lowest error of the 4 configurations. Taking 1 unrolling step results in a significant improvement in MSE. Taking 10 unrolling steps results in more modest improvement, but continues to reduce the reconstruction MSE."}, {"section_index": "10", "section_name": "3.4.2 PAIRWISE DISTANCES", "section_text": "A second complementary approach is to compare statistics of data samples to the corresponding statistics for samples generated by the various models. One particularly simple and relevant statistic is the distribution over pairwise distances between random pairs of samples. In the case of mode collapse, greater probability mass will be concentrated in smaller volumes, and the distribution over inter-sample distances should be skewed towards smaller distances. We sample random pairs of images from each model, as well as from the training data, and compute histograms of the l2 distances between those sample pairs. As illustrated in Figure 6, the standard GAN, with zero unrolling steps, has its probability mass skewed towards smaller l2 intersample distances, compared

To visually see this, we compare the result of the optimization process for 0, 1, 5, and 10 step configurations in Figure 5. To select for images where differences in behavior are most apparent, we sort the data by the absolute value of the fractional difference in MSE between the 0 and 10 step models, |ℓ_0step − ℓ_10step| / (½ (ℓ_0step + ℓ_10step)). This highlights examples where either the 0 or 10 step model cannot accurately fit the data example but the other can. In Appendix G we show the same comparison for models initialized using different random seeds. Many of the zero step images are fuzzy and ill-defined, suggesting that these images cannot be generated by the standard GAN generative model, and come from a dropped mode. As more unrolling steps are added, the outlines become more clear and well defined - the model covers more of the distribution and thus can recreate these samples.
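A sketch of this inversion procedure in PyTorch follows; `G` is assumed to be a trained generator, and the paper's 3 random LBFGS starts are mirrored. This is an illustration of Eqs. 13-14, not the authors' code.

```python
import torch

def reconstruct(G, x_target, z_dim, restarts=3, steps=50):
    """x_nearest = G(argmin_z ||G(z) - x_target||^2), via LBFGS with restarts."""
    best, best_err = None, float("inf")
    for _ in range(restarts):
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.LBFGS([z], max_iter=steps)

        def closure():
            opt.zero_grad()
            err = ((G(z) - x_target) ** 2).mean()
            err.backward()
            return err

        opt.step(closure)
        with torch.no_grad():
            err = ((G(z) - x_target) ** 2).mean().item()
        if err < best_err:
            best, best_err = G(z).detach(), err
    return best, best_err

# Toy usage with a stand-in "generator" (a fixed random projection):
W = torch.randn(8, 4)
G = lambda z: torch.tanh(z @ W)
x_hat, err = reconstruct(G, torch.zeros(1, 4), z_dim=8)
print(err)
```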
In Appendix G we show the same comparison for. models initialized using different random seeds. Many of the zero step images are fuzzy and ill-. defined suggesting that these images cannot be generated by the standard GAN generative model,. and come from a dropped mode. As more unrolling steps are added, the outlines become more clear. and well defined - the model covers more of the distribution and thus can recreate these samples\n0.026 0.0154 0.016 0.0084 0.0212 0.0136 0.0133 0.0078 0.0326 0.0146 0.0158 0.0107 0.0158 0.0108 0.0103 0.006 0.0323 0.0103 0.0158 0.0108 0.0612 0.027 0.0317 0.0239 0.0327 0.0193 0.0184 0.0113 0.0254 0.0083 0.0128 0.0102\nFigure 5: Training set images are more accurately reconstructed using GANs trained with unrolling. than by a standard (O step) GAN, likely due to mode dropping by the standard GAN. Raw data is on the left, and the optimized images to reach this target follow for 0, 1, 5, and 10 unrolling steps.. The reconstruction MSE is listed below each sample. A random 1280 images where selected from the training set, and corresponding best reconstructions for each model were found via optimiza. tion. Shown here are the eight images with the largest absolute fractional difference between GANs. trained with 0 and 10 unrolling steps.\nto real data. As the number of unrolling steps is increased, the histograms over intersample distances increasingly come to resemble that for the data distribution. This is further evidence in support of. unrolling decreasing the mode collapse behavior of GANs.."}, {"section_index": "11", "section_name": "4 DISCUSSION", "section_text": "In this work we developed a method to stabilize GAN training and reduce mode collapse by definin, the generator objective with respect to unrolled optimization of the discriminator. We then demon. strated the application of this method to several tasks, where it either rescued unstable training, o1. reduced the tendency of the model to drop regions of the data distribution..\nThe main drawback to this method is computational cost of each training step, which increases linearly with the number of unrolling steps. There is a tradeoff between better approximating the true generator loss and the computation required to make this estimate. Depending on the architecture, one unrolling step can be enough. In other more unstable models, such as the RNN case, more are needed to stabilize training. We have some initial positive results suggesting it may be sufficient to further perturb the training gradient in the same direction that a single unrolling step perturbs it. While this is more computationally efficient, further investigation is required.\nThe method presented here bridges some of the gap between theoretical and practical results for. training of GANs. We believe developing better update rules for the generator and discriminator is an important line of work for GAN training. In this work we have only considered a small fraction of the design space. For instance, the approach could be extended to unroll G when updating D. as well - letting the discriminator react to how the generator would move. It is also possible to unroll sequences of G and D updates. This would make updates that are recursive: G could react to. 
"}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Laurent Dinh, David Dohan, Vincent Dumoulin, Liam Fedus, Ishaan Gulrajani, Julian Ibarz, Eric Jang, Matthew Johnson, Marc Lanctot, Augustus Odena, Gabriel Pereyra, Colin Raffel, Sam Schoenholz, Ayush Sekhari, Jon Shlens, and Dale Schuurmans for insightful conversation, as well as the rest of the Google Brain Team.

Figure 6: As the number of unrolling steps in GAN training is increased, the distribution of pairwise distances between model samples more closely resembles the same distribution for the data. Here we plot histograms of pairwise distances between randomly selected samples. The red line gives pairwise distances in the data, while each of the five blue lines in each plot represents a model trained with a different random seed. The vertical lines are the medians of each distribution.

Jorg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional helmholtz machines. arXiv preprint arXiv:1506.03877, 2015.

Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate. Artificial Intelligence, 136(2):215-250, 2002.

Alex J. Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016.

Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. arXiv preprint arXiv:1612.02136, 2016.

Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine. Neural computation, 7(5):889-904, 1995.

Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. Alternating back-propagation for generator network. 2016. URL https://arxiv.org/abs/1606.08571.

Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448-456, 2015. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html.

Justin Johnson, Alexandre Alahi, and Fei-Fei Li. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.

Anatoli Juditsky, Arkadi Nemirovski, et al. First order methods for nonsmooth convex large-scale optimization, i: general purpose methods.
Optimization for Machine Learning, pp. 121-148 2011.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprin arXiv:1412.6980. 2014\nDiederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. 2016.\nTejas D. Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network. arXiv preprint arXiv:1503.03167, 2015.\nDougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimiza tion through reversible learning, 2015.\nAnh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing. the preferred inputs for neurons in neural networks via deep generator networks. arXiv preprint arXiv:1605.09304, 2016.\nSebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural sampler using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.\nAugustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxil arv classifier oans rXiv:1610.09585.2016\nBarak A. Pearlmutter and Jeffrey Mark Siskind. Reverse-mode ad in a functional framework. Lambda the ultimate backpropagator. ACM Trans. Program. Lang. Syst., 30(2):7:1-7:36, March. 2008. ISSN 0164-0925. doi: 10.1145/1330017.1330018. URLhttp://doi.acm.org/10. 1145/1330017.1330018\nBen Poole, Alexander A Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. Improved generato objectives for gans. arXiv preprint arXiv:1612.02780, 2016.\nAlec Radford. Luke Metz, and Soumith Chintala. Unsupervised representation learning with dee convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.\nScott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learn ing what and where to draw. In NIPS, 2016a..\nDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning. Citeseer, 2014.\nTim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016\nKaren Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Vi sualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.\nSatinder Singh, Michael Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in general-sum games. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence, pp. 541-548. Morgan Kaufmann Publishers Inc., 2000.\nCasper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortised map inference for image super-resolution, 2016. URL https://arxiv.org/abs/1610. 0 44 90v1 L. Theis and M. Bethge. Generative image modeling using spatial lstms. In Advances in Neu ral Information Processing Systems 28, Dec 2015. URLhttp: //arxiv.org/abs/1506. 0 3 4 7 8 / L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In In ternational Conference on Learning Representations, Apr 2016. URL http: / /arxiv. org/ abs/1511.01844\nT. Tieleman and G. Hinton. Lecture 6.5-RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012\nAaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Kc ray Kavukcuoglu. 
Conditional image generation with pixelcnn decoders. arXiv preprir. arXiv:1606.05328, 2016b.\nPascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol Stacked denoising autoencoders: Learning useful representations in a deep network with a local. denoising criterion. J. Mach. Learn. Res., 11:3371-3408, December 2010. ISsN 1532-4435 URLhttp://dl.acm.0rg/citation.cfm?id=1756006.1953039\nJason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neura networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015\nJunbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network arXiv preprint arXiv:1609.03126, 2016\nJun- Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipula. tion on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016."}, {"section_index": "13", "section_name": "Appendix", "section_text": "Network architecture and experimental details for the experiment in Section[3.1|are as follows.\nThe dataset is sampled from a mixture of 8 Gaussians of standard deviation O.02. The means are equally spaced around a circle of radius 2.\nThe generator network consists of a fully connected network with 2 hidden layers of size 128 witl relu activations followed by a linear projection to 2 dimensions. All weights are initialized to be. orthogonal with scaling of O.8..\nThe discriminator network first scales its input down by a factor of 4 (to roughly scale to (-1,1)) followed by 1 layer fully connected network with relu activations to a linear layer to of size 1 to act as the logit.\nThe generator minimizes Lg = log(D(x)) + log(1 - D(G(z))) and the discriminator minimizes Lp = -log(D(x)) - log(1 D(G(z))) where x is sampled from the data distribution and z ~ N(0, I256). Both networks are optimized using Adam (Kingma & Ba] 2014) with a learning rate of 1e-4 and 1 =0.5.\nThe network is trained by alternating updates of the generator and the discriminator. One step consists of either G or D updating.\nFirst, we looked at taking an ensemble of the last N steps, as shown in Figure|App.\n1 Fnnsnnes Nuqwer 20 50 0 5000 10000 15000 20000 25000 30000 35000 40000 45000 50000 C+\n1 es 5 er 20 qwnN 50 0 5000 10000 15000 20000 25000 30000 35000 40000 45000 50000 Update Steps\nFigure App.1: Historical averaging does not visibly increase stability on the mixture of Gaussians task. Each row corresponds to an ensemble of discriminators which consists of the indicated number of immediately preceding discriminators. The columns correspond to different numbers of training. Steps.\nTo further explore this idea, we ran experiments with an ensemble of 5 discriminators, but with different periods between replacing discriminators in the ensemble. For example, if I sample at a rate of 100, it would take 500 steps to replace all 5 discriminators. Results can be seen in Figure\nWe observe that given longer and longer time delays, the model becomes less and less stable. We hypothesize that this is due to the initial shape of the discriminator loss surface. When training, the discriminator's estimates of probability densities are only accurate on regions where it was trained When fixing this discriminator, we are removing the feedback between the generator exploitation\nAnother comparison we looked at was with regard to historical averaging based approaches. Re. 
cently similarly inspired approaches have been used in (Salimans et al.|2016) to stabilize training For our study, we looked at taking an ensemble of discriminators over time.\nFigure App.2: Introducing longer time delays between the discriminator ensemble results in insta bility and probability distributions that are not in the window being visualized. The x axis is the number of weight updates and the y axis is how many steps to skip between discriminator update. when selecting the ensemble of 5 discriminators.\nand the discriminators ability to move. As a result, the generator is able to exploit these fixed area of poor performance for older discriminators in the ensemble. New discriminators (over)compensate for this, leading the system to diverge"}, {"section_index": "14", "section_name": "B.2 EFFECTS OF THE SECOND GRADIENT", "section_text": "A second factor we analyzed is the effect of backpropagating the learning signal through the un. rolling in Equation [12 We can turn on or off this backpropagation through the unrolling by in. troducing stop-gradient calls into our computation graph between each unrolling step. With the. stop-gradient in place, the update signal corresponds only to the first term in Equation [12 We looked at 3 configurations: without stop gradients; vanilla unrolled GAN, with stop gradients; and with stop gradients but taking the average over the k unrolling steps instead of taking the final value Results can be see in FigureApp.3\nWe initially observed no difference between unrolling with and without the second gradient, as both required 3 unrolling steps to become stable. When the discriminator is unrolled to convergence. the second gradient term becomes zero. Due to the simplicity of the problem, we suspect that the discriminator nearly converged for every generator step, and the second gradient term was thus. irrelevant.\nTo test this, we modified the dynamics to perform five generator steps for each discriminator update Results are shown in Figure[App.4 With the discriminator now kept out of equilibrium, successfu training can be achieved with half as many unrolling steps when using both terms in the gradien than when only including the first term.\nThe generator first scales the 256D noise vector through a 256 unit fully connected layer with relu activation. This is then fed into the initial state of a 256D LSTM(Hochreiter & Schmidhuber!1997 that runs 28 steps corresponding to the number of columns in MNIST. The resulting sequence of ac- tivations is projected through a fully connected layer with 28 outputs with a tanh activation function All weights are initialized via the \"Xavier\"' initialization (Glorot & Bengio] 2010). The forget bias on the LSTM is initialized to 1.\nThe discriminator network feeds the input into a Convolution(16, stride=2) followed by a Convo lution(32. stride=2) followed by Convolution(32, stride=2). All convolutions have stride 2. As ir (Radford et al.[2015) leaky rectifiers are used with a 0.3 leak. Batch normalization is applied after each layer (Ioffe & Szegedy2015). 
[Figure App.3 panels: "Unrolled GAN" and "Unrolled GAN without second gradient"; rows show 0, 1, 3, 10, and 20 unrolling steps over 50,000 update steps.]
Figure App.3: If the discriminator remains nearly at its optimum during learning, then performance is nearly identical with and without the second gradient term in Equation 12. As shown in Figure App.4, when the discriminator lags behind the generator, backpropagating through unrolling aids convergence.
The generator network minimizes LG = -log(D(G(z))) and the discriminator minimizes LD = -log(D(x)) - log(1 - D(G(z))). Both networks are trained with Adam (Kingma & Ba, 2014) with learning rates of 1e-4 and β1 = 0.5. The network is trained by alternating updates of the generator and the discriminator for 150k steps, where one step consists of just 1 network update.
"}, {"section_index": "15", "section_name": "D CIFAR10/MNIST TRAINING DETAILS", "section_text": "The network architectures for the discriminator, generator, and encoder are as follows. All convolutions have a kernel size of 3x3, with batch normalization and leaky ReLUs with a 0.3 leak.
The generator network is defined as:

layer                    number outputs   stride
Input: z ~ N(0, I256)
Fully connected          4*4*512
Reshape to image         4x4x512
Transposed Convolution   256              2
Transposed Convolution   128              2
Transposed Convolution   64               2
Transposed Convolution   1 or 3           1

The discriminator network is defined as:

layer                    number outputs   stride
Input: x ~ pdata or G(z)
Convolution              64               2
Convolution              128              2
Convolution              256              2
Flatten
Fully connected          1

The generator network minimizes LG = -log(D(G(z))) and the discriminator minimizes LD = -log(D(x)) - log(1 - D(G(z))). The networks are trained with Adam with a generator learning rate of 1e-4 and a discriminator learning rate of 2e-4. The network is trained by alternating updates of the generator and the discriminator for 100k steps, where one step consists of just 1 network update.
[Figure App.4 panel: "Unrolled GAN with 5 G steps per D without second gradient"; rows show 0, 1, 10, and 20 unrolling steps over 50,000 update steps.]
Figure App.4: Backpropagating through the unrolling process aids convergence when the discriminator does not fully converge between generator updates. When taking 5 generator steps per discriminator step, unrolling greatly increases stability, requiring only 5 unrolling steps to converge. Without the second gradient it requires 10 unrolling steps. Also see Figure App.3.
The discriminator network is parametrized by a size X and is defined as follows; in our tests, we used X of 1/4 and 1/2. [Only fragments of the layer-size table survive: 4*4*64, 64, 32 (stride 2), 16 (stride 2), 8 (stride 2), 3 (stride 1).]
"}, {"section_index": "16", "section_name": "F.1 DATASET", "section_text": "To generate this dataset we first took the MNIST digit, I, scaled between 0 and 1. For each image, we sample a color, C, normally distributed with mean=0 and std=0.5. To generate a colored digit between (-1, 1) we do I * C + (I - 1). Finally, we add a small amount of pixel-independent noise sampled from a normal distribution with std=0.2, and the resulting values are clipped between (-1, 1). When visualized, this generates images and samples that can be seen in Figure App.5. Once again, it is very hard to visually see differences in sample diversity when comparing the 128- and the 512-sized models.
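The recipe above is straightforward to reproduce in NumPy; the only assumption beyond the text is that the sampled color C is a per-image RGB triple.

```python
import numpy as np

def color_digits(digits, rng=np.random.default_rng()):
    # digits: (N, 28, 28) MNIST images already scaled to [0, 1].
    n = digits.shape[0]
    i = digits[..., None]                                # (N, 28, 28, 1)
    c = rng.normal(0.0, 0.5, size=(n, 1, 1, 3))          # one RGB color per image
    colored = i * c + (i - 1.0)                          # I*C + (I - 1), in (-1, 1)
    colored += rng.normal(0.0, 0.2, size=colored.shape)  # pixel-independent noise
    return np.clip(colored, -1.0, 1.0)
```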
Figure App.5: Right: samples from the data distribution. Middle: samples from the 1/4-size model with 0 look-ahead steps (worst diversity). Left: samples from the 1/1-size model with 10 look-ahead steps (most diversity).
The models used in this section are parametrized by a variable X to control capacity. A value of X=1 is the same architecture used in the cifar10 experiments. We used 1/4, 1/2, and 1 as these values.
The generator network is defined as:
The discriminator network is defined as:
More examples of model-based optimization. We performed 5 runs with different seeds for each of the unrolling-steps configurations. Below are comparisons for each run index. Ideally this would be a many-to-many comparison, but for space efficiency we grouped the runs by the index in which they were run.
[Figures App.6-App.10 are annotated with per-sample values under the column headers Data, 0 step, 1 step, 5 step, and 10 step; the raw numeric annotations from the figures are not reproduced here.]
Figure App.6: Samples from 1/5 with different random seeds.
Figure App.7: Samples from 2/5 with different random seeds.
Figure App.8: Samples from 3/5 with different random seeds.
Figure App.9: Samples from 4/5 with different random seeds.
Figure App.10: Samples from 5/5 with different random seeds."}]
rJPcZ3txx
[{"section_index": "0", "section_name": "FASTER CNNS WITH DIRECT SPARSE CONVOLUTIONS AND GUIDED PRUNING", "section_text": "Jongsoo Park"}, {"section_index": "1", "section_name": "Hai Li3", "section_text": "Yiran Chen\n1Intel Labs, 2Department of Electrical and Computing Engineering, University of Pittsburgl 3Department of Electrical and Computer Engineering, Duke University 1 {jongsoo.park, sheng.r.li, peter.tang, pradeep.dubey} @intel.com, {wew57} @pitt.edu, 3 { yiran.chen, hai.li} @ duke.edu\nPhenomenally successrul in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even. supercomputers. The number of parameters needed in CNNs, however, are often. large and undesirable. Consequently, various methods have been developed to prune. a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN's size considerably, it. does not improve inference speed noticeably as the compute heavy parts lie in. convolutions. Pruning CNNs in a way that increase inference speed often imposes. specific sparsity structures, thus limiting the achievable sparsity levels.. We present a method to realize simultaneously size economy and speed improve-. ment while pruning CNNs. Paramount to our success is an efficient general. sparse-with-dense matrix multiplication implementation that is applicable to convo-. ution of feature. with kernels of arbitrar. snars1tvnatte mnlementing\nSpecic sparsit aoicspalsllyleve We present a method to realize simultaneously size economy and speed improve. ment while pruning CNNs. Paramount to our success is an efficient general. sparse-with-dense matrix multiplication implementation that is applicable to convo-. lution of feature maps with kernels of arbitrary sparsity patterns. Complementing. this, we developed a performance model that predicts sweet spots of sparsity levels. for different layers and on different computer architectures. Together, these two. allow us to demonstrate 3.1-7.3 convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum. from mobile devices to supercomputers."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Due to the success of deep neural networks in a broad set of practical and even critical artificial intel ligence tasks, they are now widely deployed in a spectrum of platforms: smart phones, autonomous cars, data center servers, and even supercomputers. While suitably designed and trained CNNs can be. powerful, they are often large - requiring many parameters (e.g., the celebrated AlexNet (Krizhevsky. et al.] 2012) has 60 millions). That large neural network models incur cost in terms of memory,. energy, and inference speed is easy to see..\nThis motivated a line of research (Han et al.(2015] 2016b);Guo et al.(2016);[Denton et al.(2014) to name a few) that tries to prune the parameters after a CNN design is trained and proved useful A common thread is to post-process a trained CNN. Post-processing may consist of retraining with sparsity inducing regularization or of approximating tensors of parameters via tensor factorization These methods reduce the size of CNNs significantly while preserving inference accuracy. Neverthe less, the inference speed gains in pruned networks is not nearly as impressive as the size reduction. 
In this sense, the benefits of CNN pruning seem not fully realized.
While seemingly unintuitive, the fact that significantly pruned CNNs do not run significantly faster can be easily explained. First, fully connected (fc) layers usually contain the bulk of the parameters, while convolutional (conv) layers consume the bulk of the computation time. This property shows that reducing the size of just the fc layers will readily lead to a meaningful reduction in size, as in Han et al. (2016b); Guo et al. (2016), but little speed improvement.
We view sparse methods differently. Convolutions in CNNs involve multiple channels and thus offer much higher data reuse than typical sparse matrix operations in scientific computing. Specifically, we present a highly efficient direct sparse convolution design formulated as sparse-matrix-dense-matrix multiplication with the dense matrix columns generated on-the-fly from a single column vector. In addition to being highly efficient, this sparse convolution design is friendly to convolution kernels with arbitrary sparsity patterns. We call this element-wise sparsity to distinguish it from the group-wise sparsity mentioned previously. As shown later on, accepting element-wise sparsity significantly increases the achievable sparsity level.
Complementing our sparse convolution design, we formulate a performance model to elucidate when and how best to use sparse convolutions on different computer architectures and at different CNN layers. Our formulation follows the roofline model (Williams et al., 2009). In particular, our model suggests (correctly) that sparse convolution can improve inference speed even with a moderate sparsity level of around 70%. In addition, the model provides upper and lower bounds of sparsity levels that can contribute to speed improvements. Sparsity higher than the upper bound offers no further speed improvement, and sparsity lower than the lower bound can in fact slow down inference rather than accelerating it.
Combining the sparse convolution design with the performance model allows us to prune a CNN in a co-design manner, with our proposed new pruning algorithm, Guided Sparsity Learning (GSL). As illustrated later, we can adjust sparsity targets precisely at different layers so as to maximize inference speed, best preserve accuracy, and minimize a network's size. In some cases, particular layers are identified as best not pruned at all due to no potential speedup; leaving them unchanged gives other layers more room for gainful pruning in terms of size and speed.
Our paper makes the following contributions:
- A high-performance sparse convolution design that takes advantage of arbitrary sparsity patterns and outperforms dense convolution even with a moderate sparsity.
- A general performance model that (1) projects speedups over dense convolutions for varying levels/types of sparsity and different computing platforms and (2) provides training guidelines for precisely targeting layers and sparsity ranges with the potential to accelerate inference.
- Guided Sparsity Learning (GSL), the first pruning algorithm fusing the awareness of speedup potential into sparsity learning, and its application to AlexNet and GoogLeNet. In particular, in GoogLeNet, we prune out more than 80% of the parameters of all 5x5/3x3 conv layers and fc layers with no accuracy drop.
- An optimized sparse convolution implementation (http://github.com/IntelLabs/SkimCaffe) that
provides 7.3x, 3.4x, and 3.1x speedups of convolution layers in AlexNet over dense methods on Intel Atom, Xeon, and Knights Landing processors, respectively, with no accuracy drop. In particular, this paper is one of the first evaluations of Xeon Phi processors on deep learning algorithms.
The crux of speed improvement thus lies in actual fast convolution of sparse kernels with feature maps (not just floating-point operation reduction), which is a challenging problem. It is well known in the field of numerical linear algebra that the performance of sparse matrix operations is typically memory-bandwidth bound. Direct application of sparse matrix operations to compute the conv layers when the kernels are sparse will likely result in sub-optimal speed gains. This concern about the low efficiency of sparse operations is also discussed in the design of GoogLeNet (Szegedy et al., 2015). We will term methods that work directly with sparse data structures "sparse methods". Alternatively to sparse methods, "dense methods" gather data in a way that allows the actual convolution to be performed by dense linear algebra functions such as GEMM. An example is found in (Lebedev & Lempitsky, 2015; Wen et al., 2016), which produces some group-wise sparsity patterns that facilitate the use of existing and highly tuned dense matrix computation library functions to perform the convolutions. However, imposing sparsity patterns limits the sparsity level that would otherwise be achievable had arbitrary patterns been allowed. We note that high compression in the conv layers is gaining importance, as these layers constitute a much larger percentage of parameters in recent networks such as GoogLeNet (Szegedy et al., 2015) and ResNet (He et al., 2015).
Figure 1: Conceptual view of the direct sparse convolution algorithm. Computation of the output value at the (y, x)th position of the nth output channel is highlighted: the sparse vector of non-zero weights for that channel is dot-multiplied with a "virtual" dense vector read from the C x Hin x Win input tensor.
The rest of the paper is organized as follows. Section 2 presents the details of our sparse convolution design, the formulation of the performance model, the Guided Sparsity Learning (GSL) pruning algorithm, and how they are combined to prune and accelerate CNNs. Section 3 demonstrates the effectiveness of these developments on AlexNet and GoogLeNet on a variety of platforms. Section 4 discusses related works and reviews the state of the art. Section 5 concludes and outlines a few next steps.
As explained previously, pruning CNN models does not yet benefit inference speed as much as model size reduction. This section first presents our efficient direct sparse convolution design that remedies this situation significantly. We then develop a performance model that projects speedup over different sparsity levels and on different processor architectures. The model guides our speedup-aware training method, Guided Sparsity Learning (GSL).
"}, {"section_index": "4", "section_name": "2.1 DIRECT SPARSE CONVOLUTION", "section_text": "A sparse convolution for all output positions across all output channels can eventually be considered as a virtual sparse-matrix-dense-matrix multiplication (SpMDM), as described in the following. Consider a bank of N filters, each of size R x S, against an Hin x Win feature with C input channels.
We denote the filter bank as a 4-mode tensor W with size N x C x R x S, the input feature as a 3-mode tensor I with size C x Hin x Win, and the output feature as a 3-mode tensor O with size N x Hout x Wout. The output value at the (y, x)th position of the nth output channel is computed by

O(n, y, x) = \sum_{c=0}^{C-1} \sum_{r=0}^{R-1} \sum_{s=0}^{S-1} W(n, c, r, s) I(c, y+r, x+s),   (1)

which is a dot-product of two 3D tensors, as shown in Figure 1. This can be treated as a vector dot-product by first vectorizing the 3D subtensor of W corresponding to the nth output channel, then vectorizing I (denoted as vec(I)), and finally stretching the first vector to match the dimension of the two vectors. When W is sparse, this vector dot-product becomes a sparse-vector-dense-vector dot-product. Consider flattening all dimensions except the first one of W into a sparse matrix W(1) (i.e., the mode-1 matricization of W as in Kolda & Bader (2009)), with its row vectors stretched to match the dimension of vec(I). O(n,y,x) is then the dot-product between the nth row of W(1) and vec(I). Subsequently, the values at the same given (y,x)th position of all N output channels can be computed as

O(1)(:, y·Wout + x) = W(1) vec(I_{y,x}),   (2)

where I_{y,x} denotes the tensor I with its last two dimensions shifted by (y, x). The values at different output positions can be computed as a sparse-matrix-dense-matrix multiplication (SpMDM), where the columns of the dense matrix are actually the same vector vec(I) but with different offsets. In order to save bandwidth usage, we operate with a virtual dense matrix, I_virtual, whose columns are generated on the fly by adjusting the indices through which we access vec(I).
Figure 2: Sparse convolution pseudo code. Matrix W has compressed sparse row (CSR) format, where rowptr[n] points to the first non-zero weight of the nth output channel. For the jth non-zero weight at (n,c,r,s), W.colidx[j] contains the offset to the (c,r,s)th element of tensor in, which is pre-computed by the layout function as f(c,r,s). If in has CHW format, f(c,r,s) = (c·Hin + r)·Win + s. The "virtual" dense matrix is formed on-the-fly by shifting in by (0,y,x).
Using the virtual dense matrix essentially skips the lowering step used in standard frameworks such as Caffe, and, therefore, we call our method direct sparse convolution, to distinguish it from sparse convolution with lowering such as the method used in Liu et al. (2015). The lowering approach replicates the input feature multiple times, significantly reducing arithmetic intensity. The lowering process has demonstrated overhead for dense convolution, as in Hadjis et al. (2015); Chintala (2015), and is particularly problematic for sparse convolution, whose intensity is already lower than that of its dense counterpart. Figure 3 demonstrates the advantage of our direct sparse convolution, using the performance model that will be developed in Section 2.2, where our direct sparse convolution significantly outperforms lowering-based methods at a high level of sparsity.
Even though direct sparse convolution may seem conceptually more complicated than the usual SpMDM or the lowering-based methods, it can be concisely expressed in the pseudo code shown in Figure 2. To decouple from a specific layout of tensor I, the pseudo code uses a layout function f such that f(c,y,x) maps to the offset corresponding to the (c,y,x)th element of I (we assume f(c,y+r,x+s) = f(c,y,x) + f(0,r,s)). For example, in CHW layout, f(c,y,x) = (c·Hin + y)·Win + x.
In convolutional layers in CNNs, an input channel is reused against multiple output channels and vice versa, and there is also ample reuse out of an input channel due to overlapping between dot-products, especially for a large filter size and a small stride. Therefore, the arithmetic intensity of sparse convolution can be significantly higher than that of typical sparse-matrix computations such as SpMV, thus leading to high compute efficiency. Our optimized implementation1 fully takes advantage of the reuse, applying loop tiling to both input and output channels, with column blocking (Buluc et al., 2009) applied to the weight sparse matrix.
1 https://github.com/IntelLabs/SkimCaffe/blob/intel_scnn/include/caffe/util/sconv.hpp
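To make the algorithm described by Figure 2 concrete, here is a functionally equivalent sketch in Python/SciPy, assuming unit stride, 'valid' padding, and CHW layout (so f(c, r, s) = (c·Hin + r)·Win + s); the optimized kernel is the sconv.hpp implementation referenced in the footnote.

```python
import numpy as np
from scipy.sparse import csr_matrix

def direct_sparse_conv(W, I):
    # W: (N, C, R, S) filter bank, I: (C, Hin, Win) input feature map.
    N, C, R, S = W.shape
    _, Hin, Win = I.shape
    Hout, Wout = Hin - R + 1, Win - S + 1
    W1 = csr_matrix(W.reshape(N, -1))   # mode-1 matricization of W, in CSR
    # Remap each non-zero's flat (c, r, s) index to the offset f(c, r, s) into vec(I).
    c, rem = np.divmod(W1.indices, R * S)
    r, s = np.divmod(rem, S)
    colidx = (c * Hin + r) * Win + s
    in_flat = I.reshape(-1)
    out = np.zeros((N, Hout, Wout), dtype=I.dtype)
    for n in range(N):                  # one output channel per CSR row
        for j in range(W1.indptr[n], W1.indptr[n + 1]):
            off, w = colidx[j], W1.data[j]
            for y in range(Hout):
                # The "virtual" dense column for (y, x) is vec(I) shifted by y*Win + x.
                base = off + y * Win
                out[n, y, :] += w * in_flat[base:base + Wout]
    return out
```

Each non-zero weight and its column index are read once and reused across all Hout·Wout output positions, which is the data-reuse argument made above.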
SIMDification and register blocking optimizations are also applied to the y and x loops in the pseudo code. Non-contiguous indirect access (i.e., gather) is another overhead of typical sparse-matrix computations. However, as shown in our pseudo code, the values read from the colidx and value arrays of a sparse matrix are reused Hout·Wout times. The access to tensor in is also contiguous as long as the tensor's elements with contiguous x or y values are stored contiguously, which is a common case, as in the CHW format.
2.2 ANALYTICAL PERFORMANCE MODELING: PROJECTIONS ON SPARSE CONVOLUTION SPEEDUP AND GUIDELINES ON USEFUL SPARSITY RANGE
The performance of sparse convolution depends highly on the sparsity level of the weight tensor. This section develops a performance model to determine the appropriate target sparsity range for pruning and to project the theoretical speedup for any given sparsity, using the roofline model (Williams et al., 2009).
We denote the floating-point operations required in a convolution as C (in FLOP), the size of the input and output activation tensors as SA (in Bytes), and the size of the weight tensor as SW, all without considering sparsity. We denote the density of non-zeros in the filters as x (the lower the x, the higher the sparsity of the weight tensor), the compute capability of the processor as F (in FLOP/s), and the memory bandwidth as B (in B/s). With these parameters, the time for dense convolution (tdense), the time for sparse convolution bound by compute (tsparse_compute) and by bandwidth (tsparse_bw), and the theoretical speedup can be modeled as follows (we assume dense convolution is not bandwidth bound):

tdense = C/F,   tsparse_compute = αxC/F,   tsparse_bw = (SA + βxSW)/B,   speedup = tdense / max(tsparse_compute, tsparse_bw),

where α and β denote the compute and storage overheads of sparse representations, respectively. We observe α ≈ 3 on a Xeon E5-2697 v4 processor, and β is typically 2 (in compressed sparse row representation, we need a 4B column index for each 4B single-precision floating-point non-zero value).
This analytical model is visualized in Figure 3. Here, we define effective FLOP/s with respect to the number of floating-point operations that would have been performed by dense convolutions, including the ones for zero weights (i.e., effective FLOP/s = C/tsparse). With a moderate sparsity (e.g., x = 0.2), convolution is likely to be compute bound, and hence effective FLOP/s rapidly increases as x decreases. For conv5 in AlexNet with x ∈ (0.05, 0.15), a typical sparsity range without accuracy loss, direct sparse convolution can achieve 2-7x and 4-14x speedups on Xeon and Atom platforms, respectively, as shown in Figure 3, and as will be validated in Section 3.2.
However, decreasing arithmetic intensity further by lowering x eventually makes the performance bandwidth bound. Thus, there is an upper bound of useful sparsity, and a sparsity higher than
it does not provide additional speedup, while only making training more challenging to preserve accuracy. This upper bound can be found by solving for x such that tsparse_compute = tsparse_bw (e.g., the upper-bound sparsity for conv5 of AlexNet on the Xeon is x ≈ 0.02). This analysis can be applied to various computing platforms, including CPUs and GPUs, because the model captures the essential platform-dependent characteristic: the ratio of compute capability to memory bandwidth (F/B). When the compute-to-bandwidth ratio is lower, as in a platform like Atom, the performance becomes bandwidth bound less quickly. For example, the upper bound of useful sparsity for conv5 of AlexNet is x ≈ 0.01 on Atom C2750, which is smaller than that of Xeon. The speedup-to-sparsity relation also varies over layers. For example, since the 1x1 convolutions in GoogLeNet have low arithmetic intensity to begin with, their performance quickly becomes bandwidth bound at lower sparsity (or higher x).
The compute overhead, α, depends on the quality of the sparse convolution implementation and on the target processor architecture. Since tsparse_compute > tdense for x > 1/α, there is a lower bound of useful sparsity such that, with a sparsity lower than that, sparse convolution becomes slower than dense convolution. The previous section described our sparse convolution implementation that achieves α = 3 (since α is the compute overhead, lower is better) on the Xeon, instead of α = 100 as conjectured by Szegedy et al. (2015)2.
The upper and lower bounds on useful sparsity can provide important insights for training/pruning. The model can tell that sparse convolution is not useful for certain layers, in which case we can skip pruning of those layers to provide more room for sparsity in the other layers. For example, layers like the first layer in AlexNet and GoogLeNet may not provide enough sparsity regardless of the amount of regularization applied, as long as the original inference accuracy is to be preserved. A layer may also be already bandwidth bound even before pruning, like the 1x1 convolution layers in GoogLeNet, as shown by the inception_4a/5x5_reduce layer in Figure 3.
Figure 3: Projected performance of sparse convolution and its speedup over dense convolution for (a) a Xeon E5-2697 v4 processor and (b) an Atom C2750 processor. conv5 direct: direct sparse convolution; conv5 lowered: sparse convolution on tensors lowered to matrices. We use the processors' achievable FLOP/s and memory bandwidth shown in Table 1 and the compute overhead of sparse convolution measured in Section 3.2.
2 The compute overhead of α = 3 primarily comes from the fact that access to the input tensor is not aligned at cache line boundaries. Recent Xeon processors can execute 1 unaligned SIMD load per cycle, which is not enough to sustain 2 SIMD fused multiply-add operations per cycle. In addition to this 2x overhead, when Wout is not a multiple of the SIMD width (8 for Xeon E5-2697 v4), we do not fully utilize the SIMD registers. Since Atom processors do not execute multiple SIMD floating-point operations per cycle anyway, and because their SIMD width is narrower (4), their compute overhead is smaller, at 1.2, as will be shown in Section 3.2.
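The model above is a one-liner to evaluate. The sketch below plugs in the BDW numbers from Table 1 with α = 3 and β = 2; the conv5-like sizes in the example call are rough illustrative values, not figures from the paper.

```python
def sparse_speedup(C, S_A, S_W, x, F=2150e9, B=122e9, alpha=3.0, beta=2.0):
    # C: FLOPs of the dense convolution, S_A / S_W: activation / weight bytes,
    # x: non-zero density; F and B default to the BDW entries of Table 1.
    t_dense = C / F
    t_sparse_compute = alpha * x * C / F
    t_sparse_bw = (S_A + beta * x * S_W) / B
    return t_dense / max(t_sparse_compute, t_sparse_bw)

# Roughly conv5-of-AlexNet-sized inputs (illustrative numbers only):
for x in (0.02, 0.05, 0.1, 0.2, 0.4):
    print(x, round(sparse_speedup(C=3.0e8, S_A=4.3e5, S_W=3.5e6, x=x), 2))
```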
Since Atom processors do not execute multiple SIMD floating operations per cycle anyway, and because its SIMD width is narrower as 4, its compute overhead is smaller as 1.2 as will be shown in Section|3.2\nGuided Sparsity Learning (GsL), our new pruning algorithm, is inspired by the insights and ou performance model. GSL is the first to fuse the awareness of speedup potential into sparsity learning GSL is a generic algorithm and accepts different regularization methods for pruning. When GSL i. used with element-wise regularization for pruning, thus denoted as Guided Element-wise Sparsit Learning (GESL), it learns the element-wise sparsity of layers where the model predicts speedup. GSL can also be used with regularization methods that are more complicated than basic ridge anc lasso regularization. For example, GSL can be combined with dynamic network surgery (Guo et al 2016), as will be shown in Section[3.1\nOur sparse CNN design is evaluated on three platforms shown in Table [1] Intel C2750 (Atom represents resource-constrained mobile platforms or micro servers optimized for energy efficiency. Xeon E5-2697 v4 (BDw) represents data-center servers. Xeon Phi 7250 (KNL) is designed for high performance computing, but its next version, Knights Mill, will specifically target machine learning. Our sparse CNN is implemented as an extension of Caffe deep learning framework (Jia et al.]2014. and is at https://github.com/IntelLabs/SkimCaffe. We use Intel compiler version 17.0.0 and use al. cores available. The SGEMM performance and achievable memory bandwidth listed are measured witl Intel MKL version 2017 and STREAM benchmark (McCalpin), respectively.\nWe train with the ImageNet ILSVRC-2012 dataset (Deng et al.]2009), starting from the pre-trained. Caffe reference model (a slight variation but we call it AlexNet for simplicity) and GoogLeNet model. from the Caffe model zoo. Since we find it is easy to have high sparsity with smaller networks and. datasets like LeNet and CIFAR regardless of pruning method, we do not present their results. Our. training process is based on the method described in|Wen et al.(2016) with the following differences. We look for element-wise sparsity with lasso instead of group lasso, and guide the training process to. target the layers and range of sparsity where we see speedup potential. We have explored various. solver methods and learning rate schedules, but found that they do not significantly affect the eventual. accuracy and sparsity, once hyper-parameters are tuned for the respective settings. In general, the pruning step no longer improves after 450K and 900K mini-batch iterations for AlexNet and. GoogLeNet, respectively. The re-training step saturates around 150K and 300K mini-batch iterations. To see trade-offs among accuracy, speed, and model size, we try various weight decays ranging from. 1e-5 to 1e-3, and, for AlexNet, decay multipliers for fc layer ranging from 1e-2 to 1. We find that the\nTable 1: Evaluated Platforms\nAtom C2750 (At om) Xeon E5-2697 v4 (BDW) Xeon Phi 7250 (KNL) Socket core SP-SIMD 1x8x4 1x18x4 1x6816 Clock (GHz) 2.4 2.3 1.4 SGEMM GFLOP/S 62 2,150 4,540 Achievable bandwidth (GB/s) 15 122 480\nAlthough GSL as described above aims primarily at inference speed, GSL can balance the implications of pruning on inference speed, accuracy, and model size. To do this, optional constraints can be given to GSL to prioritize pruning of different layers in the network. For example, by using different. 
regularization strengths on conv and fc, we can tune the priorities on speed and model size..\nESL 57.2% o DNS 56.9% + GSSL 68.7% GESL 57.5% x SSL 57.5% 1.0 +++***** GDNS 56.9% Peeensy Peeessr 1.0 x. X X X 0.8 0.8 x x X useful sparsity 0.6 x x 0.6 range ++ +x Norn-ero 0.4 0 .0 : 0.4 0.2 : 0.0 0.2 * X Xx fc8 i: #+ 0.0 7x7 5x5 conv 3x3 conv 1x1 conv fc FLOP: 92% 8% 7% 8% 61% 23% 0.2% Size: 4% 96% 0.1% 6% 33% 25% 35% (a) AlexNet (b) GoogLeNet\nFigure 4: Layer-by-layer sparsity from element-wise sparsity learning (ESL), guided ESL, dynamic networl surgery (DNS), guided DNS, and structured sparsity learning (SSL). The accuracies shown in percentage are top-1 accuracy measured with the ImageNet test set. The original AlexNet and GoogLeNet top-1 accuracie are 57.4% and 68.9%. DNS and SSL AlexNet results are from|Guo et al.[(2016) and|Wen et al.(2016). GDNs AlxeNet and SSL GoogLeNet results are our own but with the same code used in their papers. The shaded area marks the useful sparsity range predicted by our model for BDw. No shaded area means sparse convolution is no useful for the layer regardless of sparsity. In GoogLeNet, we organize layers by their types, and, within eacl layer type, layers are ordered from the earlier layers in forward propagation.\nstarting learning rate of 1e-3 and weight decay of 5e-5 in general gives a high sparsity with minima accuracy drop. We reduce the learning rate by 10 for re-training step.."}, {"section_index": "5", "section_name": "3.1 GUIDED TRAINING RESULTS", "section_text": "Figure|4|shows the effectiveness of our guided pruning and compares the level of element-wise and. group-wise sparsity we can obtain. We should look at layer-by-layer because the speedup over dense. convolution does not have a simple linear relation with sparsity as shown by our model, and, therefore the overall FLOP reduction does not necessarily closely correlate with the real speedup. In AlexNet using the same element-wise regularization factor across all layers (element-wise sparsity learning. ESL) provides non-zero densities around 0.4 for conv2-5. This is fine sparsity when the primary. goal is reducing model size, but not high enough for speeding-up inference. Therefore, guided ESI. (GESL) reduces the regularization factor of fc layers (as they have fewer FLOPS) and avoid pruning conv1 entirely (as its sparsity is too low for any potential speedups with more regularization). This. leads to less than 0.2 non-zero density for conv2-5, the range where we can get speedups from. sparse convolution. Similarly, applying GSL to dynamic network surgery (DNS), a recent proposal to obtain high sparsity, as Guided DNS (GDNS), we can see that GSL effectively improve the obtained. sparsity for accelerating inference by de-prioritizing conv1 and fc layers (we go further to not prune fc layers at all to see how much sparsity DNS can provide in conv layersJ3.\nStructured sparsity learning (SSL) provides group-wise sparsity, for which we can use dense methods. but its sparsity is lower because of constrained forms of sparsity. According to our model, SSI. performs better when xg < (a/agJx, where x and xg are non-zero density of ESL and SSL, and a. and ag are the compute overhead of ESL and SSL, respectively. Even if we use an ideal 100%. efficiency for SSL (g = 1|and the measured overhead = 3 for ESL, x, shown in Figure|4(a) is not small enough to outperform GESL. Note that our guiding principles are already applied to SSL. where conv1 and fc layers are not pruned. 
In short, the sparsity SSL can currently obtain is too low to outperform, once compared against an optimized sparse convolution design for element-wise sparsity such as ours. This motivates further investigation of pruning methods for higher group-wise sparsity.
GoogLeNet has many 1x1 convolutions, for which sparse convolution does not provide speedups due to their low arithmetic intensity, as our model predicts. As shown in Figure 4(b), GESL successfully discovers this and avoids pruning the 1x1 convolutions in exchange for higher sparsity in the 3x3 and 5x5 convolutions, where our model projects speedup, and, more importantly, almost recovers the original accuracy. For 1x1 convolutions, the group-wise sparsity implemented in SSL reduces to element-wise sparsity5, and dense methods can no longer be used. We believe that SSL provides higher sparsity for 1x1 convolutions than ESL because SSL does not prune the fc layers, providing more room to prune other layers6. For the larger convolutions that contribute the bulk of the FLOPs, ESL provides significantly higher sparsity than SSL; most of the larger convolution layers have non-zero density less than 0.2, where sparse convolutions can provide speedups. It is interesting to note that ESL, GESL, and SSL all achieve very high sparsity, with non-zero density less than 1.5%, in layers like inception_4e/3x3. This may indicate that the 3x3 path is unnecessary in that inception module.
Figure 5: Performance of the conv2-5 layers of AlexNet with varying sparsity on (a) Atom C2750, (b) Xeon E5-2697 v4, and (c) Xeon Phi 7250. The SGEMM performance of each platform (62 GF/s, 2.1 TF/s, and 4.5 TF/s, respectively) serves as a proxy for the performance of dense convolutions.
"}, {"section_index": "6", "section_name": "3.2 LAYER-BY-LAYER SPARSE CONVOLUTION RESULTS", "section_text": "Figure 5 shows the layer-wise performance of our direct sparse convolution design with the useful high sparsity obtained from GESL. We evaluate with the sparse matrices from multiple pruned AlexNet models with up to 3% top-1 accuracy drop. Since the performance of sparse matrix operations highly depends on the specific sparsity pattern, it is important not to evaluate with random sparse matrices. We use SGEMM as a proxy for dense convolution performance to quantify the layer-wise speedups of direct sparse convolution. SGEMM is a good proxy because it has a long history of extensive optimization, and it allows us not to depend on the quality of a specific dense convolution implementation7. We use batch sizes of 32, 144, and 272 for Atom, BDW, and KNL, multiples of the number of hardware threads in the respective platforms.
BDW achieves a 3.4x speedup with non-zero density x = 0.09, the sparsity similar to those of conv2-5 with no accuracy drop. The actual TF/s (as opposed to the effective TF/s that also counts FLOPs for zeros) is 0.76 when sparse convolution is sufficiently compute bound (e.g., x > 0.4).
This performance corresponds to about a third of SGEMM, from which we can derive the compute overhead of sparse convolution as α ≈ 3. As explained in Section 2.2, this leads to a lower bound of sparsity for speedups at x ≈ 0.3, which matches Figure 5(b). Atom, with a higher bandwidth-to-flop ratio, achieves a higher 7.3x speedup at x = 0.09. The actual GF/s is 51 when x > 0.4, which is 1.2x lower than the SGEMM performance (i.e., α = 1.2). Note that the performance projection for conv5 in Figure 3, using the α derived here, resembles the measured performance in Figure 5 (conv2-5 share similar performance characteristics). KNL achieves an impressive 13.9 effective TF/s at x = 0.09 (3.1x over SGEMM).
5 This is because filter coefficients for a given input and output channel pair are also a type of group that SSL is looking for.
6 We follow the same approach to see the maximum sparsity that SSL and GSSL can get in the conv layers.
7 This is the same reason for this paper to focus on layer-wise performance instead of overall end-to-end speedup, as the overall end-to-end speedup may be relative to a baseline whose efficiency is suboptimal, with performance bottlenecks in other parts/layers of the code. For a more scientific comparison among different CNN speedup techniques, we recommend using the dense matrix multiplication (GEMM) FLOP/s of the evaluated platform as the baseline, because many platforms readily have vendor-provided, extensively-optimized GEMM implementations, which can serve as a proxy for a highly-optimized dense CNN implementation. This also aligns with a long-accepted standard practice in the high performance computing community.
"}, {"section_index": "7", "section_name": "4 RELATED WORK", "section_text": "Recent research has achieved great success in reducing the model size and accelerating the inference of CNNs while maintaining accuracy, exploring a large design space, as shown in Table 2. Regularization-based and factorization-based approaches are the two main camps.

Table 2: Design space of techniques for reducing model size and accelerating inference, categorized into 3 groups. The footer rows of the table specify the two pillars of the design space: pruning methods (how the sparsity is obtained during training) and computing methods (how the obtained sparsity is used during inference). For techniques using regularization for pruning, * denotes those focusing more on conv layers than on fc layers.

Group A: Lebedev & Lempitsky (2015)*, Wen et al. (2016)* — pruning: regularization (group-wise); computing: dense.
Group B: Han et al. (2015), Han et al. (2016b), Liu et al. (2015)*, Guo et al. (2016)*, GESL* — pruning: regularization (element-wise); computing: sparse.
Group C: Denton et al. (2014), Jaderberg et al. (2014), Lebedev et al. (2015), Zhang et al. (2015), Kim et al. (2016), Ioannou et al. (2016), Tai et al. (2016), Denil et al. (2013) — pruning: factorization; computing: dense.

Regularization-based approaches use a separate training step to discover and prune redundant parameters in a pre-trained model using various regularizations, including ridge (Han et al., 2015; 2016b), lasso, and group lasso (Liu et al., 2015; Wen et al., 2016), combined with thresholding. Factorization-based approaches use low-rank decomposition and can quickly produce compressed models without additional pruning steps. Both approaches can use a fine-tuning step to recover the accuracy loss caused by model pruning.
Research focusing on fully connected layers (Han et al., 2015; 2016b; Denil et al., 2013) has achieved 10-50x model size reduction for networks such as AlexNet (Krizhevsky et al., 2012).
However, they achieved marginal inference speedups, because fully connected layers usually account for less than 10% of the total computation in modern CNNs. The works in groups A and C shown in Table 2 aim at speeding up inference by focusing more on the convolution layers, with most of them relying on dense methods for computing convolution. While factorization-based approaches (group C) naturally obtain smaller models in dense format, regularization-based approaches (group A) need group regularization to impose group-wise sparsity. Although Liu et al. (2015) explore sparse methods in computing convolution layers, their approach involves lowering overhead and hard-codes the non-zeros in the sparse matrix with full unrolling, which leads to a large instruction footprint.
While our direct sparse convolution is demonstrated to achieve high speedups on convolution when there is enough sparsity, factorization-based approaches can complement it. This is because the inherent sparsity in the first few convolution layers may not be high enough, while factorization-based approaches can achieve speedups there. Liu et al. (2015) also show that factorization-based and regularization-based approaches can be combined.
Winograd (Lavin & Gray, 2015) and FFT-based algorithms (Vasilache et al., 2015) also aim to speed up convolution. While orthogonal, these techniques can have synergies with our direct sparse convolution. For example, FFT-based convolutions are more effective for large filters, which usually reside in the first few layers, where sparsity is low. While this paper focuses on convolution layer performance, our technical report (Park et al., 2016) also considers optimizations for fully connected layers, and sparsity in activations, which is also discussed in Han et al. (2016a).
Powerful CNNs are often quite compute demanding. Pruning as a post-processing step has been effective in drastically reducing the model size while boosting inference speed only moderately. We aim to more fully realize the potential performance benefits due to the reduced FLOP counts resulting from pruned convolution kernels. By combining our high-performance direct sparse convolution method with a performance model, we developed a guided approach that prunes CNNs in a co-design fashion for different computer architectures and on different layers of the CNN in question. In particular, we demonstrated 3.1-7.3x convolution speedups in AlexNet on a variety of platforms, all in comparison to extensively-optimized dense linear algebra operations.
Looking ahead, as this paper shows that pruning can boost inference speed significantly in addition to reducing model size, further techniques in pruning should be explored. While our direct sparse convolution algorithm is successful, our performance model also reveals that sparse convolution cannot speed up all convolution layers, as seen from the 1x1 convolutions in GoogLeNet. We plan to expand our performance model to cover other FLOP-reduction methods such as FFT, Winograd, and tensor factorization, so that we can make informed decisions to choose the best performing method for each layer, and so that the training process can be guided accordingly.
"}, {"section_index": "8", "section_name": "ACKNOWLEDGEMENT", "section_text": "We would like to thank Yiwen Guo, Anbang Yao, and Yurong Chen for sharing the dynamic network surgery source code and their insights. We would also like to thank Nitish Shirish Keskar for his recommendations on hyper-parameter settings.
Aydin Buluc, Jeremy T. Fineman, Matteo Frigo, John R.
Gilbert, and Charles E. Leiserson. Parallel. Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication Using Compressed Sparse Blocks. In ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2009.\nYiwen Guo, Anbang Yao, and Yurong Chen. Dynamic Network Surgery for Efficient DNNs. In. Proceedings of Advances in Neural Information Processing Systems (NIPs). 2016\nSong Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J Dally. EIE: efficient inference engine on compressed deep neural network. CoRR, 2016a\nMax Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up Convolutional Neural Networks with Low Rank Expansions. In British Machine Vision Conference (BMVC). 2014\nStefan Hadjis, Firas Abuzaid, Ce Zhang, and Christopher Re. Caffe con Troll: Shallow Ideas to Speed Up Deep Learning. arXiv preprint arXiv:1504.04343, 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. arXiv preprint arXiv:1512.03385, 2015\nYong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compres- sion of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications. In International Conference on Learning Representations (ICLR), 2016..\nAndrew Lavin and Scott Gray. Fast Algorithms for Convolutional Neural Networks. arXiv preprin arXiv:1509.09308, 2015\nVadim Lebedev and Victor Lempitsky. Fast ConvNets Using Group-wise Brain Damage. arXi preprint arXiv:1506.02515, 2015.\nJongsoo Park, Sheng Li, Wei Wen, Hai Li, Yiran Chen, and Pradeep Dubey. Holistic SparseCNN Forging the Trident of Accuracy, Speed, and Size. arXiv preprint arXiv:1608.01409, 2016..\nXiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classification and Detection. IEEE Transactions on Pattern Anaylsis and Machine Intelligence, 2015.\nChristian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015"}]
ryuxYmvel
[{"section_index": "0", "section_name": "HOLSTEP: A MACHINE LEARNING DATASET FOR HIGHER-ORDER LOGIC THEOREM PROVING", "section_text": "Cezary Kaliszyk\nUniversity of Innsbruck\ncezary.kaliszyk@uibk.ac.at\nLarge computer-understandable proofs consist of millions of intermediate logica. steps. The vast majority of such steps originate from manually selected and man-. ually guided heuristics applied to intermediate goals. So far, machine learning has. generally not been used to filter or generate these steps. In this paper, we introduce. a new dataset based on Higher-Order Logic (HOL) proofs, for the purpose of de. veloping new machine learning-based theorem-proving strategies. We make this. dataset publicly available under the BsD license. We propose various machine. learning tasks that can be performed on this dataset, and discuss their significance. for theorem proving. We also benchmark a set of simple baseline machine learn. ing models suited for the tasks (including logistic regression, convolutional neura.. networks and recurrent neural networks). The results of our baseline models show the promise of applying machine learning to HOL theorem proving.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "As the usability of interactive theorem proving (ITP) systems (Harrison et al.]2014) grows, its use becomes a more common way of establishing the correctness of software as well as mathematical proofs. Today, ITPs are used for software certification projects ranging from compilers (Leroy 2009) and operating system components (Chen et al.]2016t Klein et al.] 2014), to establishing the absolute correctness of large proofs in mathematics such as the Kepler conjecture (Hales et al.]|2015) and the Feit-Thomson Theorem (Gonthier et al.]2013).\nFor results of such significance to be possible, the theorem libraries of these ITPs must contain. all necessary basic mathematical properties, accompanied with formal proofs. This means that the. size of many ITP libraries can be measured in dozens of thousands of theorems (Grabowski et al. 2010; Blanchette et al.] [2015) and billions of individual proof steps. While the general direction of. the proofs is specified by humans (by providing the goal to prove, specifying intermediate steps, or. applying certain automated tactics), the majority of such proof steps are actually found by automated. reasoning-based proof search (Kaliszyk & Urban,2015b), with very little application of machine. learning techniques so far.\nAt the same time, fast progress has been unfolding in machine learning applied to tasks that involve. logical inference, such as natural language question answering (Sukhbaatar et al.]2015), knowledge. base completion (Socher et al.] [2013a), automated translation (Wu et al.]2016), and premise selec tion in the context of theorem proving (Alemi et al.] 2016). Deep learning in particular has proven. to be a powerful tool for embedding semantic meaning and logical relationships into geometric. spaces, specifically via models such as convolutional neural networks, recurrent neural networks. and tree-recursive neural networks. These advances strongly suggest that deep learning may have become mature enough to yield significant advances in automated theorem proving. Remarkably it has recently become possible to build a system, AlphaGo (Silver et al.,2016), blending classi cal AI techniques such as Monte-Carlo tree search and modern deep learning techniques, capable. of playing the game of Go at super-human levels. 
We should note that theorem proving and Go. playing are conceptually related, since both consist in searching for specific nodes in trees of states. with extremely large arity and relatively large depth, which involves node evaluation decision (how. valuable is this state?) and policy decisions (which node should be expanded next?). The success of AlphaGo can thus serve as encouragement on the road to building deep learning-augmented theorem.\nFrancois Chollet, Christian Szegedy\nGoogle Research\n{fchollet,szegedy}@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Fast progress in specific machine learning verticals has occasionally been achieved thanks to the release of specialized datasets (often with associated competitions, e.g. the ImageNet dataset foi large-scale image classification (Deng et al.] 2009)) serving as an experimental testbed and public benchmark of current progress, thus focusing the efforts of the research community. We hope tha releasing a theorem proving dataset suited for specific machine learning tasks can serve the same purpose in the vertical of applying machine learning to theorem proving."}, {"section_index": "3", "section_name": "1.1 CONTRIBUTION AND OVERVIEW", "section_text": "First, we develop a dataset for machine learning based on the proof steps used in a large interactive. proof[section 2 We focus on the HOL Light (Harrison, 2009) ITP, its multivariate analysis library. Harrison,2013), as well as the formal proof of the Kepler conjecture (Hales et al.]|2010). These for malizations constitute a diverse proof dataset containing basic mathematics, analysis, trigonometry. as well as reasoning about data structures such as graphs. Furthermore these formal proof devel. opments have been used as benchmarks for automated reasoning techniques (Kaliszyk & Urban. 2014).\nThe dataset consists of 2,013,046 training examples and 196,030 testing examples that originate. from 11,400 proofs. Precisely half of the examples are statements that were useful in the currently proven conjectures and half are steps that have been derived either manually or as part of the auto-. mated proof search but were not necessary in the final proofs. The dataset contains only proofs of. non-trivial theorems, that also do not focus on computation but rather on actual theorem proving. For each proof, the conjecture that is being proven as well as its dependencies (axioms) and may. be exploited in machine learning tasks. Furthermore, for each statement both its human-readable (pretty-printed) statement and a tokenization designed to make machine learning tasks more man. ageable are included.\nFinally, in|section 4|we propose three baseline models for the proof step classification tasks, and we experimentally evaluate the models on the data in |section 5] The models considered include both a relatively simple regression model, as well as deep learning models based on convolutional anc recurrent neural networks."}, {"section_index": "4", "section_name": "1.2 RELATED WORK", "section_text": "The use of machine learning in interactive and automated theorem proving has so far focused o three tasks: premise selection, strategy selection, and internal guidance. We shortly explain these\nGiven a large library of proven facts and a user given conjecture, the multi-label classification prob- lem of selecting the facts that are most likely to lead to a successful proof of the conjecture has been usually called relevance filtering or premise selection (Alama et al.]2014). 
This is crucial for the efficiency of modern automation techniques for ITPs (Blanchette et al.] |2016), which today can usually solve 40-50% of the conjectures in theorem proving libraries. Similarly most competitive ATPs today (Sutcliffe,2016) implement the SInE classifier (Hoder & Voronkov, 2011).\nA second theorem proving task where machine learning has been of importance is strategy selection With the development of automated theorem provers came many parameters that control their exe cution. In fact, modern ATPs, such as E (Schulz,2013) and Vampire (Kovacs & Voronkov]2013) include complete strategy description languages that allow a user to specify the orderings, weighting functions, literal selection strategies, etc. Rather than optimizing the search strategy globally, one\nNext, in |section 3|we discuss the proof step classification tasks that can be attempted using the dataset, and we discuss the usefulness of these tasks in interactive and automated theorem proving These tasks include unconditioned classification (without access to conjectures and dependencies) and conjecture-conditioned classification (with access to the conjecture) of proof steps as being useful or not in a proof. We outline the use of such classification capabilities for search space pruning and internal guidance, as well as for generation of intermediate steps or possible new lemma statements.\nFinally, an automated theorem prover may use machine learning for choosing the actual inference steps. It has been shown to significantly reduce the proof search in first-order tableaux by the selection of extension steps to use (Urban et al., 2011), and has been also successfully applied in monomorphic higher-order logic proving (Farber & Brown, 2016). Data/proof mining has alsc. been applied on the level of interactive theorem proving tactics (Duncan, 2007) to extract and reuse. repeating patterns.\nWe focus on the HOL Light theorem prover for two reasons. First, it follows the LCF approach1j This means that complicated inferences are reduced to the most primitive ones and the data extrac- tion related modifications can be restricted the primitive inferences and it is relatively easy to extract proof steps at an arbitrary selected level of granularity. Second, HOL Light implements higher-order logic (Church, 1940) as its foundation, which on the one hand is powerful enough to encode most of today's formal proofs, and on the other hand allows for an easy integration of many powerful automation mechanisms (Baader & Nipkow!1998; Paulson,1999).\nWhen selecting the theorems to record, we choose an intermediate approach between HOL Light. ProofRecording (Obua & Skalberg2006) and the HOL/Import one (Kaliszyk & Krauss,2013). The theorems that are derived by most common proof functions are extracted by patching these functions like in the former approach, and the remaining theorems are extracted from the underlying OCaml. programming language interpreter. In certain cases decision procedures derive theorems to be reused. in subsequent invocations. We detect such values by looking at theorems used across proof blocks. and avoid extracting such reused unrelated subproofs..\nAll kernel-level inferences are recorded together with their respective arguments in a trace file. Th. trace is processed offline to extract the dependencies of the facts, detect used proof boundaries mark the used and unused steps, and mark the training and testing examples. Only proofs that have. 
The annotated proof trace is processed again by a HOL kernel, saving the actual training and testing examples originating from non-trivial reasoning steps. Training and testing examples are grouped by proof: for each proof the conjecture (the statement that is finally proved) and the dependencies of the theorem are constant, and a list of used and not used intermediate statements is provided. This means that the conjectures used in the training and testing sets are normally disjoint.

For each statement, whether it is the conjecture, a proof dependency, or an intermediate statement, both a fully parenthesised HOL Light human-like printout is provided, as well as a predefined tokenization. The standard HOL Light printer uses parentheses and operator priorities to make its notations somewhat similar to textbook-style mathematics, while at the same time preserving the complete unambiguity of the order of applications (this is particularly visible for associative operators). The tokenization that we propose attempts to reduce the number of parentheses. To do this we compute the maximum number of arguments that each symbol needs to be applied to, and only mark partial application. This means that fully applied functions (more than 90% of the applications) require neither application operators nor parentheses. Top-level universal quantifications are eliminated, bound variables are represented by their de Bruijn indices (the distance from the corresponding abstraction in the parse tree of the term) and free variables are renamed canonically. Since the Hindley-Milner type inference (Hindley, 1969) mechanisms will be sufficient to reconstruct the most-general types of the expressions well enough for automated-reasoning techniques (Kaliszyk et al., 2015), we erase all type information. Table 1 presents some dataset statistics. The dataset, the description of the used format, the scripts used to generate it and the baseline models' code are available:

¹The LCF approach is a software architecture for implementing theorem provers which uses a strongly typed programming language with abstract datatypes (such as OCaml in the case of HOL Light) to separate the small trusted core, called the kernel, which verifies the primitive inferences, from user code which allows the user to arbitrarily extend the system in a safe manner. For more details see (Gordon et al., 1979).

Table 1: HolStep dataset statistics

                       Train       Test      Positive    Negative
    Examples           2,013,046   196,030   1,104,538   1,104,538
    Avg. length        503.18      440.20    535.52      459.66
    Avg. tokens        87.01       80.62     95.48       77.40
    Conjectures        9,999       1,411
    Avg. dependencies  29.58       22.82

The dataset supports several machine learning tasks:

Predicting whether a statement is useful in the proof of a given conjecture;
Predicting the dependencies of a proof statement (premise selection);
Predicting whether a statement is an important one (human named);
Predicting which conjecture a particular intermediate statement originates from;
Predicting the name given to a statement;
Generating intermediate statements useful in the proof of a given conjecture;
Generating the conjecture the current proof will lead to.

In what follows we focus on the first task: classifying proof step statements as being useful or not in the context of a given proof. This task may be further specialized into two different tasks:

Unconditioned classification of proof steps: determining how likely a given proof step is to be useful for the proof it occurred in, based solely on the content of the statement (i.e. by only providing the model with the step statement itself, absent any context).
Conditioned classification of proof steps: determining how likely a given proof step is to be useful for the proof it occurred in, with "conditioning" on the conjecture statement that the proof was aiming to attain (i.e. by providing the model with both the step statement and the conjecture statement).

In the dataset, for every proof we provide the same number of useful and non-useful steps. As such, the proof step classification problem is a balanced two-class classification problem, where a random baseline would yield an accuracy of 0.5.

In the interaction with an interactive theorem prover, the tasks that require most human time are: the search for good intermediate steps; the search for automation techniques able to justify the individual steps; and searching theorem proving libraries for the necessary simpler facts. These three problems directly correspond to the machine learning tasks proposed in the previous subsection. Being able to predict the usefulness of a statement will significantly improve many automation techniques. The generation of good intermediate lemmas or intermediate steps can improve the level of granularity of the proof steps. Understanding the correspondence between statements and their names can allow users to search for statements in the libraries more efficiently (Aspinall & Kaliszyk, 2016). Premise selection and filtering are already used in many theorem proving systems, and generation of succeeding steps corresponds to conjecturing and theory exploration.

Figure 1: Unconditioned classification model architectures. Logistic regression: Embedding (dim=256), Dropout (rate=0.5), sigmoid logistic regression. 1D CNN: Embedding (dim=256), Conv1D (size=7, dim=256), MaxPooling1D (size=3, stride=3), Conv1D (size=7, dim=256), GlobalMaxPooling, Dense (dim=256), Dropout (rate=0.5), sigmoid logistic regression. 1D CNN-LSTM: Embedding (dim=256), Conv1D (size=7, dim=256), MaxPooling1D (size=3, stride=3), Conv1D (size=7, dim=256), MaxPooling1D (size=5, stride=5), LSTM (dim=256), Dense (dim=256), Dropout (rate=0.5), sigmoid logistic regression.

"}, {"section_index": "5", "section_name": "4 BASELINE MODELS", "section_text": "For each task (conditioned and unconditioned classification), we propose three different deep learning architectures, meant to provide a baseline for the classification performance that can be achieved on this dataset. Our models cover a range of architecture features (from convolutional networks to recurrent networks), aiming at probing what characteristics of the data are the most helpful for usefulness classification.

Our models are implemented in TensorFlow (Abadi et al., 2015) using the Keras framework (Chollet, 2015). Each model was trained on a single Nvidia K80 GPU. Training only takes a few hours per model, which makes running these experiments accessible to most people (they could even be run on a laptop CPU).
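As an illustration of the kind of model just described, the following is a minimal Keras sketch of the unconditioned 2-layer 1D CNN of Figure 1. The layer sizes follow the figure; the maximum sequence length, the ReLU activations on the convolutions and the optimiser are our own assumptions, as the text above does not specify them.

    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB_SIZE = 86  # character-level vocabulary, as described in section 4.2
    MAX_LEN = 512    # assumed padded sequence length (not specified above)

    def unconditioned_cnn():
        inp = keras.Input(shape=(MAX_LEN,), dtype="int32")
        x = layers.Embedding(VOCAB_SIZE, 256)(inp)        # Embedding dim=256
        x = layers.Conv1D(256, 7, activation="relu")(x)   # Conv1D size=7, dim=256
        x = layers.MaxPooling1D(pool_size=3, strides=3)(x)
        x = layers.Conv1D(256, 7, activation="relu")(x)
        x = layers.GlobalMaxPooling1D()(x)
        x = layers.Dense(256, activation="relu")(x)
        x = layers.Dropout(0.5)(x)
        out = layers.Dense(1, activation="sigmoid")(x)    # sigmoid logistic regression
        return keras.Model(inp, out)

    model = unconditioned_cnn()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])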
We are releasing all of our benchmark code as open-source software² so as to allow others to reproduce our results and improve upon our models.

²https://github.com/tensorflow/deepmath/tree/master/holstep_baselines

"}, {"section_index": "6", "section_name": "4.1 UNCONDITIONED CLASSIFICATION MODELS", "section_text": "We consider the following three models:

Logistic regression on top of learned token embeddings. This minimal model aims to determine to which extent simple differences between the token distributions of useful and non-useful statements can be used to distinguish them. It provides an absolute floor on the performance achievable on this task.
2-layer 1D convolutional neural network (CNN) with global maxpooling for sequence reduction. This model aims to determine the importance of local patterns of tokens.
2-layer 1D CNN with LSTM (Hochreiter & Schmidhuber, 1997) sequence reduction. This model aims to determine the importance of order in the feature sequences.

See figure 1 for a layer-by-layer description of these models.

"}, {"section_index": "7", "section_name": "4.2 CONDITIONED CLASSIFICATION MODELS", "section_text": "For this task, we use versions of the above models that have two siamese branches (identical branches with shared weights), with one branch processing the proof step statement being considered, and the other branch processing the conjecture. Each branch outputs an embedding; these two embeddings (step embedding and conjecture embedding) are then concatenated and then classified by a fully-connected network. See figure 2 for a layer-by-layer description of these models.

Figure 2: Conditioned classification model architectures. Logistic regression (conditioned): Embedding (dim=256) on each input, Concat, Dropout (rate=0.5), sigmoid logistic regression. Siamese 1D CNN (conditioned): per branch, Embedding (dim=256), Conv1D (size=7, dim=256), MaxPooling1D (size=3, stride=3), Conv1D (size=7, dim=256), GlobalMaxPooling; then Concat, Dense (dim=256), Dropout (rate=0.5), sigmoid logistic regression. Siamese 1D CNN-LSTM (conditioned): per branch, Embedding (dim=256), Conv1D (size=7, dim=256), MaxPooling1D (size=3, stride=3), Conv1D (size=7, dim=256), MaxPooling1D (size=5, stride=5), LSTM (dim=256); then Concat, Dense (dim=256), Dropout (rate=0.5), sigmoid logistic regression.

It should be noted that all of our models start with an Embedding layer, mapping tokens or characters in the statements to dense vectors in a low-dimensional space. We consider two possible encodings for presenting the input statements (proof steps and conjectures) to the Embedding layers of our models:

Character-level encoding of the human-readable versions of the statements, where each character (out of a set of 86 unique characters) in the pretty-printed statements is mapped to a 256-dimensional dense vector. This encoding yields longer statements (training statements are 308 characters long on average).
Token-level encoding of the versions of the statements rendered with our proposed high-level tokenization scheme. This encoding yields shorter statements (training statements are 60 tokens long on average), while considerably increasing the size of the set of unique tokens (1,993 total tokens in the training set).
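Complementing the earlier sketch, the siamese setup of section 4.2 can be written in Keras as below. A single branch instance applied to both inputs is what makes the weights shared; as before, the sequence length, activations and optimiser choice are our own assumptions.

    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB_SIZE, MAX_LEN = 86, 512  # same assumptions as in the previous sketch

    def siamese_branch():
        return keras.Sequential([
            layers.Embedding(VOCAB_SIZE, 256),
            layers.Conv1D(256, 7, activation="relu"),
            layers.MaxPooling1D(pool_size=3, strides=3),
            layers.Conv1D(256, 7, activation="relu"),
            layers.GlobalMaxPooling1D(),
        ])

    def conditioned_cnn():
        encoder = siamese_branch()  # one instance, so the two branches share weights
        step = keras.Input(shape=(MAX_LEN,), dtype="int32")
        conjecture = keras.Input(shape=(MAX_LEN,), dtype="int32")
        merged = layers.Concatenate()([encoder(step), encoder(conjecture)])
        x = layers.Dense(256, activation="relu")(merged)
        x = layers.Dropout(0.5)(x)
        out = layers.Dense(1, activation="sigmoid")(x)
        return keras.Model([step, conjecture], out)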
Table 2: HolStep proof step classification accuracy without conditioning

                               Logistic     1D CNN   1D CNN-LSTM
                               regression
    Accuracy with char input   0.71         0.82     0.83
    Accuracy with token input  0.71         0.83     0.77

Table 3: HolStep proof step classification accuracy with conditioning

                               Logistic     Siamese   Siamese
                               regression   1D CNN    1D CNN-LSTM
    Accuracy with char input   0.71         0.81      0.83
    Accuracy with token input  0.71         0.82      0.77

"}, {"section_index": "8", "section_name": "5 RESULTS", "section_text": "Our unconditioned logistic regression model yields an accuracy of 71%, both with character encoding and token encoding (Tables 2 and 3). This demonstrates that differences in token or character distributions between useful and non-useful steps alone, absent any context, are sufficient for discriminating between useful and non-useful statements to a reasonable extent. This also demonstrates that the token encoding is not fundamentally more informative than raw character-level statements.

Additionally, our unconditioned 1D CNN model yields an accuracy of 82% to 83%, both with character encoding and token encoding (Tables 2 and 3). This demonstrates that patterns of characters or patterns of tokens are considerably more informative than single tokens for the purpose of usefulness classification.

Finally, our unconditioned convolutional-recurrent model does not improve upon the results of the 1D CNN, which indicates that our models are not able to meaningfully leverage order in the feature sequences into which the statements are encoded.

None of our conditioned models appear to be able to improve upon the unconditioned models, which indicates that our architectures are not able to leverage the information provided by the conjecture. The presence of the conditioning does however impact the training profile of our models, in particular by making the 1D CNN model converge faster and overfit significantly quicker (figs. 5 and 6).

For the logistic regression model and the 2-layer 1D CNN model, the choice of input encoding seems to have little impact. For the convolutional-recurrent model, the use of the high-level tokenization seems to cause a large decrease in model performance (figs. 4 and 6). This may be due to the fact that token encoding yields shorter sequences, making the use of an LSTM less relevant.

Figure 3: Training profile of the three unconditioned baseline models with character input. (Axes: accuracy vs. training epochs; one epoch is 128000 steps.)

Figure 4: Training profile of the three unconditioned baseline models with token input.

Figure 5: Training profile of the three conditioned baseline models with character input.

Figure 6: Training profile of the three conditioned baseline models with token input.

"}, {"section_index": "9", "section_name": "6 CONCLUSIONS", "section_text": "Our baseline deep learning models, albeit fairly weak, are still able to predict statement usefulness with a remarkably high accuracy. Such methods already help first-order automated provers (Kaliszyk & Urban, 2015a) and as the branching factor is higher in HOL the predictions are valuable for a number of practical proving applications. This includes making tableaux-based (Paulson, 1999) and superposition-based (Hurd, 2003) internal ITP proof search significantly more efficient, in turn making formalization easier. However, our models do not appear to be able to leverage order in the input sequences, nor conditioning on the conjectures. This is due to the fact that these models are not doing any form of logical reasoning on their input statements; rather they are doing simple pattern matching at the level of n-grams of characters or tokens. This shows the need to focus future efforts on different models that can do reasoning, or alternatively, on systems that blend explicit reasoning (e.g. graph search) with deep learning-based feature learning. A potential new direction would be to leverage the graph structure of HOL statements using e.g. Recursive Neural Tensor Networks (Socher et al., 2013a;b) or other graph-based recursive architectures.

Finally, two of the proposed tasks for the dataset have been premise selection and intermediate sentence generation. It would be interesting to define more ATP-based ways to evaluate the selected premises, as well as to evaluate generated sentences (Kaliszyk et al., 2015). The set is a relatively large one when it comes to proof step classification, however the number of available premises makes the set a medium-sized set for premise selection in comparison with those of the Mizar Mathematical Library or the seL4 development.

The dataset focuses on one interactive theorem prover. It would be interesting if the proposed techniques generalize, primarily across ITPs that use the same foundational logic, for example using OpenTheory (Hurd, 2011), and secondarily across fundamentally different ITPs or even ATPs. A
significant part of the unused steps originates from trying to fulfill the conditions for rewriting and. from calls to intuitionistic tableaux. The main focus is however on the human found proofs so the trained predictions may to an extent mimic the bias on the usefulness in the human proofs. As ATPs. are at the moment very week in comparison with human intuition improving this even for the many. proofs humans do not find difficult would be an important gain..\nThe first author was partly supp ported by the ERC starting grant 714034"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Jesse Alama. Tom Heskes, Daniel Kuhlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise se lection for mathematics by corpus analysis and kernel methods. J. Autom. Reasoning, 52(2):. 191-213. 2014. doi: 10.1007/s10817-013-9286-5.\nAlex A. Alemi, Francois Chollet, Geoffrey Irving, Christian Szegedy, and Josef Urban. DeepMath - Deep sequence models for premise selection. In Daniel D. Lee, Masashi Sugiyama, Ulrike V Luxburg, Isabelle Guyon, and Roman Garnett (eds.), Advances in Neural Information Processing Systems (NIPS 2016), pp. 2235-2243, 2016. URLhttps://arxiv.org/abs/1606.04442\nJasmin Christian Blanchette, Maximilian P. L. Haslbeck, Daniel Matichuk, and Tobias Nipkow Mining the Archive of Formal Proofs. In Manfred Kerber, Jacques Carette, Cezary Kaliszyk. Florian Rabe, and Volker Sorge (eds.), Intelligent Computer Mathematics (CICM 2015), volume. 9150 of LNCS, pp. 3-17. Springer, 2015.\nJames P. Bridge, Sean B. Holden, and Lawrence C. Paulson. Machine learning for first-order theo rem proving - learning to select a good heuristic. J. Autom. Reasoning, 53(2):141-172, 2014. doi:. 10.1007/s10817-014-9301-5.\nHaogang Chen, Daniel Ziegler, Tej Chajed, Adam Chlipala, M. Frans Kaashoek, and Nickolai Zel-. dovich. Using crash Hoare logic for certifying the FSCQ file system. In Ajay Gulati and Hakim Weatherspoon (eds.), USENIX 2016. USENIX Association, 2016.\nFrancois Chollet. Keras. https://github.com/fchollet/keras, 2015.\nAlonzo Church. A formulation of the simple theory of types. J. Symb. Log., 5(2):56-68, 1940. doi. 10.2307/2266170. URLhttp://dx.doi.0rg/10.2307/2266170\nMichael Farber and Chad E. Brown. Internal guidance for Satallax. In Nicola Olivetti and Ashis. Tiwari (eds.), International Joint Conference on Automated Reasoning (IJCAR 2016), volum 9706 of LNCS, pp. 349-361. Springer, 2016. doi: 10.1007/978-3-319-40229-1.\nMartin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah. Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin- cent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Watten- berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/! Software available from tensor- flow.org.\nDavid Aspinall and Cezary Kaliszyk. What's in a theorem name? In Jasmin Christian Blanchette and Stephan Merz (eds.), Interactive Theorem Proving (ITP 2016), volume 9807 of LNCS, pp. 459-465. Springer, 2016. doi: 10.1007/978-3-319-43144-4.\nThomas C. 
Hales, Mark Adams, Gertrud Bauer, Dat Tat Dang, John Harrison, Truong Le Hoang Cezary Kaliszyk, Victor Magron, Sean McLaughlin, Thang Tat Nguyen, Truong Quang Nguyen Tobias Nipkow, Steven Obua, Joseph Pleso, Jason Rute, Alexey Solovyev, An Hoai Thi Ta Trung Nam Tran, Diep Thi Trieu, Josef Urban, Ky Khac Vu, and Roland Zumkeller. A forma proof of the Kepler conjecture. CoRR, abs/1501.02155, 2015.\nJohn Harrison. The HOL Light theory of Euclidean space. J. Autom. Reasoning, 50(2):173-190 2013. doi: 10.1007/s10817-012-9250-9\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8) 1735-1780, 1997.\nCezary Kaliszyk and Josef Urban. Learning-assisted automated reasoning with Flyspeck. J. Auton Reasoning, 53(2):173-213, 2014. doi: 10.1007/s10817-014-9303-3\nCezary Kaliszyk and Josef Urban. FEMaLeCoP: Fairly efficient machine learning connection prover. In Martin Davis, Ansgar Fehnker, Annabelle McIver, and Andrei Voronkov (eds.), 2Oth In. ternational Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2015), volume 9450 of LNCS, pp. 88-96. Springer, 2015a. doi: 10.1007/978-3-662-48899-7.\nAdam Grabowski. Artur Kornilowicz, and Adam Naumowicz. Mizar in a nutshell. J. Formalized Reasoning, 3(2):153-245, 2010. doi: 10.6092/issn.1972-5787/1980.\nThomas Hales, John Harrison, Sean McLaughlin, Tobias Nipkow, Steven Obua, and Roland Zumkeller. A revision of the proof of the Kepler Conjecture. Discrete & Computational Ge- ometry, 44(1):1-34, 2010.\nCezary Kaliszyk and Josef Urban. Learning-assisted theorem proving with millions of lemma J. Symbolic Computation, 69:109-128, 2015b. doi: 10.1016/j.jsc.2014.09.032\nXavier Leroy. Formal verification of a realistic compiler. Commun. ACM. 52(7):107-115. 2009\nLawrence C. Paulson. A generic tableau prover and its integration with Isabelle. J. Universal Computer Science, 5(3):73-87, 1999\nDavid Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre, George van der. Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot. Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lil licrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering. the game of go with deep neural networks and tree search. Nature, 529:484-503, 2016. URI. http://www.nature.com/nature/iournal/v529/n7587/full/nature16961.html\nGeoff Sutcliffe. The CADE ATP system competition - CASC. AI Magazine, 37(2):99-101, 2016\nDaniel Kuhlwein and Josef Urban. MaLeS: A framework for automatic tuning of automated theorem provers. J. Autom. Reasoning, 55(2):91-116, 2015. doi: 10.1007/s10817-015-9329-1.\nRichard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural In- formation Processing Systems 26: 27th Annual Conference on Neural Information Pro cessing Systems 2013. Proceedings., pp. 926-934, 2013a.URLhttp://papers.nips.cc/paper/ 5028-reasoning-with-neural-tensor-networks-for-knowledge-base-completion\nSainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances nNeuralInforwationP 13124392015\nYonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey,. 
Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa.. Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR,. abs/1609.08144, 2016. URLhttp://arxiv.org/abs/1609.08144"}]
Bk67W4Yxl
[{"section_index": "0", "section_name": "IMPROVED ARCHITECTURES FOR COMPUTER GO", "section_text": "Tristan Cazenave\nUniversite Paris-Dauphine, PSL Research University, CNRS, LAMSADE 75016 PARIS. FRANCE\nTristan.Cazenave@dauphine.fr"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks are good at recognizing shapes in the game of Go. However they have weak- nesses at tactical search such as ladders and life and death. The way it is handled in AlphaGo is to give as input to the network the results of ladders. Reading ladders is not enough to understand more complex problems that require search. So AlphaGo combines deep networks with MCTS Coulom (2006). It trains a value network in order to evaluate positions. When playing, it combines the eval- uation of a leaf of the Monte Carlo tree by the value network with the result of the playout that starts at this leaf. The value network is an important innovation due to AlphaGo. It has helped improving a lot the level of play.\nOne of the problems about training a value network is that millions of games have to be played by. the policy network against different versions of itself in order to create the data used to train the. value network. It is therefore interesting to find a way to learn with less training examples so as tc reduce the bottleneck of playing millions of games. Learning with less examples also often implies. that in the end the accuracy of the network on the training set is greater..\nResidual Networks improve the training of very deep networks|He et al.(2015). These networks car gain accuracy from considerably increased depth. On the ImageNet dataset a 152 layers networks achieves 3.57% error. It won the 1st place on the ILSVRC 2015 classification task. The principle ol residual nets is to add the input of the layer to the output of each layer. With this simple modificatior training is faster and enables deeper networks.\nResidual networks were recently successfully adapted to computer Go Cazenave (2016a). As a follow up to this paper, we propose improved architectures for residual networks. We use enriched inputs, more comprehensive training and testing examples and experiment with deeper networks that give better accuracy and stronger play.\nThe second section details different layer architectures for computer Go, the third section gives experimental results, and the last section concludes.."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "AlphaGo trains policy networks with both supervised and reinforcement learning and makes different policy networks play millions of games so as to train a value network. The reinforcement learning part requires massive amount of computa- tion. We propose to train networks for computer Go so that given accuracy is reached with much less examples. We modify the architecture of the networks in order to train them faster and to have better accuracy in the end.\nInput Convolution ReLU Output\nFigure 1: A usual layer\nThe residual layer used for image classification adds the input of the layer to the output of the laye using addition and identity. It is shown in figure2\nInput Convolution ReLU Convolution Addition ReLU Output\nFigure 2: A residual layer for Image Classification\nThe residual layer we use for Computer Go is shown in figure 3Cazenave (2016a). It simply adds. the input of a layer to the output of the 3 3 convolutional layer. It then uses a ReLU layer before the output. 
The output of a residual layer is the input of the next residual layer.

Input, Convolution, Addition, ReLU, Output

Figure 3: A residual layer for computer Go

Input, Convolution, ReLU, SBN, Output

Figure 4: A layer of DarkForest

Inspired by DarkForest we tried to add Spatial Batch Normalization after the ReLU layer in our residual layer. The layer that gave the best results is given in figure 5. It simply adds a Spatial Batch Normalization after the ReLU layer and outside of the residual block. This is a new architecture that we propose and test in this paper.

The Torch (Collobert et al., 2011) code used for the hidden layers is simply:

    convnet:add(nn.ConcatTable()
            :add(cudnn.SpatialConvolution(nplanes, nplanes, 3, 3, 1, 1, 1, 1))
            :add(nn.Identity()))
        :add(nn.CAddTable(true))
        :add(cudnn.ReLU())
        :add(cudnn.SpatialBatchNormalization(nplanes))

Input, Convolution, Addition, ReLU, SBN, Output

Figure 5: A residual layer with Spatial Batch Normalization

The input layer of our network is also residual. It uses a 5x5 convolutional layer in parallel to a 1x1 convolutional layer and adds the outputs of the two layers before the ReLU layer. It is depicted in figure 6.

Input, 5x5 Convolution / 1x1 Convolution (in parallel), Addition, ReLU, Output

Figure 6: The first residual layer of the network for computer Go

The output layer of the network is a 3x3 convolutional layer with one output plane followed by a SoftMax. All the hidden layers use 256 feature planes and 3x3 filters.

"}, {"section_index": "3", "section_name": "3.1 DATA", "section_text": "Our training set consists of games played between 2000 and 2014 on the Kiseido Go Server (KGS) by players being 6 dan or more. We exclude handicap games. Each position is rotated and mirrored to its eight possible symmetric positions. It results in 160 000 000 positions in the training set. When training reaches the last position of the training set it starts again with the first one. The test set contains the games played in 2015. The positions in the test set are not mirrored and there are 100 000 different positions in the test set.

"}, {"section_index": "4", "section_name": "3.2 INPUT AND OUTPUT PLANES", "section_text": "The networks use 42 19x19 input planes: three planes for the colors of the intersections, one plane for the third line, one plane filled with one if there is a ko, one plane with a one for the ko move,
one plane with the ownership of each intersection computed after one hundred random playouts, one plane for the criticality|Coulom (2009), one plane for the AMAF values, ten planes for the liberties of the friend and of the enemy colors (1, 2, 3, 4, 5 liberties), twelve planes for the liberties of the friend and of the enemy colors if a move of the color is played on the intersection (1, 2, 3, 4, 5, 6 liberties), one plane to tell if a friend move on the intersection is captured in a ladder, one plane to tell is an enemy move on the intersection is captured in a ladder, one plane to tell if a string can be captured in a ladder, one plane to tell if a string can escape a ladder, one plane to tell if a friend move threatens to capture in a ladder, one plane to tell if an enemy move threatens to capture in a ladder, and five planes for each of the last five moves.\nThe output of a network is a 19 19 plane and the target is also a 19 19 plane with a one for the move played and zeros elsewhere"}, {"section_index": "5", "section_name": "3.3 TRAINING", "section_text": "Networks were trained with a minibatch of either 50 or 100 and an initial learning rate of 0.2 minibatch = 20.0. The error criterion is the mean square error and the training algorithm is SGD with a momentum of 0.9. All networks use 256 feature planes, a 5 5 convolutional layer for the input layer and then only 3 3 convolutional layers\nThe different network architectures we experimented with are:\nWe used Torch Collobert et al.(2011) for training the networks. Training a 13 layers network on 5,000,000 examples with Torch, CUDA 8.0 and CUDNN takes approximately three hours on a GTX\nIn this section we will explain how we conducted the experiments evaluating deep residual networks. We first present the data that was used for training and testing. We then describe the input planes of the networks and the training and testing phases with results given as percentages on the test set.. We finish the section describing our Go playing program Golois..\nThe dataset is similar to the AlphaGo and the DarkForest datasets, all the games we have used for. training are part of these two other datasets. AlphaGo also uses games by weaker players in its dataset[Maddison et al.(2014), instead we only use games by 6 dan or more, it probably makes the dataset more difficult and more meaningful. The AlphaGo dataset is not available, also it would help. to have the 30 o0o o00 games played by AlphaGo against itself so as to train a value network but this dataset is not available either..\nnet, the usual 13 layers convolutional network as used in AlphaGoMaddison et al.(2014): Silver et al.(2016), netdark, with 13 layers and convolutional layers followed by ReLU and Spatial Batch Normalization as used in DarkForestTian & Zhu(2015), net13, a residual 13 layers network, net2o, a residual 20 layers network, net13/sbn, a residual 13 layers network with Spatial Batch Normalization.\n1080 GPU. This is 9 times faster than our previous experiments with a K40|Cazenave(2016a). The evolution of the percentage on the test set is given in table 1] Each line corresponds to 5,ooo,00 more training examples.\nWe see that all proposed layers improve much on the usual layer. The DarkForest layer is slightly better than the residual layer with the same number of layers but slightly worse than the residua network using 20 layers. 
The layer combining residuals and Spatial Batch Normalization is the best OVerall.\nThe 20 layers residual network was trained longer than the other networks. It used a 0.2 learning. rate until 210,000,000 examples and then halved the learning rate every 35,000,000 examples until. 340,000,000 examples (line 68 of the table). It reached a 58.001% accuracy on the test set. It is. greater than previous results reaching either 57.0% with 128 planes Silver et al.(2016), or 57.3% with 512 planes Tian & Zhu (2015). The main difference with previous work is the architecture we use less training examples as previous work since we only use KGS games without handicap by. players greater than 6 dan.\nEncouraged by these results and in order to address the reviewers comments we ran additional tests Instead of generating files for training as in the previous experiments we use dynamic minibatches. of size 50 that take 50 random states in the training set, each randomly mirrored to one of its eigh. symmetric states. As preparing the minibatch can take a significant time compared to the training time, we optimized the ladder code and we removed the Monte Carlo related input features..\nThe updating of the learning rate is performed using algorithm[1] A step corresponds to 5000 training examples. Every 1000 steps the algorithm computes the average error over the last 1000 steps and. the average error over the step minus 2000 and the step minus 1000. If the decrease in the average. error is less than O.05% then the learning rate is divided by 2. The initial learning rate is set to 0.2 The algorithm stays at least 40o0 steps with the same learning rate before dividing it by 2..\nIn order to further enhance accuracy we used bagging. The input board is mirrored to its 8 possible. symmetries and the same 20 layers network is run on all 8 boards. The outputs of the 8 networks are then summed. Bagging improves the accuracy up to 58.485%. The use of symmetries is similar. to AlphaGo.\nTable 1: Evolution of the accuracy on the test set\nneta netdark net13 net20 net13/sbn 1 0.896 47.132 46.978 47.389 47.819 2 0.675 49.179 48.555 48.821 49.493 3 1.121 50.260 50.302 50.601 50.826 4 1.454 51.076 51.186 51.449 51.471 5 43.719 51.230 51.088 51.722 51.689 6 46.334 51.832 51.602 51.797 52.219 7 47.875 52.188 52.258 52.375 52.611 8 48.629 52.384 52.384 52.618 52.756 9 49.308 52.705 52.697 53.029 53.085 10 49.698 52.748 52.856 53.157 53.145 11 50.196 53.244 53.189 53.566 53.441 12 50.367 53.114 53.201 53.514 53.718 13 50.954 53.471 53.442 53.794 53.708 14 51.337 53.661 53.720 53.827 53.985 15 51.489 53.969 53.844 54.063 53.984 16 51.572 53.983 53.635 54.282 54.021 17 51.981 54.145 54.009 54.438 54.349 18 51.922 54.134 54.051 54.539 54.314 19 52.056 54.183 54.164 54.631 54.485 20 52.294 54.408 54.226 54.541 54.522 68 58.001\nAlgorithm 1 The algorithm used to update the training rate\nError 10 13.256 20.256 - 9.5 9 Err 8.5 ++++++++++++k 8 ++++++++ 7.5 7 0 10 20 30 40 50 Examples\nFigure 7: Evolution of the error on the test set\nThe comparison of the evolution of the test error of the 13 layers network and of the 20 layers network is given in figure|7] These two networks use residual layers and spatial batch normalization We observe that the 20 layers network is consistently better and ends with a significantly reducec error. This also results in a better accuracy as depicted in figure 8 The 20 layers network ends with an accuracy of 57.687% without bagging. The 13 layers networks ends with an accuracy of 57.024%. 
We also trained a 13 layers networks with 512 planes, it is slightly better than the 13 layers network with 256 planes but worse than the 20 layers network with 256 planes. In previous experiments networks with more than 13 layers were considered worse than 13 layers networks Using our architecture enables to efficiently train networks deeper than 13 layers.\nAccuracy 60 58 56 54 ACeunney 52 50 48 13.256 46 20.256 0 10 20 30 40 50 Examples\nFigure 8: Evolution of the accuracy on the test set\nWe made the 20 layers network with bagging and a 58.485% accuracy play games on the KGS internet Go server. The program name is Golois3 and it is quite popular, playing 24 hours a day against various opponents. It is ranked 3 dan.\nPlaying on KGS is not easy for bots. Some players take advantage of the bot behaviors such as bein. deterministic, so we randomized play choosing randomly among moves that are evaluated greate than O.95 times the evaluation of the best move and that have a greater evaluation than the bes move when augmented by O.05. Other players intentionally play the bot with a wrong handicap tha disadvantages the bot if it loses and does not increase its rank if it wins. Golois plays on par witl other 3 dan human players and can even occasionally win a game against a 5 dan. Golois3 plays it moves almost instantly thanks to its use of a K40 GPU. It gives five periods of 15 seconds per mov. to its human opponents.\nIn comparison, AlphaGo policy network and DarkForest policy network reached a 3 dan level using either reinforcement learningSilver et al.[(2016) or multiple output planes giving the next moves tc learn and 512 feature planes Tian & Zhu (2015).\nGolois sometimes loses games due to the lack of tactical search. Especially against very strong players. In order to improve the level of play we plan to train a value network and to add the results of tactical searches as input features\nTwo previous versions of Golois played before Golois3 on the KGS Go server. The first version is named Golois and it is described in Cazenave[(2016b). Golois is ranked 1 kyu. The second version is named Golois2 and it is described in Cazenave(2016a). Golois2 is ranked 1 dan. The improvements due to changing the training set, the input features and the architecture described in this paper have enabled Golois3 to reach the 3 dan level\nThe 20 layers network of the second set of experiments also plays on KGS under the name Golois4 It is ranked 3 dan."}, {"section_index": "6", "section_name": "4 CONCLUSION", "section_text": "The usual architecture of neural networks used in computer Go can be much improved. Adding. Spatial Batch Normalization as in DarkForest enables to train the networks faster. Adapting residual networks also helps training the network faster. It also enables to successfully train deeper networks A residual network with 20 layers scores 58.001% on the KGS test set. It is greater than previously reported accuracy. Using bagging of mirrored inputs it even reaches 58.485%. The 20 layers network. with bagging plays online on KGS and reached a 3 dan level playing almost instantly. Combining. residual networks with Spatial Batch Normalization enables to train networks faster and to efficiently train deeper networks.\nTraining deeper networks faster for better results is important for the next development phase of Golois, namely training a value network. We also plan to add the results of elaborate tactical searches as input to the network. 
Both for the policy and the value network."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "The author would like to thank Nvidia and Philippe Vandermersch for providing a K40 GPU that was used in some experiments. This work was also granted access to the HPC resources of MesoPSI financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX 29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Tristan Cazenave. Residual networks for computer Go. submitted to IEEE TCIAIG, 2016a\nRonan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment. for machine learning. In BigLearn, NIPS Workshop. number EPFL-CONF-192376. 2011\nDavid Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.\nYuandong Tian and Yan Zhu. Better computer go player with neural network and long-term predic tion. arXiv preprint arXiv:1511.06410, 2015.\nRemi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In H. Jaap van den Herik, Paolo Ciancarini, and H. H. L. M. Donkers (eds.), Computers and Games, 5th International Conference, CG 2006, Turin, Italy, May 29-31, 2006. Revised Papers, volume 4630 of Lecture Notes in Computer Science, pp. 72-83. Springer, 2006..\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by. reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448-456, 2015."}]
SJx7Jrtgl
[{"section_index": "0", "section_name": "DEEP UNSUPERVISED CLUSTERING WITH GAUSSIAN MIXTURE VARIATIONAL AUTOENCODERS", "section_text": "Nat Dilokthanakull,*, Pedro A. M. Mediano1, Marta Garnelo1. Matthew C. H. Lee', Hugh Salimbeni1, Kai Arulkumaran? & Murray .\nn.dilokthanakull4@imperial.ac.uk"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We study a variant of the variational autoencoder model (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clus- tering through deep generative models. We observe that the known problem of over-regularisation that has been shown to arise in regular VAEs also manifests itself in our model and leads to cluster degeneracy. We show that a heuristic called minimum information constraint that has been shown to mitigate this ef fect in VAEs can also be applied to improve unsupervised clustering performance with our model. Furthermore we analyse the effect of this heuristic and provide an intuition of the various processes with the help of visualizations. Finally, we demonstrate the performance of our model on synthetic data, MNIST and SVHN showing that the obtained clusters are distinct, interpretable and result in achieving competitive performance on unsupervised clustering to the state-of-the-art results"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised clustering remains a fundamental challenge in machine learning research. While long-. established methods such as k-means and Gaussian mixture models (GMMs) (Bishop| 2006) still lie at the core of numerous applications (Aggarwal & Reddy2013), their similarity measures are lim-. ited to local relations in the data space and are thus unable to capture hidden, hierarchical dependen-. cies in latent spaces. Alternatively, deep generative models can encode rich latent structures. While. they are not often applied directly to unsupervised clustering problems, they can be used for dimen-. sionality reduction, with classical clustering techniques applied to the resulting low-dimensional. space (Xie et al.[2015). This is an unsatisfactory approach as the assumptions underlying the di-. mensionality reduction techniques are generally independent of the assumptions of the clustering techniques.\nDeep generative models try to estimate the density of observed data under some assumptions abou1. its latent structure, i.e., its hidden causes. They allow us to reason about data in more complex. ways than in models trained purely through supervised learning. However, inference in models with. complicated latent structures can be difficult. Recent breakthroughs in approximate inference have provided tools for constructing tractable inference algorithms. As a result of combining differen tiable models with variational inference, it is possible to scale up inference to datasets of sizes tha1. would not have been possible with earlier inference methods (Rezende et al.|2014). One popular algorithm under this framework is the variational autoencoder (VAE) (Kingma & Welling2013 Rezende et al.]2014)\nIn this paper, we propose an algorithm to perform unsupervised clustering within the VAE frame-. work. To do so, we postulate that generative models can be tuned for unsupervised clustering by. making the assumption that the observed data is generated from a multimodal prior distribution, and,. correspondingly, construct an inference model that can be directly optimised using the reparameter-. ization trick. 
We also show that the problem of over-regularisation in VAEs can severely effect the. performance of clustering, and that it can be mitigated with the minimum information constraint introduced byKingma et al.(2016)."}, {"section_index": "3", "section_name": "1.1 RELATED WORK", "section_text": "The work that is most closely related to ours is the stacked generative semi-supervised model. (M1+M2) byKingma et al.(2014). One of the main differences is the fact that their prior distri-. bution is a neural network transformation of both continuous and discrete variables, with Gaussiar and categorical priors respectively. The prior for our model, on the other hand, is a neural network. transformation of Gaussian variables, which parametrise the means and variances of a mixture of Gaussians, with categorical variables for the mixture components. Crucially,Kingma et al. (2014. apply their model to semi-supervised classification tasks, whereas we focus on unsupervised clus. tering. Therefore, our inference algorithm is more specific to the latter..\nWe compare our results against several orthogonal state-of-the-art techniques in unsupervised clus. tering with deep generative models: deep embedded clustering (DEC) (Xie et al.[2015), adversar ial autoencoders (AAEs) (Makhzani et al.] 2015) and categorial GANs (CatGANs) (Springenberg 2015).\nVAEs are the result of combining variational Bayesian methods with the flexibility and scalability provided by neural networks (Kingma & Welling|2013;|Rezende et al.[2014). Using variational in ference it is possible to turn intractable inference problems into optimisation problems (Wainwright & Jordan20o8), and thus expand the set of available tools for inference to include optimisatior techniques as well. Despite this, a key limitation of classical variational inference is the need for the likelihood and the prior to be conjugate in order for most problems to be tractably optimised which in turn can limit the applicability of such algorithms. Variational autoencoders introduce the use of neural networks to output the conditional posterior (Kingma & Welling2013) and thus allow the variational inference objective to be tractably optimised via stochastic gradient descent and stan dard backpropagation. This technique, known as the reparametrisation trick, was proposed to enable backpropagation through continuous stochastic variables. While under normal circumstances back- propagation through stochastic variables would not be possible without Monte Carlo methods, this is bypassed by constructing the latent variables through the combination of a deterministic function and a separate source of noise. We refer the reader toKingma & Welling(2013) for more details.\nIn regular VAEs, the prior over the latent variables is commonly an isotropic Gaussian. This choice of prior causes each dimension of the multivariate Gaussian to be pushed towards learning a separate. continuous factor of variation from the data, which can result in learned representations that are. structured and disentangled. While this allows for more interpretable latent variables (Higgins et al. 2016), the Gaussian prior is limited because the learnt representation can only be unimodal and does.\nUnsupervised clustering can be considered a subset of the problem of disentangling latent variables, which aims to find structure in the latent space in an unsupervised manner. 
Recent efforts have moved towards training models with disentangled latent variables corresponding to different factors of variation in the data. Inspired by the learning pressure in the ventral visual stream,Higgins et al. (2016) were able to extract disentangled features from images by adding a regularisation coefficient to the lower bound of the VAE. As with VAEs, there is also effort going into obtaining disentangled features from generative adversarial networks (GANs) (Goodfellow et al.|2014). This has been re- cently achieved with InfoGANs (Chen et al.2016a), where structured latent variables are included as part of the noise vector, and the mutual information between these latent variables and the gen- erator distribution is then maximised as a mini-max game between the two networks. Similarly Tagger (Greff et al.J2016), which combines iterative amortized grouping and ladder networks, aims to perceptually group objects in images by iteratively denoising its inputs and assigning parts of the reconstruction to different groups.Johnson et al.(2016) introduced a way to combine amortized inference with stochastic variational inference in an algorithm called structured VAEs. Structured VAEs are capable of training deep models with GMM as prior distribution. Shu et al.(2016) in- troduced a VAE with a multimodal prior where they optimize the variational approximation to the standard variational objective showing its performance in video prediction task.\nnot allow for more complex representations. As a result, numerous extensions to the VAE have beer developed, where more complicated latent representations can be learned by specifying increasingly complex priors (Chung et al. 2015 Gregor et al. 2015 Eslam1 et al. 12016)\nIn this paper we choose a mixture of Gaussians as our prior, as it is an intuitive extension of the uni-. modal Gaussian prior. If we assume that the observed data is generated from a mixture of Gaussians,. inferring the class of a data point is equivalent to inferring which mode of the latent distribution the. data point was generated from. While this gives us the possibility to segregate our latent space into. distinct classes, inference in this model is non-trivial. It is well known that the reparametrisation trick which is generally used for VAEs cannot be directly applied to discrete variables. Several pos-. sibilities for estimating the gradient of discrete variables have been proposed (Glynn 1990] Titsias. & Lazaro-Gredilla2015).Graves(2016) also suggested an algorithm for backpropagation through GMMs. Instead, we show that by adjusting the architecture of the standard VAE, our estimator of the variational lower bound of our Gaussian mixture variational autoencoder (GMVAE) can be opti-. mised with standard backpropagation through the reparametrisation trick, thus keeping the inference. model simple.\nConsider the generative model p3.e(y, x, w, z) = p(w)p(z)ps(x|w, z)pe(y|x), where an observec sample y is generated from a set of latent variables x, w and z under the following process:\nw ~ N(0,I) z ~ Mult() K IN (zz(w;B),diag(o2r(w;)) Zk x|z,w ~ k=1 y|x ~ N ((x; 0), diag (2(x; 0))) )or B((x; 0)\nwhere K is a predefined number of components in the mixture, and zr (; ), ?, (; ), (.; 0), and. o2(.; 0) are given by neural networks with parameters and 0, respectively. That is, the observed. sample y is generated from a neural network observation model parametrised by 0 and the contin-. uous latent variable x. Furthermore. 
the distribution of x|w is a Gaussian mixture with means and variances specified by another neural network model parametrised by and with input w..\nMore specifically, the neural network parameterised by outputs a set of K means z. and I variances o?.. 2k, given w as input. A one-hot vector z is sampled from the mixing probability n which chooses one component from the Gaussian mixture. We set the parameter k = K-1 t make z uniformly distributed. The generative and variational views of this model are depicted i Fig.\nw w x X A y y\nFigure 1: Graphical models for the Gaussian mixture variational autoencoder (GMVAE) showing the generative model (left) and the variational family (right)..\nP,o(y, x, w, z) LELBO = E q(x,w,z|y)\nThe lower bound can then be written as.\nWe refer to the terms in the lower bound as the reconstruction term, conditional prior term, w-prior term and z-prior term respectively."}, {"section_index": "4", "section_name": "3.2.1 THE CONDITIONAL PRIOR TERM", "section_text": "The reconstruction term can be estimated by drawing Monte Carlo samples from g(x[y), where the gradient can be backpropagated with the standard reparameterisation trick (Kingma & Welling. 2013). The w-prior term can be calculated analytically\nM K Ps(zn=1|x(j),z KL qpx(x|y)||p3(x|wj) M j=1 k=1\nSince ps(z|x, w) can be computed for all z with one forward pass, the expectation over it can be calculated in a straightforward manner and backpropagated as usual. The expectation over qow (w|y can be estimated with M Monte Carlo samples and the gradients can be backpropagated via the reparameterisation trick. This method of calculating the expectation is similar to the marginalisatior approach ofKingma et al.(2014), with a subtle difference.Kingma et al.(2014) need multiple forward passes to obtain each component of the z-posterior. Our method requires wider outpu layers of the neural network parameterised by , but only need one forward pass. Both methods scale up linearly with the number of clusters."}, {"section_index": "5", "section_name": "3. 3 THE KL COST OE THE DISCRETE LATENT VARIABLE", "section_text": "The most unusual term in our ELBO is the z-prior term. The z-posterior calculates the clustering. assignment probability directly from the value of x and w, by asking how far x is from each o. the cluster positions generated by w. Therefore, the z-prior term can reduce the KL divergence. between the z-posterior and the uniform prior by concurrently manipulating the position of the. clusters and the encoded point x. Intuitively, it would try to merge the clusters by maximising. the overlap between them, and moving the means closer together. This term, similar to other KL regularisation terms, is in tension with the reconstruction term, and is expected to be over-powerec as the amount of training data increases..\nWe assume the mean-field variational family q(x, w, z[y) as a proxy to the posterior which factorises. as q(x,w,z|y) = I, qox(xi|yi)qow(wi|yi)P(zi|xi,wi), where i indexes over data points.To simplify further notation, we will drop i and consider one data point at a time. We parametrise each variational factor with the recognition networks x and Qw that output the parameters of the. variational distributions and specify their form to be Gaussian posteriors. We derived the z-posterior,. 
pB(z|x, w), as:\np(Zj =1)p(x|zj =1,w BZ=1x,w) = k=1P(zk =1)p(x|zj =1,w) ;N(x|;(w;),;(w;B)) k=1TkN(x|k(w;),0k(w;))\nmportantly, by constructing the model this way, the conditional prior term can be estimated using Eqn.5 without the need to sample from the discrete distribution p(z|x, w).."}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "The main objective of our experiments is not only to evaluate the accuracy of our proposed model but also to understand the optimisation dynamics involved in the construction of meaningful, differ entiated latent representations of the data. This section is divided in three parts:\nThroughout this section we make use of the following datasets."}, {"section_index": "7", "section_name": "4.1 SYNTHETIC DATA", "section_text": "We quantify clustering performance by plotting the magnitude of the z-prior term described in Eqn.6 during training. This quantity can be thought of as a measure of how much different clusters overlap. Since our goal is to achieve meaningful clustering in the latent space, we would expect this quantity to go down as the model learns the separate clusters..\nLz = -Eq(x|y)q(w|y) KL(p3(z|x,w)||p(z))\nEmpirically, however, we have found this not to be the case. The latent representations that ou model converges to merges all classes into the same large cluster instead of representing informatior about the different clusters, as can be seen in Figs.2d|and 3a As a result, each data point is equally likely to belong to any of clusters, rendering our latent representations completely uninformative with respect to the class structure.\nWe argue that this phenomenon can be interpreted as the result of over-regularisation by the z-prior term. Given that this quantity is driven up by the optimisation of KL term in the lower bound\nThe possible overpowering effect of the regularisation term on VAE training has been described. numerous times in the VAE literature (Bowman et al.]2015] Sonderby et al.2016] Kingma et al. 2016Chen et al. 2016b). As a result of the strong influence of the prior, the obtained latent repre-. sentations are often overly simplified and poorly represent the underlying structure of the data. So far there have been two main approaches to overcome this effect: one solution is to anneal the KL. term during training by allowing the reconstruction term to train the autoencoder network before slowly incorporating the regularization from the KL term (Sonderby et al.]2016). The other main. approach involves modifying the objective function by setting a cut-off value that removes the ef-. fect of the KL term when it is below a certain threshold (Kingma et al.]2016). As we show in the. experimental section below, this problem of over-regularisation is also prevalent in the assignment of the GMVAE clusters and manifests itself in large degenerate clusters. While we show that the second approach suggested by Kingma et al.(2016) does indeed alleviate this merging phenomenon,. finding solutions to the over-regularization problem remains a challenging open problem..\n1. We first study the inference process in a low-dimensional synthetic dataset, and focus i1 particular on how the over-regularisation problem affects the clustering performance of th. GMVAE and how to alleviate the problem; 2. We then evaluate our model on an MNIST unsupervised clustering task; and. 3. We finally show generated images from our model, conditioned on different values of the. 
The possible overpowering effect of the regularisation term on VAE training has been described numerous times in the VAE literature (Bowman et al., 2015; Sønderby et al., 2016; Kingma et al., 2016; Chen et al., 2016b). As a result of the strong influence of the prior, the obtained latent representations are often overly simplified and poorly represent the underlying structure of the data. So far there have been two main approaches to overcome this effect: one solution is to anneal the KL term during training, allowing the reconstruction term to train the autoencoder network before slowly incorporating the regularisation from the KL term (Sønderby et al., 2016); the other main approach modifies the objective function by setting a cut-off value that removes the effect of the KL term when it is below a certain threshold (Kingma et al., 2016). This problem of over-regularisation is also prevalent in the assignment of the GMVAE clusters and manifests itself in large degenerate clusters. While we show that the second approach, suggested by Kingma et al. (2016), does indeed alleviate this merging phenomenon, finding solutions to the over-regularisation problem remains a challenging open problem.

This observation is conceptually very similar to the over-regularisation problem encountered in regular VAEs, and we thus hypothesise that applying similar heuristics should help alleviate the problem. We show in Fig. 2f that by using the previously mentioned modification to the lower bound proposed by Kingma et al. (2016), we can avoid the over-regularisation caused by the z-prior. This is achieved by holding the cost from the z-prior at a constant value λ until it exceeds that threshold. Formally, the modified z-prior term is written as:

$$\mathcal{L}_z = -\max\Big(\lambda,\ \mathbb{E}_{q(x|y)q(w|y)}\big[\mathrm{KL}\big(p_\beta(z \mid x, w)\,\|\,p(z)\big)\big]\Big)$$

This modification suppresses the initial tendency of the z-prior to merge all clusters, allowing them to spread out until the cost from the z-prior is high enough. At that point its effect is significantly reduced and is mostly limited to merging individual clusters that overlap sufficiently. This can be seen clearly in Figs. 2e and 2f: the former shows the clusters before the z-prior cost is taken into consideration, and as such the clusters have been able to spread out; once the z-prior is activated, clusters that are very close together are merged, as seen in Fig. 2f.

Finally, in order to illustrate the benefits of using neural networks for the transformation of the distributions, we compare the density learned by our model (Fig. 2b) with that of a regular GMM (Fig. 2c) in data space. As illustrated by the figures, the GMVAE allows for much richer, and thus more accurate, representations than regular GMMs, and is therefore more successful at modelling non-Gaussian data.

Figure 2: Visualisation of the synthetic dataset: (a) Data points in data space, distributed with 5 modes. (b) The GMVAE learns a density model that represents the data using a mixture of non-Gaussian distributions in data space. (c) A GMM cannot represent the data as well, because of its restrictive Gaussian assumption. (d) In latent space, however, the GMVAE suffers from over-regularisation and can converge to a poor optimum. (e) Using the modification to the ELBO (Kingma et al., 2016) allows the clusters to spread out. (f) As the model converges, the z-prior term is activated and regularises the clusters in the final stage by merging excessive clusters.
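The threshold heuristic above amounts to a one-line change to the z-prior term. A minimal NumPy sketch, assuming responsibilities `resp` as before and an illustrative threshold value `lam` (the actual value used is not stated here):

```python
import numpy as np

def modified_z_prior_term(resp, lam=0.5):
    """z-prior term with the cut-off heuristic of Kingma et al. (2016):
    the KL cost is held at a constant `lam` until it exceeds it, so the
    term contributes no gradient while the clusters are still spreading.

    resp: (N, K) z-posterior responsibilities; lam: assumed threshold.
    """
    K = resp.shape[1]
    eps = 1e-10
    kl = np.sum(resp * (np.log(resp + eps) + np.log(K)), axis=1).mean()
    return -max(lam, kl)
```

In an autodiff framework the `max` clamps the gradient to zero below the threshold, which is precisely what allows the clusters to separate before the z-prior starts acting.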
Figure 3: Plot of the z-prior term during training: (a) With the normal ELBO, the GMVAE suffers from over-regularisation, converging to a poor optimum that merges all clusters together to avoid the KL cost. (b) With the modification, the gradient from the z-prior term is turned off before the threshold value (dotted line) is reached, preventing the clusters from being pulled together (see text for details). By the time the threshold value is reached, the clusters are sufficiently separated; at that point the activated gradient from the z-prior term only merges clusters that overlap strongly. Even after its gradient is activated, the value of the z-prior continues to decrease, as it is over-powered by other terms that lead to meaningful clusters and a better optimum."}, {"section_index": "8", "section_name": "4.2 UNSUPERVISED IMAGE CLUSTERING", "section_text": "We now assess the model's ability to represent discrete information present in the data on an image clustering task. We train a GMVAE on the MNIST training set and evaluate its clustering performance on the test set. To compare the cluster assignments given by the GMVAE with the true image labels, we follow the evaluation protocol of Makhzani et al. (2015), which we summarise here for clarity. In this method, we find the element of the test set with the highest probability of belonging to cluster i and assign that element's label to all other test samples belonging to i. This is then repeated for all clusters i = 1, ..., K, and the assigned labels are compared with the true labels to obtain an unsupervised classification error rate.

While we observe the cluster degeneracy problem when training the GMVAE on the synthetic dataset, the problem does not arise with the MNIST dataset. We thus optimise the GMVAE using the ELBO directly, without the need for any modifications. A summary of the results obtained on the MNIST benchmark with the GMVAE, as well as with other recent methods, is shown in Table 1. We achieve classification scores that are competitive with the state-of-the-art techniques¹, except for adversarial autoencoders (AAE). We suspect the reason for this is, again, related to the KL terms in the VAE's objective: as indicated by Hoffman & Johnson (2016), the key difference in the adversarial autoencoder objective is the replacement of the KL term in the ELBO by an adversarial loss that allows the latent space to be manipulated more carefully. Details of the network architectures used in these experiments can be found in Appendix A.

Empirically, we observe that increasing the number of Monte Carlo samples and the number of clusters makes the GMVAE more robust to initialisation and more stable, as shown in Fig. 4. If fewer samples or clusters are used, the GMVAE can occasionally converge faster to poor local minima, missing some of the modes of the data distribution.

¹It is worth noting that shortly after our initial submission, Rui Shu published a blog post (http://ruishu.io/2016/12/25/gmvae/) with an analysis of Gaussian mixture VAEs. In addition to providing insightful comparisons to the aforementioned M2 algorithm, he implements a version that achieves competitive clustering scores using a comparably simple network architecture. Crucially, he shows that model M2 does not use its discrete latent variables when trained without labels. The reason this problem is not as severe in the GMVAE may be the more restrictive assumptions of the generative process, which help the optimisation, as argued in his post.
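For reference, the evaluation protocol of Makhzani et al. (2015) summarised above can be sketched as follows (our own illustrative NumPy code; `resp` holds the test-set cluster probabilities and `labels` the ground-truth digits):

```python
import numpy as np

def cluster_accuracy(resp, labels):
    """Unsupervised classification accuracy in the style of
    Makhzani et al. (2015).

    resp:   (N, K) cluster-membership probabilities on the test set
    labels: (N,)   ground-truth class labels
    """
    assign = resp.argmax(axis=1)           # hard cluster assignment
    correct = 0
    for i in range(resp.shape[1]):
        members = np.where(assign == i)[0]
        if members.size == 0:
            continue
        # Label the cluster by the true class of its most confident member.
        rep = members[resp[members, i].argmax()]
        correct += np.sum(labels[members] == labels[rep])
    return correct / labels.size
```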
Table 1: Unsupervised classification accuracy for MNIST with different numbers of clusters K (reported as percentage of correctly assigned labels).

Method                      | K  | Best Run | Average Run
CatGAN (Springenberg, 2015) | 20 | 90.30    | -
AAE (Makhzani et al., 2015) | 16 | -        | 90.45 ± 2.05
AAE (Makhzani et al., 2015) | 30 | -        | 95.90 ± 1.13
DEC (Xie et al., 2015)      | 10 | 84.30    | -
GMVAE (M = 1)               | 10 | 87.31    | 77.78 ± 5.75
GMVAE (M = 10)              | 10 | 88.54    | 82.31 ± 3.75
GMVAE (M = 1)               | 16 | 89.01    | 85.09 ± 1.99
GMVAE (M = 10)              | 16 | 96.92    | 87.82 ± 5.33
GMVAE (M = 1)               | 30 | 95.84    | 92.77 ± 1.60
GMVAE (M = 10)              | 30 | 93.22    | 89.27 ± 2.50

Figure 4: Clustering accuracy with different numbers of clusters (K) and Monte Carlo samples (M): after only a few epochs, the GMVAE converges to a solution. Increasing the number of clusters improves the quality of the solution considerably."}, {"section_index": "9", "section_name": "4.2.1 IMAGE GENERATION", "section_text": "So far we have argued that the GMVAE picks up natural clusters in the dataset, and that these clusters share some structure with the actual classes of the images. We now train the GMVAE with K = 10 on MNIST to show that the learnt components in the distribution of the latent space actually represent meaningful properties of the data. First, we note that there are two sources of stochasticity in play when sampling from the GMVAE, namely:

1. Sampling w from its prior, which will generate the means and variances of x through the neural network β; and
2. Sampling x from the Gaussian mixture determined by w and z, which will generate the image through the neural network θ.
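Putting these two steps together, generation from a trained GMVAE can be sketched as follows. This is our own illustrative NumPy code: `beta_net` and `theta_net` are hypothetical stand-ins for the trained networks β and θ, not the authors' API:

```python
import numpy as np

def sample_gmvae(beta_net, theta_net, k, w_dim, rng=np.random):
    """Generate one image from a trained GMVAE.

    beta_net(w) -> (mu, sigma2), each of shape (K, D_x)
    theta_net(x) -> image parameters (e.g. Bernoulli means)
    k: index of the mixture component to use (the value of z)
    """
    w = rng.standard_normal(w_dim)     # 1. sample w ~ N(0, I)
    mu, sigma2 = beta_net(w)           #    component parameters from w
    # 2. sample x from the chosen Gaussian component, then decode.
    x = mu[k] + np.sqrt(sigma2[k]) * rng.standard_normal(mu.shape[1])
    return theta_net(x)
```

Fixing `w = 0` instead of sampling it, as done for Fig. 5a below, corresponds to replacing the first line with a zero vector, which isolates the stochasticity coming from the mixture itself.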
In Fig. 5a we explore the latter source of stochasticity by setting w = 0 and sampling multiple times from the resulting Gaussian mixture. Each row in Fig. 5a corresponds to samples from a different component of the Gaussian mixture, and it can be clearly seen that samples from the same component consistently result in images of the same digit class. This confirms that the learned latent representation contains well-differentiated clusters, exactly one per digit. Additionally, in Fig. 5b we explore the sensitivity of the generated image to the Gaussian mixture components by smoothly varying w and sampling from the same component. We see that while z reliably controls the class of the generated image, w sets the "style" of the digit.

Finally, in Fig. 6 we show images sampled from a GMVAE trained on SVHN, showing that the GMVAE clusters visually similar images together.

Figure 5: Generated MNIST samples: (a) Varying z: each row contains 10 randomly generated samples from a different component of the Gaussian mixture. The GMVAE learns a meaningful generative model in which the discrete latent variables z correspond directly to digit values, in an entirely unsupervised manner. (b) Varying w: samples generated by traversing the w space; each position of w corresponds to a specific style of the digit.

Figure 6: Generated SVHN samples: each row corresponds to 10 samples generated randomly from a different Gaussian component. The GMVAE groups together images that are visually similar.

We have introduced a class of variational autoencoders in which one level of the latent encoding space has the form of a Gaussian mixture model, and specified a generative process that allows us to formulate a variational Bayes optimisation objective. We then discussed the problem of over-regularisation in VAEs. In the context of our model, we showed that this problem manifests itself in the form of cluster degeneracy and, crucially, that this specific manifestation of the problem can be solved with standard heuristics.

We evaluated our model on unsupervised clustering tasks using popular datasets, achieving competitive results compared to the current state of the art. Finally, we showed via sampling from the generative model that the learned clusters in the latent representation correspond to meaningful features of the visible data. Images generated from the same cluster in latent space share relevant high-level features (e.g. they correspond to the same MNIST digit) while being trained in an entirely unsupervised manner.

It is worth noting that GMVAEs can be stacked by allowing the prior on w to be a Gaussian mixture distribution as well. A deep GMVAE could scale much better with the number of clusters, given that it would be combinatorial with respect to both the number of layers and the number of clusters per layer. As such, while future research on deep GMVAEs for hierarchical clustering is a possibility, it is crucial to also address the enduring optimisation challenges associated with VAEs in order to do so.

We would like to acknowledge the NVIDIA Corporation for the donation of a GeForce GTX Titan Z used in our experiments. We would like to thank Jason Rolfe, Rui Shu and the reviewers for useful comments. Importantly, we would also like to acknowledge that the variational family which we used throughout this version of the paper was suggested by an anonymous reviewer."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Christopher M Bishop. Pattern Recognition and Machine Learning. 2006.

Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016a.

Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731, 2016b.
J. Chung, K. Kastner, L. Dinh, K. Goel, A. Courville, and Y. Bengio. A recurrent latent variable model for sequential data. arXiv e-prints, June 2015.

PW Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75-84, 1990.

Alex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint arXiv:1607.05690, 2016.

Klaus Greff, Antti Rasmus, Mathias Berglund, Tele Hotloo Hao, Jurgen Schmidhuber, and Harri Valpola. Tagger: Deep unsupervised perceptual grouping. arXiv preprint arXiv:1606.06724, 2016.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014."}, {"section_index": "11", "section_name": "A NETWORK PARAMETERS", "section_text": "For optimisation, we use Adam (Kingma & Ba, 2014) with a learning rate of 10⁻⁴ and standard hyperparameter values β₁ = 0.9, β₂ = 0.999 and ε = 10⁻⁸. The model architectures used in our experiments are shown in Tables A.1, A.2 and A.3.

Table A.1: Neural network architectures for q_φ(x, w): the hidden layers are shared between q(x) and q(w), except the output layer, where the network splits into 4 output streams, 2 of dimension N_x and 2 of dimension N_w. We exponentiate the variance components to keep their values positive. An asterisk (*) indicates the use of batch normalisation and a ReLU nonlinearity. For convolutional layers, the numbers in parentheses indicate stride-padding.

Dataset   | Input | Hidden                                                     | Output
Synthetic | 2     | fc 120 ReLU, 120 ReLU                                      | N_w = 2, N_w = 2 (Exp), N_x = 2, N_x = 2 (Exp)
MNIST     | 28x28 | conv 16x6x6* (1-0), 32x6x6* (1-0), 64x4x4* (2-1), 500*     | N_w = 150, N_w = 150 (Exp), N_x = 200, N_x = 200 (Exp)
SVHN      | 32x32 | conv 64x4x4* (2-1), 128x4x4* (2-1), 246x4x4* (2-1), 500*   | N_w = 150, N_w = 150 (Exp), N_x = 200, N_x = 200 (Exp)

Table A.2: Neural network architectures for p_β(x|w, z): the output layer is split into 2K streams, where K streams return the means and the other K streams the variances of all the clusters.

Dataset   | Input | Hidden      | Output
Synthetic | 2     | fc 120 Tanh | {N_x = 2} x 2K
MNIST     | 150   | fc 500 Tanh | {N_x = 200} x 2K
SVHN      | 150   | fc 500 Tanh | {N_x = 200} x 2K

Table A.3: Neural network architectures for p_θ(y|x): the network outputs Gaussian parameters for the synthetic dataset and Bernoulli parameters for MNIST and SVHN, where we use the logistic function to keep the Bernoulli parameters between 0 and 1. An asterisk (*) indicates the use of batch normalisation and a ReLU nonlinearity. For convolutional layers, the numbers in parentheses indicate stride-padding.

Dataset   | Input | Hidden                                                        | Output
Synthetic | 2     | fc 120 ReLU, 120 ReLU                                         | {2} x 2
MNIST     | 200   | 500*, full-conv 64x4x4* (2-1), 32x6x6* (1-0), 16x6x6* (1-0)   | 28x28 (Sigmoid)
SVHN      | 200   | 500*, full-conv 246x4x4* (2-1), 128x4x4* (2-1), 64x4x4* (2-1) | 32x32 (Sigmoid)"}]
B1-q5Pqxl
[{"section_index": "0", "section_name": "MACHINE COMPREHENSION USING MATCH-LSTM AND ANSWER POINTER", "section_text": "Shuohang Wang\nSchool of Information Systems Singapore Management University\nshwang.2014@phdis.smu.edu.sg\nMachine comprehension of text is an important problem in natural language pro- cessing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for eval- uating machine comprehension algorithms, partly because compared with previ- ous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architec- ture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by [Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our tasks. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.[(2016) using logistic regression and manually crafted features. Besides, our boundary model also achieves the best performance on the MSMARCO dataset (Nguyen et al.|2016)."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Machine comprehension of text is one of the ultimate goals of natural language processing. While the ability of a machine to understand text can be assessed in many different ways, in recent years. several benchmark datasets have been created to focus on answering questions as a way to evaluate machine comprehension (Richardson et al.]2013] Hermann et al.]2015) Hill et al.]2016]Weston et al.]2016] Rajpurkar et al.|2016f |Nguyen et al.|[2016). In this setup, typically the machine is first presented with a piece of text such as a news article or a story. The machine is then expected to nswerOne\nGiven these advantages of the SQuAD and MSMARCO datasets, in this paper, we focus on these new datasets to study machine comprehension of text. A sample piece of text and three of its asso ciated questions from SQuAD are shown in Table[1 Traditional solutions to this kind of question answering tasks rely on NLP pipelines that involve multiple steps of linguistic analyses and feature engineering, including syntactic parsing, named entity recognition, question classification, semantic parsing, etc. Recently, with the advances of applying neural network models in NLP, there has been\nJing Jiang\nSchool of Information Systems Singapore Management Universit"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In most of the benchmark datasets, a question can be treated as a multiple choice question, whose correct answer is to be chosen from a set of provided candidate answers (Richardson et al. 2013 Hill et al.2016). Presumably, questions with more given candidate answers are more challeng- ing. The Stanford Question Answering Dataset (SQuAD) introduced recently byRajpurkar et al. (2016) contains such more challenging questions whose correct answers can be any sequence of tokens from the given text. 
Moreover, unlike some other datasets whose questions and answers were created automatically in Cloze style (Hermann et al., 2015; Hill et al., 2016), the questions and answers in SQuAD were created by humans through crowdsourcing, which makes the dataset more realistic. Another real dataset, the Human-Generated MAchine Reading COmprehension dataset (MSMARCO) (Nguyen et al., 2016), provides a query together with several related documents collected from the Bing index. The answer to each query is generated by humans, and the answer words do not necessarily come from the given text.

In 1870, Tesla moved to Karlovac, to attend school at the Higher Real Gymnasium, where he was profoundly influenced by a math teacher, Martin Sekulic. The classes were held in German, as it was a school within the Austro-Hungarian Military Frontier. Tesla was able to perform integral calculus in his head, which prompted his teachers to believe that he was cheating. He finished a four-year term in three years, graduating in 1873.

much interest in building end-to-end neural architectures for various NLP tasks, including several pieces of work on machine comprehension (Hermann et al., 2015; Hill et al., 2016; Yin et al., 2016; Kadlec et al., 2016; Cui et al., 2016). However, given the properties of previous machine comprehension datasets, existing end-to-end neural architectures for the task either rely on the candidate answers (Hill et al., 2016; Yin et al., 2016) or assume that the answer is a single token (Hermann et al., 2015; Kadlec et al., 2016; Cui et al., 2016), which makes these methods unsuitable for the SQuAD/MSMARCO datasets. In this paper, we propose a new end-to-end neural architecture to address the machine comprehension problem as defined in the SQuAD/MSMARCO datasets. For the MSMARCO dataset, we only make use of the words in the given text to generate the answer.

Our contributions can be summarized as follows: (1) We propose two new end-to-end neural network models for machine comprehension, which combine match-LSTM and Ptr-Net to handle the special properties of the SQuAD dataset. To the best of our knowledge, we are the first to propose the boundary model, which is more suitable to the SQuAD/MSMARCO tasks, and the first to integrate attention-based word-pair matching into machine comprehension tasks. (2) We have achieved an exact match score of 71.3% and an F1 score of 80.8% on the unseen SQuAD test dataset, which is much better than the feature-engineered solution (Rajpurkar et al., 2016). Our performance is also close to the state of the art on SQuAD, which is 74.8% in terms of exact match and 82.2% in terms of F1, collected from the SQuAD Leaderboard. (3) Our further visualization of the models reveals some useful insights into the attention mechanism for reasoning about the questions, and we also show that the boundary model can overcome the early-stop prediction problem of the sequence model. Besides, we have made our code available online.

Table 1: A paragraph from Wikipedia and three associated questions together with their answers,
The tokens in bold in the paragraph are our predicted answers while the texts next to the questions are the ground truth answers..\nSpecifically, observing that in the SQuAD/MSMARCO dataset many questions could be entailed from some sentences in the original text, we adopt a match-LSTM model that we developed earlier for textual entailment (Wang & Jiang2016) as one layer of our model. We build a bi-directional match-LSTM on the given passage with attentions on the question for each word so that each posi- tion in the paragraph will have a hidden representation reflecting its relation to the question. Then we further adopt the Pointer Net (Ptr-Net) model developed byVinyals et al.(2015) to select the words in these positions based on the hidden representations built by match-LSTM as an answer. We propose two ways to apply the Ptr-Net model for our task: a sequence model which selects the answer word by word, and a boundary model which only selects the start and end points of the answer span. Experiments on the SQuAD dataset show that our two models both outperform the best performance reported by[Rajpurkar et al.(2016). Moreover, using an ensemble of several of our models, we can achieve very competitive performance on SQuAD. For the MSMARCO dataset, a real query based problem, our boundary model outperforms our sequence model with a big margin It also outperforms the golden passage baseline.\nAnswer Pointer Layer Match-LSTM Pad Layer LSTM preprocess- ing Layer for P attend school Real Gymnasium atten soft attention. soft attention LsTm preprocess- hq h. ing Layer for Q Whye did Whye Tesla did Tesla (a) Sequence Model (b) Boundary Model\nFigure 1: An overview of our two models. Both models consist of an LSTM preprocessing layer a match-LSTM layer and an Answer Pointer layer. For each match-LSTM in a particular direction hf, which is defined as H'a,, is computed using the a in the corresponding direction, as described. in Eqn. (2"}, {"section_index": "3", "section_name": "2.1 MATCH-LSTM", "section_text": "In a recent work on learning natural language inference, we proposed a match-LSTM model fo. predicting textual entailment (Wang & Jiang2016). In textual entailment, two sentences are giver. where one is a premise and the other is a hypothesis. To predict whether the premise entails th hypothesis, the match-LSTM model goes through the tokens of the hypothesis sequentially. At eacl. position of the hypothesis, attention mechanism is used to obtain a weighted vector representatior. of the premise. This weighted premise is then to be combined with a vector representation of th. current token of the hypothesis and fed into an LSTM, which we call the match-LSTM. The match LSTM essentially sequentially aggregates the matching of the attention-weighted premise to eacl. token of the hypothesis and uses the aggregated matching result to make a final prediction.."}, {"section_index": "4", "section_name": "2.2 POINTER NET", "section_text": "Vinyals et al.(2015) proposed a Pointer Network (Ptr-Net) model to solve a special kind of problems where we want to generate an output sequence whose tokens must come from the input sequence Instead of picking an output token from a fixed vocabulary, Ptr-Net uses attention mechanism as a pointer to select a position from the input sequence as an output symbol. The pointer mechanisn has inspired some recent work on language processing (Gu et al.]2016]Kadlec et al.] 2016). 
"}, {"section_index": "5", "section_name": "2.3 OUR METHOD", "section_text": "Formally, the problem we are trying to solve can be formulated as follows. We are given a piece of text, which we refer to as a passage, and a question related to the passage. The passage is represented by a matrix P ∈ R^{d×P}, where P is the length (number of tokens) of the passage and d is the dimensionality of the word embeddings. Similarly, the question is represented by a matrix Q ∈ R^{d×Q}, where Q is the length of the question. Our goal is to identify a subsequence from the passage as the answer to the question.

As pointed out earlier, since the output tokens come from the input, we would like to adopt the Pointer Net for this problem. A straightforward way of applying Ptr-Net here is to treat an answer as a sequence of tokens from the input passage, ignoring the fact that these tokens are consecutive in the original passage, because Ptr-Net does not make the consecutivity assumption. Specifically, we represent the answer as a sequence of integers a = (a_1, a_2, ...), where each a_i is an integer between 1 and P, indicating a certain position in the passage.

Alternatively, if we want to ensure consecutivity, that is, if we want to ensure that we indeed select a subsequence from the passage as an answer, we can use the Ptr-Net to predict only the start and the end of an answer. In this case, the Ptr-Net only needs to select two tokens from the input passage, and all the tokens between these two tokens in the passage are treated as the answer. Specifically, we represent the answer to be predicted as two integers a = (a_s, a_e), where a_s and a_e are integers between 1 and P.

We refer to the first setting above as a sequence model and the second setting above as a boundary model. For either model, we assume that a set of training examples in the form of triplets {(P_n, Q_n, a_n)}_{n=1}^{N} is given.

An overview of the two neural network models is shown in Figure 1. Both models consist of three layers: (1) an LSTM preprocessing layer that preprocesses the passage and the question using LSTMs; (2) a match-LSTM layer that tries to match the passage against the question; and (3) an Answer Pointer (Ans-Ptr) layer that uses Ptr-Net to select a set of tokens from the passage as the answer. The difference between the two models lies only in the third layer."}, {"section_index": "6", "section_name": "LSTM Preprocessing Layer", "section_text": "The purpose of the LSTM preprocessing layer is to incorporate contextual information into the representation of each token in the passage and the question. We use a standard one-directional LSTM (Hochreiter & Schmidhuber, 1997) to process the passage⁴ and the question separately, as shown below:

$$H^p = \overrightarrow{\mathrm{LSTM}}(P), \qquad H^q = \overrightarrow{\mathrm{LSTM}}(Q)$$

The resulting matrices H^p ∈ R^{l×P} and H^q ∈ R^{l×Q} are hidden representations of the passage and the question, where l is the dimensionality of the hidden vectors. In other words, the ith column vector h^p_i (or h^q_i) in H^p (or H^q) represents the ith token in the passage (or the question) together
with some contextual information from the left."}, {"section_index": "7", "section_name": "Match-LSTM Layer", "section_text": "We apply the match-LSTM model (Wang & Jiang, 2016) proposed for textual entailment to our machine comprehension problem by treating the question as a premise and the passage as a hypothesis. The match-LSTM sequentially goes through the passage. At position i of the passage, it first uses the standard word-by-word attention mechanism to obtain an attention weight vector α_i ∈ R^{1×Q} as follows:

$$\vec{G}_i = \tanh\big(W^q H^q + (W^p h^p_i + W^r \vec{h}^{\,r}_{i-1} + b^p) \otimes e_Q\big),$$
$$\vec{\alpha}_i = \mathrm{softmax}\big(w^{\top} \vec{G}_i + b \otimes e_Q\big),$$

where W^q, W^p, W^r ∈ R^{l×l}, b^p, w ∈ R^{l×1} and b ∈ R are parameters to be learned, G_i ∈ R^{l×Q} is the intermediate result, h^r_{i−1} ∈ R^{l×1} is the hidden vector of the one-directional match-LSTM (to be explained below) at position i − 1, and the outer product (· ⊗ e_Q) produces a matrix or row vector by repeating the vector or scalar on the left Q times.

Essentially, the resulting attention weight α_{i,j} above indicates the degree of matching between the ith token in the passage and the jth token in the question.

⁴For the MSMARCO dataset, P actually consists of several unrelated documents. The previous states of the preprocessing LSTM and the match-LSTM used to compute the first state of each document are set to zero.
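The attention step above can be sketched in a few lines of NumPy (our own illustrative code and variable names, following Eqn. (2) for a single passage position):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def word_attention(Hq, hp_i, hr_prev, Wq, Wp, Wr, bp, w, b):
    """Word-by-word attention for passage position i.

    Hq:      (l, Q) question representation
    hp_i:    (l,)   passage token representation at position i
    hr_prev: (l,)   previous match-LSTM hidden state
    Returns alpha_i, a distribution over the Q question tokens.
    """
    inner = Wp @ hp_i + Wr @ hr_prev + bp   # (l,) query vector
    G = np.tanh(Wq @ Hq + inner[:, None])   # (l, Q); inner repeated Q times
    return softmax(w @ G + b)               # (Q,) attention weights
```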
3k is modeled as follows:\np(ak = j|a1, a2,...,ak-1, H)\nTo train the model, we minimize the following loss function based on the training examples\nN logp(an|Pn, Qn). n=1\nThe Boundary Model: The boundary model works in a way very similar to the sequence mode. above, except that instead of predicting a sequence of indices a1, a2, ..., we only need to predict. two indices as and ae. So the main difference from the sequence model above is that in the boundary model we do not need to add the zero padding to H', and the probability of generating an answer is. simply modeled as\nAs this boundary model could point to a span covering too many tokens without any restriction we try to manually limit the length of the predicted span and then search the span with the highest probability computed by p(as) p(ae as) as the answer.\nIn this section, we present our experiment results and perform some analyses to better understand how our models works.\nWe use the Stanford Question Answering Dataset (SQuAD) v1.1 and the human-generated Mi. crosoft MAchine Reading COmprehension (MSMARCO) dataset v1.1 to conduct our experiments\nPassages in SQuAD come from 536 articles in Wikipedia covering a wide range of topics. Each passage is a single paragraph from a Wikipedia article, and each passage has around 5 questions associated with it. In total, there are 23,215 passages and 107,785 questions. The data has been spli into a training set (with 87,599 question-answer pairs), a development set (with 10,570 question answer pairs) and a hidden test set."}, {"section_index": "8", "section_name": "3.2 EXPERIMENT SETTINGS", "section_text": "We first tokenize all the passages, questions and answers. We use word embeddings from GloVe (Pennington et al.|[2014) to initialize the model. Words not found in GloVe are initialized as zero vectors. The word embeddings are not updated during the training of the model..\nFor the SQuAD dataset, the performance is measured by two metrics: percentage of exact match with the ground truth answers and word-level F1 score when comparing the tokens in the predicted answers with the tokens in the ground truth answers. Note that in the development set and the test set each question has around three ground truth answers. F1 scores with the best matching answers are used to compute the average F1 score. For the MSMARCO dataset, the metrics in the official tool of MSMARCO evaluation are BLEU-1/2/3/4 and Rouge-L, which are widely used in many domains.\np(a|H) p(as|H)p(ae|as, H)\nFor the MSMARCO dataset, the questions are user queries issued to the Bing search engine, the context passages are real Web documents and the answers are human-generated. We select the span that has the highest F1 score with the gold standard answer for training and only predict the span in the passages during evaluation. The data has been split into a training set (82326 pairs), a development set (10047 pairs) and a test set (9650 pairs).\nThe dimensionality l of the hidden layers is set to be 150. We use ADAMAX (Kingma & Ba] 2015. with the coefficients 1 = 0.9 and 2 = 0.999 to optimize the model. Each update is computed through a minibatch of 30 instances. 
In this section, we present our experiment results and perform some analyses to better understand how our models work.

We use the Stanford Question Answering Dataset (SQuAD) v1.1 and the human-generated Microsoft MAchine Reading COmprehension (MSMARCO) dataset v1.1 to conduct our experiments.

Passages in SQuAD come from 536 articles in Wikipedia covering a wide range of topics. Each passage is a single paragraph from a Wikipedia article, and each passage has around 5 questions associated with it. In total, there are 23,215 passages and 107,785 questions. The data has been split into a training set (with 87,599 question-answer pairs), a development set (with 10,570 question-answer pairs) and a hidden test set.

For the MSMARCO dataset, the questions are user queries issued to the Bing search engine, the context passages are real Web documents, and the answers are human-generated. We select the span that has the highest F1 score with the gold-standard answer for training, and only predict spans in the passages during evaluation. The data has been split into a training set (82,326 pairs), a development set (10,047 pairs) and a test set (9,650 pairs)."}, {"section_index": "8", "section_name": "3.2 EXPERIMENT SETTINGS", "section_text": "We first tokenize all the passages, questions and answers. We use word embeddings from GloVe (Pennington et al., 2014) to initialize the model. Words not found in GloVe are initialized as zero vectors. The word embeddings are not updated during the training of the model.

For the SQuAD dataset, the performance is measured by two metrics: the percentage of exact matches with the ground-truth answers, and the word-level F1 score when comparing the tokens in the predicted answers with the tokens in the ground-truth answers. Note that in the development and test sets each question has around three ground-truth answers; F1 scores with the best-matching answers are used to compute the average F1 score. For the MSMARCO dataset, the metrics in the official MSMARCO evaluation tool are BLEU-1/2/3/4 and Rouge-L, which are widely used in many domains.

The dimensionality l of the hidden layers is set to 150. We use ADAMAX (Kingma & Ba, 2015) with the coefficients β₁ = 0.9 and β₂ = 0.999 to optimize the model. Each update is computed over a minibatch of 30 instances. We do not use L2-regularization.

Table 2: Experiment results on the SQuAD and MSMARCO datasets. Here "LSTM with Ans-Ptr" removes the attention mechanism in match-LSTM (mLSTM) by using the final state of the LSTM for the question to replace the weighted sum of all the states. Our best boundary model is the further tuned model whose ablation study is shown in Table 4; "en" refers to the ensemble method.

Model                            | EM Dev | EM Test | F1 Dev | F1 Test | MSMARCO BLEU-1/2/3/4 / Rouge-L (Dev & Test)
Human                            | 80.3   | 77.0    | 90.5   | 86.8    | - & 46/-/-/- / 47
Golden Passage                   | -      | -       | -      | -       | 19.6/18.8/18.1/17.5 / 32.3 & -
LR (Rajpurkar et al., 2016)      | 40.0   | 40.4    | 51.0   | 51.0    | -
DCR (Yu et al., 2016)            | 62.5   | 62.5    | 71.2   | 71.0    | -
LSTM with Ans-Ptr (Sequence)     | 37.7   | -       | 48.5   | -       | 10.3/7.2/5.6/4.6 / 21.6 & -
LSTM with Ans-Ptr (Boundary)     | 45.2   | -       | 55.3   | -       | 32.0/25.3/22.2/20.4 / 32.3 & -
mLSTM with Ans-Ptr (Sequence)    | 54.4   | -       | 68.2   | -       | 12.5/9.2/7.5/6.5 / 22.5 & -
mLSTM with Ans-Ptr (Boundary)    | 63.0   | -       | 72.7   | -       | 32.9/26.4/23.2/21.6 / 33.8 & -
Our best boundary model          | 67.0   | 66.9    | 77.2   | 77.1    | 40.1/33.3/30.1/28.2 / 37.2 & 40.7/33.9/30.6/28.7 / 37.3
mLSTM with Ans-Ptr (Boundary+en) | 67.6   | 67.9    | 76.8   | 77.0    | -
Our best boundary model (en)     | 71.3   | 72.6    | 80.0   | 80.8    | -

Table 3: Statistical analysis on the development datasets. #w: number of words on average; P: passage; Q: question; A: answer; raw: raw data from the development dataset; seq/bou: the answers generated by the sequence/boundary models with match-LSTM.

     | SQuAD #w in A/Q/P | MSMARCO #w in A/Q/P
raw  | 3.1 / 11 / 141    | 16.3 / 6 / 667
seq  | 2.4 / - / -       | 6.7 / - / -
bou  | 3.0 / - / -       | 15.7 / - / -

We also conduct experiments with the ablation of the attention mechanism in match-LSTM. Specifically, we use the final representation of the question to replace the weighted sum of the question representations. For the MSMARCO dataset, where the context for each question consists of around 10 documents, the "Golden Passage" baseline directly uses the human-labelled document that answers the question as the prediction.

From the results in Table 2, we can see that the boundary model clearly outperforms the sequence model by a large margin on both datasets. We hypothesise that the sequence model is more likely to stop word generation too early, and that the boundary model can overcome this problem. A statistical analysis of the answers generated by our sequence and boundary models is shown in Table 3. We can see that the length of the answers generated by the sequence model is much shorter than the ground truth. Especially for the MSMARCO task, where the answers are usually much longer, the sequence model generates only 7 words on average, while the ground-truth answers are 16 on average; the boundary model generates nearly the same number of words as the ground truth. Several answers generated by our models are shown in Appendix A. From Table 2, we can also see that the performance degrades when the attention mechanism in match-LSTM is removed, while for the MSMARCO dataset the attention mechanism matters less, with no more than 2 percent reduction in BLEU and Rouge-L scores from the attention-mechanism ablation.

Model       | SQuAD EM & F1 | MSMARCO BLEU-1/2/3/4 & Rouge-L
Best model  | 67.0 & 77.2   | 40.1/33.3/30.1/28.2 & 37.2
-bi-Ans-Ptr | 66.5 & 76.8   | 39.9/32.8/29.6/27.9 & 36.7
-deep       | 65.9 & 75.8   | 39.6/32.6/29.4/27.4 & 35.9
-elem       | 65.2 & 75.4   | 38.1/31.4/28.3/26.5 & 35.5
-pre-LSTM   | 64.0 & 72.9   | 39.6/32.8/29.8/27.7 & 36.3

Table 4: Ablation study for our best boundary model on the development datasets.
Our best model is a further tuned boundary model obtained by considering "bi-Ans-Ptr", which adds a bi-directional answer pointer; "deep", which adds another two bi-directional LSTM layers between the match-LSTM and Answer Pointer layers; and "elem", which adds the element-wise comparison terms (h^p_i − H^q α_i) and (h^p_i ⊙ H^q α_i) into Eqn. (3). "-pre-LSTM" refers to removing the preprocessing layer.

Based on the effectiveness of the boundary pointer and match-LSTM, we conduct further exploration of the boundary model by adding the element-wise comparison terms (h^p_i − H^q α_i) and (h^p_i ⊙ H^q α_i) into Eqn. (3) in the match-LSTM layer, adding 2 more bi-directional LSTM layers between the match-LSTM and Ans-Ptr layers, and adding a bi-directional Ans-Ptr. We show the ablation study of this further tuned model in Table 4. We can see that adding element-wise matching makes the biggest improvement for our boundary model. We also try to remove the phrase-level representation by removing the preprocessing LSTM and using the word-level representations as the inputs of the match-LSTM. Interestingly, we find that the phrase-level representation has little effect on the MSMARCO task.

Overall, we can see that both of our match-LSTM models clearly outperform the logistic regression model of Rajpurkar et al. (2016), which relies on carefully designed features. The improvement of our models over the logistic regression model shows that our end-to-end neural network models, without much feature engineering, are very effective on these tasks and datasets. Our boundary model also outperforms the DCR model (Yu et al., 2016), which maximizes the probability of the gold-standard span among all candidate spans through a neural network structure.

Figure 2: Performance breakdown by answer lengths and question types on the SQuAD development dataset. Top: Plot (1) shows the performance of our two models (where s refers to the sequence model, b refers to the boundary model, and e refers to the ensemble boundary model) over answers with different lengths. Plot (2) shows the numbers of answers with different lengths. Bottom: Plot (3) shows the performance of the two models on different types of questions. Plot (4) shows the numbers of different types of questions.

First, we suspect that longer answers are harder to predict. To verify this hypothesis, we analysed the performance in terms of both exact match and F1 score with respect to the answer length on the development set, as shown in Figure 2. For example, for questions whose answers contain more than 9 tokens, the F1 score of the boundary model drops to around 55% and the exact match score drops to only around 30%, compared to an F1 score and exact match score of close to 72% and 67%, respectively, for questions with single-token answers. This supports our hypothesis.
[Figure 3 shows attention heatmaps for four questions: "In what language were the classes given?" (answer: German), "Who was Tesla's main influence in Karlovac?" (answer: Martin Sekulic), "Why did Tesla go to Karlovac?" (answer: attend school at the Higher Real Gymnasium), and "Which two governing bodies have legislative veto power?" (answer: European Parliament and the Council of the European Union).]

Figure 3: Visualization of the attention weights α for four questions. The first three questions share the same paragraph. The title of each panel is the answer predicted by our model.

Next, we analyze the performance of our models on different groups of questions, as shown in Figure 2. We use a crude way to split the questions into different groups, based on a set of question words we have defined, including "what," "how," "who," "when," "which," "where," and "why." These different question words roughly refer to questions with different types of answers. For example, "when" questions look for temporal expressions as answers, whereas "where" questions look for locations as answers. According to the performance on the development dataset, our models work best for "when" questions. This may be because in this dataset temporal expressions are relatively easier to recognize. Other groups of questions whose answers are noun phrases, such as "what" questions, "which" questions and "where" questions, also get relatively better results. On the other hand, "why" questions are the hardest to answer. This is not surprising, because the answers to "why" questions can be very diverse and are not restricted to any certain type of phrase.

Finally, we would like to check whether the attention mechanism used in the match-LSTM layer is effective in helping the model locate the answer. We show the attention weights α in Figure 3; in the figure, the darker the color, the higher the weight. We can see that some words have been well aligned based on the attention weights. For example, the word "German" in the passage is aligned well to the word "language" in the first question, and the model successfully predicts "German" as the answer to the question. For the question word "who" in the second question, the word "teacher" receives a relatively high attention weight, and the model predicts the phrase "Martin Sekulic" after it as the answer, which is correct. For the third question, which starts with "why", the attention weights are more evenly distributed and it is not clear which words have been aligned to "why". For the last question, we can see that the word knowledge needed for generating the answer can also be detected by match-LSTM.
For example, the words "European", "Parliament", "Council", "European" and "Union" have higher attention weights on "governing" in the question. Even though our models can solve this type of question, they are still not able to solve questions that need multi-sentence reasoning. More answers generated by our models for questions requiring different kinds of reasoning are shown in Appendix B.

Machine comprehension of text has gained much attention in recent years, and increasingly researchers are building data-driven, end-to-end neural network models for the task. We will first review the recently released datasets and then some end-to-end models for this task."}, {"section_index": "9", "section_name": "4.1 DATASETS", "section_text": "A number of datasets for studying machine comprehension were created in Cloze style, by removing a single token from a sentence in the original corpus so that the task is to predict the missing word. For example, Hermann et al. (2015) created questions in Cloze style from CNN and Daily Mail highlights.
Hill et al. (2016) created the Children's Book Test dataset, which is based on children's stories. Cui et al. (2016) released two similar datasets in Chinese, the People Daily dataset and the Children's Fairy Tale dataset.

Instead of creating questions in Cloze style, a number of other datasets rely on human annotators to create real questions. Richardson et al. (2013) created the well-known MCTest dataset and Tapaswi et al. (2016) created the MovieQA dataset. In these datasets, candidate answers are provided for each question. Similar to these two datasets, the SQuAD dataset (Rajpurkar et al., 2016) was also created by human annotators. Different from the previous two, however, the SQuAD dataset does not provide candidate answers, and thus all possible subsequences of the given passage have to be considered as candidate answers.

Besides the datasets above, there are also a few other datasets created for machine comprehension, such as the WikiReading dataset (Hewlett et al., 2016) and the bAbI dataset (Weston et al., 2016), but they are quite different in nature from the datasets above.

There have been a number of studies proposing end-to-end neural network models for machine comprehension. A common approach is to use recurrent neural networks (RNNs) to process the given text and the question in order to predict or generate the answers (Hermann et al., 2015). Attention mechanisms are also widely used on top of RNNs in order to match the question with the given passage (Hermann et al., 2015; Chen et al., 2016). Given that answers often come from the given passage, Pointer Network has been adopted in a few studies in order to copy tokens from the given passage as answers (Kadlec et al., 2016; Trischler et al., 2016). Compared with existing work, we use match-LSTM to match a question and a given passage, and we use Pointer Network in a different way, such that we can generate answers that contain multiple tokens from the given passage.

Memory Networks (Weston et al., 2015) have also been applied to machine comprehension (Sukhbaatar et al., 2015; Kumar et al., 2016; Hill et al., 2016), but their scalability when applied to a large dataset is still an issue. In this work, we did not consider memory networks for the SQuAD/MSMARCO datasets.

The setting of visual question answering (Antol et al., 2015) is quite similar to machine comprehension, although its answers are usually very short. Consequently, the sequence order of the word-level attention representations used to align the image and the question (Xu & Saenko, 2016; Fukui et al., 2016; Li et al., 2016) is not exploited in VQA, whereas our model focuses on word-by-word attention and uses an LSTM to aggregate the aligned pairs, which is helpful for generating longer sequences as answers."}, {"section_index": "10", "section_name": "5 CONCLUSIONS", "section_text": "In this paper, we developed two models for the machine comprehension problem defined on the Stanford Question Answering Dataset (SQuAD) and the Human-Generated MAchine Reading COmprehension (MSMARCO) dataset, both making use of match-LSTM and Pointer Network. Experiments on the SQuAD and MSMARCO datasets showed that our second model, the boundary model, achieves performance close to the state of the art on the SQuAD dataset and state-of-the-art performance on the MSMARCO dataset. We also showed that the boundary model can overcome the early-stop prediction problem of the sequence model.

In the future, we plan to look further into the different types of questions and focus on those questions which currently have low performance, such as "why" questions and questions requiring multi-sentence reasoning. We also plan to test how our models could be applied to other machine comprehension datasets.

This research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its International Research Centres in Singapore Funding Initiative. We thank Pranav Rajpurkar for testing our model on the hidden test dataset and Percy Liang for helping us with the Dockerfile for Codalab.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-based neural networks for Chinese reading comprehension. arXiv preprint arXiv:1607.02250, 2016.

Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the Conference on Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.

Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. WIKIREADING: A novel large-scale language understanding task over Wikipedia. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of the International Conference on Machine Learning, 2016.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Proceedings of the Conference on Advances in Neural Information Processing Systems, 2015.

Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks.
In Proceedings of the International Conference on Learning Representations, 2016.

Wenpeng Yin, Sebastian Ebert, and Hinrich Schutze. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341, 2016.

Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunk extraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996, 2016."}, {"section_index": "11", "section_name": "A APPENDIX", "section_text": "We show the predictions of our boundary and sequence models on two cases from the two datasets in Table 5. It can be seen that the sequence model is more likely to predict a shorter sequence, which is the problem of early-stop prediction.

Table 5: Prediction samples for the sequence and boundary models. The first case is sampled from the SQuAD dataset and the second from the MSMARCO dataset.

(1) Context: As opposed to broadcasts of primetime series, CBS broadcast special episodes of its late night talk shows as its lead-out programs for Super Bowl 50, beginning with a special episode of The Late Show with Stephen Colbert following the game.
    Question (Syntactic): What CBS show followed the Super Bowl?
    Golden Answer: The Late Show with Stephen Colbert
    match-LSTM (Sequence): The Late Show
    match-LSTM (Boundary): The Late Show with Stephen Colbert

(2) Context: Urinalysis is a test that evaluates a sample of your urine. Urinalysis is used to detect and assess a wide range of disorders, such as urinary tract infection, kidney disease and diabetes. Urinalysis involves examining the appearance, concentration and content of urine. Abnormal urinalysis results may point to a disease or illness. For example, a urinary tract infection can make urine look cloudy instead of clear. Increased levels of protein in urine can be a sign of kidney disease.
    Query: what can urinalysis detect?
    Golden Answer: Detect and assess a wide range of disorders, such as urinary tract infection, kidney disease and diabetes.
    match-LSTM (Sequence): Urinalysis
    match-LSTM (Boundary): Urinalysis is used to detect and assess a wide range of disorders, such as urinary tract infection, kidney disease and diabetes"}, {"section_index": "12", "section_name": "B APPENDIX", "section_text": "Table 6: Samples of different types of reasoning in the SQuAD dataset. "match-LSTM" refers to "match-LSTM with Ans-Ptr" and "LSTM" refers to "LSTM with Ans-Ptr", which is the ablation of the attention mechanism in match-LSTM.

We show how four different models work on different types of questions in the SQuAD dataset in Table 6. After analysing a hundred cases, we see that our models are not able to solve questions that need multi-sentence reasoning, and that the model without the attention mechanism has less power to identify the important key words, as in the third case shown in Table 6.

(1) Context: [...] begins to resemble the Carnot cycle.
    Question (Synonymy): What is the Rankine cycle sometimes called?
    Golden Answer: practical Carnot cycle
    LSTM (Sequence): Carnot cycle
    match-LSTM (Sequence): Carnot cycle
    LSTM (Boundary): practical Carnot cycle
    match-LSTM (Boundary): Carnot cycle

(2) Context: While the Commission has a monopoly on initiating legislation, the European
"}, {"section_index": "12", "section_name": "B APPENDIX", "section_text": "We show how four different models work on different types of questions in the SQuAD dataset through Table 6. After the analysis of a hundred cases, we see that our models are not able to solve questions that need multi-sentence reasoning, and the model without the attention mechanism has less power to identify important key words, as in the third case shown in Table 6.

Table 6: Different types of reasoning samples in the SQuAD dataset. "match-LSTM" refers to "match-LSTM with Ans-Ptr" and "LSTM" refers to "LSTM with Ans-Ptr", the ablation of the attention mechanism in match-LSTM.

(1) Context: ... begins to resemble the Carnot cycle.
Question (Synonymy): What is the Rankine cycle sometimes called?
Golden Answer: practical Carnot cycle
LSTM (Sequence): Carnot cycle
match-LSTM (Sequence): Carnot cycle
LSTM (Boundary): practical Carnot cycle
match-LSTM (Boundary): Carnot cycle

(2) Context: While the Commission has a monopoly on initiating legislation, the European Parliament and the Council of the European Union have powers of amendment and veto during the legislative process.
Question (Knowledge): Which two governing bodies have legislative veto power?
Golden Answer: the European Parliament and the Council of the European Union
LSTM (Sequence): European Parliament and the Council of the European Union
match-LSTM (Sequence): European Parliament and the Council of the European Union
LSTM (Boundary): European Parliament and the Council of the European Union
match-LSTM (Boundary): European Parliament and the Council of the European Union

(3) Context: Current faculty include the anthropologist Marshall Sahlins, historian Dipesh Chakrabarty, ... Shakespeare scholar David Bevington, and renowned political scientists John Mearsheimer and Robert Pape.
Question (Syntactic): What Shakespeare scholar is currently on the university's faculty?
Golden Answer: David Bevington
LSTM (Sequence): Marshall Sahlins
match-LSTM (Sequence): David Bevington
LSTM (Boundary): Marshall Sahlins
match-LSTM (Boundary): David Bevington

(4) Context: The V&A Theatre & Performance galleries, formerly the Theatre Museum, opened in March 2009. The collections are stored by the V&A, and are available for research, exhibitions and other shows. They hold the UK's biggest national collection of material about live performance in the UK since Shakespeare's day, covering drama, dance, musical theatre, circus, music hall, rock and pop, and most other forms of live entertainment.
Question (Reasoning): What collection does the V&A Theatre & Performance galleries hold?
Golden Answer: material about live performance
LSTM (Sequence): Theatre
match-LSTM (Sequence): the Theatre Museum
LSTM (Boundary): research, exhibitions and other shows
match-LSTM (Boundary): Theatre Museum

(5) Context: Along with giving the offender his "just deserts", achieving crime control via incapacitation and deterrence is a major goal of criminal punishment.
Question (Ambiguous): What is the main goal of criminal punishment of civil disobedients?
Golden Answer: achieving crime control via incapacitation and deterrence
LSTM (Sequence): deterrence
match-LSTM (Sequence): just deserts
LSTM (Boundary): incapacitation and deterrence
match-LSTM (Boundary): incapacitation and deterrence"}]
rye9LT8cee
[{"section_index": "0", "section_name": "ALTERNATING DIRECTION METHOD OF MULTIPLIERS FOR SPARSE CONVOLUTIONAL NEURAL NETWORKS", "section_text": "Farkhondeh Kiaee. Christian Gagne. and Mahdieh Abbasi\nComputer Vision and Systems Laboratory Department of Electrical Engineering and Computer Engineering Universite Laval, Quebec, QC G1V 0A6, Canada\nfarkhondeh.kiaee.l,mahdieh.abbasi.1}@ulaval.ca christian.gagne@gel.ulaval.ca\nThe storage and computation requirements of Convolutional Neural Network. (CNNs) can be prohibitive for exploiting these models over low-power or em bedded devices. This paper reduces the computational complexity of the CNNs by minimizing an objective function, including the recognition loss that is augmentec with a sparsity-promoting penalty term. The sparsity structure of the network is. identified using the Alternating Direction Method of Multipliers (ADMM), whicl is widely used in large optimization problems. This method alternates betweer promoting the sparsity of the network and optimizing the recognition perfor. mance, which allows us to exploit the two-part structure of the corresponding objective functions. In particular, we take advantage of the separability of th sparsity-inducing penalty functions to decompose the minimization problem intc. sub-problems that can be solved sequentially. Applying our method to a variety of state-of-the-art CNN models, our proposed method is able to simplify the orig. inal model, generating models with less computation and fewer parameters, whil maintaining and often improving generalization performance. Accomplishment on a variety of models strongly verify that our proposed ADMM-based methoc. can be a very useful tool for simplifying and improving deep CNNs.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "To improve the efficiency of CNNs, several attempts have been made to reduce the redundancy in the network. Jaderberg et al.(2014) proposed to represent the full-rank original convolutiona filters tensor by a low-rank approximation composed of a sequence of two regular convolutiona layers, with rectangular filters in the spatial domain. A different network connection structure is suggested byIoannou et al.(2015), which implicitly learns linear combinations of rectangular filters in the spatial domain, with different vertical/horizontal orientations. Tai et al.[(2015) presented ar exact and closed-form solution to the low-rank decomposition approach of Jaderberg et al.(2014 to enforce connection sparsity on CNNs.\nThe alternating direction method of multipliers (ADMM) (Boyd et al.(2011)) has been extensively studied to minimize the augmented Lagrangian function for optimization problems, by breaking them into smaller pieces. It turns out that ADMM has been recently applied in a variety of contexts (Lin et al.(2013a);Shen et al.(2012);Meshi & Globerson(2011). We demonstrate that the ADMM"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep Convolutional Neural Networks (CNNs) have achieved remarkable performance in challeng. ing computer vision problems such as image classification and object detection tasks, at the cost of a large number of parameters and computational complexity. These costs can be problematic for. deployment especially on mobile devices and when real-time operation is needed..\nSparse learning has been shown to be efficient at pruning the irrelevant parameters in many practical. 
To improve the efficiency of CNNs, several attempts have been made to reduce the redundancy in the network. Jaderberg et al. (2014) proposed to represent the original full-rank convolutional filter tensor by a low-rank approximation composed of a sequence of two regular convolutional layers, with rectangular filters in the spatial domain. A different network connection structure was suggested by Ioannou et al. (2015), which implicitly learns linear combinations of rectangular filters in the spatial domain with different vertical/horizontal orientations. Tai et al. (2015) presented an exact and closed-form solution to the low-rank decomposition approach of Jaderberg et al. (2014) to enforce connection sparsity on CNNs.

The alternating direction method of multipliers (ADMM) (Boyd et al., 2011) has been extensively studied as a way to minimize an augmented Lagrangian function by breaking large optimization problems into smaller pieces. ADMM has recently been applied in a variety of contexts (Lin et al., 2013a; Shen et al., 2012; Meshi & Globerson, 2011). We demonstrate that ADMM provides an effective tool for imposing optimal sparsity on deep neural connections. This is achieved by augmenting the recognition loss of a pre-trained network with a sparsity-inducing penalty term. Different functions, including the l0-norm and its convex l1-norm relaxation, can be considered as the penalty term. The variables are then partitioned into two subsets, playing two different roles: 1) promoting the sparsity of the network at the level of a predetermined sparse block structure; 2) minimizing the recognition error.

Figure 1: Architecture of a typical CNN; the selected sparsity blocks at the convolutional and fully connected layers are shown in blue.

The augmented Lagrangian function is then minimized with respect to each subset by fixing all other subsets at each iteration. In the absence of the penalty term, the performance results correspond to the original network with a dense structure. By gradually increasing the regularization factor of the sparsity-promoting penalty term, the optimal parameters move from their initial setting to the sparse structure of interest. This regularization factor is increased until the desired balance between performance and sparsity is achieved.

Several approaches have been developed to create sparse networks by applying pruning or sparsity regularizers: Wen et al. (2016); Alvarez & Salzmann (2016); Liu et al. (2015); Han et al. (2015). The most relevant to our work among these is the Structured Sparsity Learning (SSL) method of Wen et al. (2016), which regularizes the structures (i.e., filters, channels, filter shapes, and layer depth) of CNNs using a group lasso penalty function. However, the SSL approach suffers from two limitations compared to our proposed method. First, it relies on a rigid framework that disallows the incorporation of non-differentiable penalty functions (e.g., the l0-norm). Second, it requires training the original full model, while our proposed method allows decomposing the corresponding optimization problem into two sub-problems and exploiting the separability of the sparsity-inducing penalty functions to find an analytical solution for one of the sub-problems (see Sec. 5 for more details).

Our numerical experiments on three benchmark datasets, namely CIFAR-10, CIFAR-100, and SVHN, show that the structure of the baseline networks can be significantly sparsified. While most previous efforts report a small drop or no change in performance, we found a slight increase of classification accuracy in some cases.

Consider a CNN network consisting of a total of L layers, including convolutional and fully connected layers, which are typically interlaced with rectified linear units and pooling (see Fig. 1).
Let the l-th layer include m^l input feature maps and n^l output feature maps, with W^l_{ij} representing the convolution filter between the i-th input and the j-th output feature map.(1) Our goal is to design the optimal filters, subject to sparse structural constraints. In order to obtain filters which balance a trade-off between the minimization of the loss function and sparseness, we consider the following objective function:

minimize_W  L_net(W) + mu f(W),    (1)

where L_net stands for the logistic loss function of the output layer of the network, which is a function of the convolution filters of all layers, W = {W^l_{ij} | i = 1, ..., m^l, j = 1, ..., n^l, l = 1, ..., L}. The term f(W) is a penalty function on the total size of the filters. The l0-norm (cardinality) function, or relaxations to higher orders such as the l1-norm, can be employed to promote the sparsity of the filters.

The parameter mu controls the effect of the sparsity penalty term. As mu varies, the solution of (1) traces the trade-off path between performance and sparsity. In the next section, the alternating direction method of multipliers (ADMM), which is employed to find the optimal solution of (1), is described.

(1) Each fully connected layer can also be thought of as being composed of several 1-dim convolutions, where the filter is of the same size as the input and is hence applied at only one location. In this context, looking at the fully connected layer in Fig. 1, one can see that it is just composed of one (m^l = 1) vectorized input feature map and n^l 1-dim convolutions, one for each output class.

Consider the following constrained optimization problem:

minimize_{W,F}  L_net(W) + mu f(F)   subject to   W - F = 0,    (3)

which is clearly equivalent to the problem stated in (1). The key point here is that by introducing the additional variable F and the additional constraint W - F = 0, the objective function of problem (1) is decoupled into two parts that depend on two different variables. The augmented Lagrangian associated with (3) is

L(W, F, Gamma) = L_net(W) + mu f(F) + Sum_{l,i,j} tr( (Gamma^l_{ij})^T (W^l_{ij} - F^l_{ij}) ) + (rho/2) Sum_{l,i,j} || W^l_{ij} - F^l_{ij} ||^2_F,

where Gamma = {Gamma^l_{ij}} is the dual variable (i.e., the Lagrange multiplier), rho is a positive scalar, and ||.||_F is the Frobenius norm.

In order to find a minimizer of the constrained problem (3), the ADMM algorithm uses a sequence of iterative computations:

1. Make use of a descent method to solve the performance-promoting problem

   W^{k+1} = argmin_W  L(W, F^{k}, Gamma^{k});    (4)

2. Solve the sparsity-promoting problem

   F^{k+1} = argmin_F  L(W^{k+1}, F, Gamma^{k});    (5)

3. Update the dual variable

   Gamma^{k+1} = Gamma^{k} + rho ( W^{k+1} - F^{k+1} ).    (6)

The three described computation steps are applied in an alternating manner. Re-estimation stops when the Frobenius distance between the values of F in two consecutive iterations, as well as the Frobenius distance between W and F at the current iteration, are less than a small threshold value. The details of steps 1 and 2 are described in the next sections. The outline of the proposed sparse CNN approach is summarized in Algorithm 1. At each individual regularization mu, in order to improve the performance of the sparse structured network, we fine-tune the initial non-augmented recognition loss subject to the parameters belonging to the identified sparse structure.

Algorithm 1 Outline of the proposed sparse CNN algorithm
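The body of Algorithm 1 did not survive extraction; the following Python sketch reconstructs its outline from the description above. It is a minimal sketch under our own interface assumptions: w_step, f_step, and fine_tune are caller-supplied routines standing in for the sub-problem solvers of Sections 3.1 and 3.2.

    import numpy as np

    def sparse_cnn_admm(W0, mu_schedule, rho, w_step, f_step, fine_tune,
                        max_iters=10, tol=1e-3):
        # W0: filters of the pre-trained dense network (dict of arrays, warm start)
        W = {k: v.copy() for k, v in W0.items()}
        for mu in mu_schedule:                               # gradually increase mu
            F = {k: v.copy() for k, v in W.items()}          # auxiliary variable
            G = {k: np.zeros_like(v) for k, v in W.items()}  # dual variable Gamma
            for _ in range(max_iters):
                F_old = {k: v.copy() for k, v in F.items()}
                W = w_step(W, F, G, rho)                     # Eq. (4), SGD on (7)
                F = f_step(W, G, mu, rho)                    # Eq. (5), thresholding on (8)
                for k in W:                                  # Eq. (6), dual update
                    G[k] += rho * (W[k] - F[k])
                d_F = max(np.linalg.norm(F[k] - F_old[k]) for k in F)
                d_WF = max(np.linalg.norm(W[k] - F[k]) for k in F)
                if d_F < tol and d_WF < tol:                 # stopping rule above
                    break
            W = fine_tune(W, {k: F[k] != 0 for k in F})      # per-mu fine-tuning
        return W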
"}, {"section_index": "3", "section_name": "3.1 PERFORMANCE PROMOTING STEP", "section_text": "By completing the squares with respect to W in the augmented Lagrangian L(W, F, Gamma), we obtain the following problem, equivalent to (4):

minimize_W  L_net(W) + (rho/2) Sum_{l,i,j} || W^l_{ij} - U^l_{ij} ||^2_F,    (7)

where U^l_{ij} = F^l_{ij} - (1/rho) Gamma^l_{ij}. From (7) it can be seen that, by exploiting the separability property of the ADMM method in the minimization of the augmented Lagrangian, the sparsity penalty term, which might be non-differentiable, is excluded from (7). Consequently, descent algorithms that rely on differentiability can be utilized to solve the performance-promoting sub-problem (7).

This property allows popular software and toolkit resources for deep learning, including Caffe, Theano, Torch, and TensorFlow, to be employed for implementing the proposed approach. In our work, we use the Stochastic Gradient Descent (SGD) method of TensorFlow to optimize the weights W, which seemed a reasonable choice for the high-dimensional optimization problem at hand. The entire procedure relies mainly on the standard forward-backward pass that is used to train the convolutional network."}, {"section_index": "4", "section_name": "3.2 SPARSITY PROMOTING STEP", "section_text": "The completion of squares with respect to F in the augmented Lagrangian can be used to show that (5) is equivalent to

minimize_F  mu f(F) + (rho/2) Sum_{l,i,j} || F^l_{ij} - V^l_{ij} ||^2_F,    (8)

where V^l_{ij} = W^l_{ij} + (1/rho) Gamma^l_{ij}. This formulation provides the framework to select arbitrary sparsity blocks: a sparse structure can then be achieved at the level of the selected block. Specifically, both terms on the right-hand side of (8), f(F) (for either the l0-norm or the l1-norm) as well as the square of the Frobenius norm, can be written as a summation of component-wise functions of a tensor. In our experiments, individual filter components are selected as the sparsity blocks (see Fig. 1); hence (8) can simply be expressed in terms of the F^l_{ij} components corresponding to the filters. However, any other individual sub-tensor component can be selected as the sparsity block.

More precisely, if f(F) is selected to be the l1-norm function, then L(F) = Sum_{l,i,j} ( mu || F^l_{ij} ||_F + (rho/2) || F^l_{ij} - V^l_{ij} ||^2_F ), and consequently (8) is converted to a minimization problem that only involves individual spatial filters. The solution of (8) can then be determined analytically by the following soft-thresholding operation:

F^{l*}_{ij} = (1 - a / ||V^l_{ij}||_F) V^l_{ij}   if ||V^l_{ij}||_F > a,   and 0 otherwise,    (9)

where a = mu/rho. Similarly, the following hard-thresholding operation is the analytical solution for the case of the l0-norm penalty term f(F):

F^{l*}_{ij} = V^l_{ij}   if ||V^l_{ij}||_F > sqrt(2 mu / rho),   and 0 otherwise.    (10)
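For concreteness, here is a minimal NumPy sketch of the two block-wise updates (9) and (10); the function names and array interface are our own, not the paper's.

    import numpy as np

    def soft_threshold_block(V, mu, rho):
        # Block soft-thresholding, Eq. (9): shrink the whole filter V
        # towards zero by a = mu / rho in Frobenius norm.
        a = mu / rho
        norm = np.linalg.norm(V)
        return (1.0 - a / norm) * V if norm > a else np.zeros_like(V)

    def hard_threshold_block(V, mu, rho):
        # Block hard-thresholding, Eq. (10): keep the filter only if its
        # Frobenius norm exceeds sqrt(2 * mu / rho).
        if np.linalg.norm(V) > np.sqrt(2.0 * mu / rho):
            return V
        return np.zeros_like(V)

Applied independently to every filter block V^l_{ij}, these operations solve (8) exactly, which is what makes the sparsity-promoting step inexpensive compared to the SGD-based performance-promoting step.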
For convex problems, ADMM is guaranteed to converge to the globally optimal solution (Boyd et al., 2011). For non-convex problems, where there is a general lack of theoretical proof, extensive computational experience suggests that ADMM works well when the penalty parameter rho in the augmented Lagrangian is chosen to be sufficiently large. This is related to the quadratic term, which tends to locally convexify the objective function for sufficiently large rho.

Unfortunately, in deep learning problems the objective is inherently highly non-convex, and consequently there is a risk of becoming trapped in a local optimum. This difficulty can be circumvented by considering a warm start, which may be obtained from a pre-trained version of the network. The proposed ADMM approach is then used to sparsify this solution. Using this procedure, as the experiments in the next section show, we have obtained good empirical results."}, {"section_index": "5", "section_name": "4 EXPERIMENTAL RESULTS", "section_text": "In order to validate our approach, we show that our proposed sparse CNN approach can be efficiently applied to existing state-of-the-art network architectures to reduce the computational complexity without reducing the accuracy. For this purpose, we evaluate the proposed scheme on the CIFAR-10, CIFAR-100, and SVHN datasets with several CNN models.

In the implementation of the performance-promoting step of Sec. 3.1, the batch size is 128 and the learning rate is set to a rather small value (0.001), so as to search the space around the dense initialized filters for a sparse solution. Since the regularization factor mu is selected from gradually increasing values, choosing long epochs for the performance-promoting step (inner loop) and the fine-tuning steps at the first small mu values would be computationally prohibitive and would result in over-fitting. Instead, we start with one epoch for the first mu and increase the number of epochs by delta for the next mu values up to the nu-th value, after which the number of epochs is limited to delta*nu. We found that delta = 1 and nu = 15 generally work well in our experiments. The number of training epochs is reported in Tables 3, 4, and 5 of Appendix B. If the maximum number of iterations of the inner loop is Theta (suggested value Theta = 10), the training for the nu-th mu value takes a total of Theta*nu + delta*nu epochs (Theta*nu for the performance-promoting step and delta*nu for fine-tuning) under the worst-case assumption, where the inner loop has not converged and completes only at the Theta-th iteration."}, {"section_index": "6", "section_name": "4.1 RESULTS ON CIFAR-10 OBJECT CLASSIFICATION", "section_text": "The CIFAR-10 dataset is a well-known small dataset of 60,000 32x32 images in 10 classes, comprising standard sets of 50,000 training images and 10,000 test images. As baselines for the CIFAR-10 dataset, we deploy four models: the Network in Network (NIN) architecture (Lin et al., 2013b), its low-rank version (Ioannou et al., 2015), a custom CNN, and its low-rank counterpart, the last two being learned from scratch on the CIFAR dataset. The configurations of the baseline models are outlined in Table 1. The architecture of the NIN model is slightly different from the one introduced in Lin et al. (2013b): the original NIN uses 5x5 filters in the first and second convolutional layers, which are replaced with one and two layers of 3x3 filters, respectively. As suggested by Ioannou et al. (2015), this modified architecture has comparable accuracy and less computational complexity.

Table 1: Structure of the baseline networks.

            NIN                         Low-rank NIN
conv1       3 x 3 x 192                 h: 1 x 3 x 96, v: 3 x 1 x 96
conv2,3     1 x 1 x 160, 1 x 1 x 96     1 x 1 x 160, 1 x 1 x 96
conv4       3 x 3 x 192                 h: 1 x 3 x 96, v: 3 x 1 x 96
conv5       3 x 3 x 192                 h: 1 x 3 x 96, v: 3 x 1 x 96
conv6,7     1 x 1 x 192, 1 x 1 x 192    1 x 1 x 192, 1 x 1 x 192
conv8       3 x 3 x 192                 h: 1 x 3 x 96, v: 3 x 1 x 96
conv9,10    1 x 1 x 192, 1 x 1 x 10     1 x 1 x 192, 1 x 1 x 10

In the low-rank networks, every spatial convolutional layer of the full-rank model is replaced with two convolutional layers with horizontal and vertical filters. NIN and low-rank NIN have accuracies of 90.71 % and 90.07 %, respectively. The custom CNN and its low-rank variant show baseline accuracies of 80.0 % and 80.2 %, respectively. The results of our experiments are plotted in Fig. 2 for both the l0-norm and l1-norm sparsity constraints.

Fig. 2 shows how the accuracy changes as we increase the regularization factor mu. The case mu = 0 can be considered as the baseline model. In order to avoid over-pruning of some layers, if the number of pruned filters in one layer exceeds 50 % of the total number of filters in that layer, we change the pruning threshold to the statistical mean of the Frobenius norm of all the filters at that layer in the sparsity-promoting step (explained in Sec. 3.2), to stop the over-pruning of that layer.
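A minimal sketch of this over-pruning guard, under our own array layout; the 50 % cap and the fallback to the mean Frobenius norm follow the description above.

    import numpy as np

    def layer_threshold(filter_norms, base_threshold, max_prune_frac=0.5):
        # filter_norms: Frobenius norms of all filters of one layer.
        # If the base threshold (e.g., a = mu/rho from Eq. (9)) would zero
        # out more than max_prune_frac of the layer, fall back to the mean
        # norm of the layer's filters as the pruning threshold.
        pruned_frac = np.mean(np.asarray(filter_norms) < base_threshold)
        if pruned_frac > max_prune_frac:
            return float(np.mean(filter_norms))
        return base_threshold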
Taking the NIN and low-rank NIN as an example: using the l0-norm sparsity function, the parameters in the networks are reduced by 34.13 % and 28.5 %, while the relative accuracy change is +0.5 % and +1.23 %, respectively. Using the l1-norm sparsity constraint achieves slightly lower accuracy than the l0-norm, although it still conveniently sparsifies the network.

Using the proposed sparsity-promoting approach on the custom CNN models, networks with sparse connections and similar accuracy (79.9 % vs 80 %) are achieved, but with approximately 49.4 % fewer parameters than the original models. Since the target solution is likely to be sparse, enforcing sparsity at the beginning of the learning process with our proposed method provides a way to avoid overfitting and to achieve a better performance. However, as the experimental results show, further increasing the sparsity strength of the solution may lead to slight over-smoothing and a drop in performance. For the low-rank CNN, we achieve a comparable accuracy of 80.14 % with 25 % fewer parameters.

To further verify that the advantage of ADMM training is statistically significant, a t-test is conducted by repeating the experiment 15 times on CIFAR-10 using the NIN model. The t-test results are in Appendix A. In Appendix B, we present detailed results for random sample runs over the configurations tested. According to the results presented in Table 3 of Appendix B, the number of parameters in the network can be reduced by a large factor, especially for the higher convolution layers. Interestingly, even with significant reductions in the number of parameters, the performance does not decrease much. This parameter reduction also gives rise to a speed-up of the network, reported in the last column of the tables. Note that most of the results listed in Table 3 outperform the baseline model.
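As an illustration of the significance test just described, a short Python sketch; the per-run accuracies are synthetic stand-ins (the raw numbers are not published), drawn with the means and standard deviations reported in Table 2 of Appendix A.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    admm = rng.normal(91.34, 0.04, size=15)      # 15 runs, ADMM training
    finetune = rng.normal(91.09, 0.06, size=15)  # 15 runs, standard fine-tuning

    t, p = stats.ttest_ind(admm, finetune, equal_var=False)
    print("t = %.1f, p = %.2g" % (t, p))         # p well below 0.001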
"}, {"section_index": "7", "section_name": "4.2 RESULTS ON CIFAR-100 OBJECT CLASSIFICATION", "section_text": "The CIFAR-100 dataset is similar to the CIFAR-10 dataset, containing 100 classes with 600 images per class. For CIFAR-100 we again use the baseline networks in Table 1, with only one structural difference (the NIN networks contain 100 feature maps at the last convolution layer, and the custom CNN networks contain 100 output labels). The baseline NIN, low-rank NIN, custom CNN, and low-rank CNN models show a test accuracy of 63.3 %, 63.6 %, 60.11 %, and 60.23 %, respectively.

Figure 2: Variation of the accuracy measure against (odd rows) values of the mu parameter and (even rows) the normalized number of zero elements (%), for the different models (columns: NIN, low-rank NIN, CNN, low-rank CNN) and datasets (row pairs: CIFAR-10, CIFAR-100, SVHN); curves show the l1-norm and l0-norm penalties.

Using the proposed sparsity-promoting approach on these networks, the total number of parameters in the layers can be reduced by a large factor with comparable or even better accuracy. In particular, on the CIFAR-100 dataset we achieve 64.09 % classification accuracy with 34.1 % sparsity for the NIN model, which improves upon the original NIN on this dataset. A test accuracy of 65.23 % is obtained for CIFAR-100 for the low-rank NIN model with 28.5 % sparsity, which surpasses the performance of the baseline model. The proposed method on the custom CNN and low-rank CNN shows performance comparable to the corresponding baseline models (59.82 % vs 60.11 % and 60.1 % vs 60.23 %) with much less computation (49.7 % and 24.4 % zero elements, respectively). The details of the changing sparsity in the different layers of the networks on the CIFAR-100 dataset are presented in Table 4 of Appendix B. The same conclusions made for CIFAR-10 can be drawn from these results."}, {"section_index": "8", "section_name": "4.3 RESULTS ON SVHN OBJECT CLASSIFICATION", "section_text": "The SVHN dataset consists of 630,420 32x32 color images of house numbers collected by Google Street View. The task for this dataset is to classify the digit located at the center of each image. The structure of the baseline models used on SVHN is similar to those used for CIFAR-10, presented in Table 1. The training and testing procedure of the baseline models follows Lin et al. (2013b). The baseline NIN, low-rank NIN, custom CNN, and low-rank CNN models show accuracies of 96.2 %, 96.7 %, 85.1 %, and 87.6 %, respectively. For this dataset, by applying our proposed sparse approach to the NIN and low-rank NIN models, we obtain higher accuracies of 96.97 % and 99 % with 34.17 % and 28.6 % fewer parameters, respectively. We also achieve comparable accuracies of 83.3 % and 86.3 % using 49.7 % and 24.7 % fewer parameters on the custom CNN and low-rank CNN models, respectively (see Table 5 of Appendix B for the details of the changing sparsity in the different layers of the networks on the SVHN dataset)."}, {"section_index": "9", "section_name": "5 DISCUSSION", "section_text": "In this paper we proposed a framework for the optimal sparsification of a pre-trained CNN. We employed the ADMM algorithm to solve the sparsity-promoting problem, whose solution gradually moves from the original dense network to the sparse structure of interest as our emphasis on the sparsity-promoting penalty term is increased. The proposed method can potentially reduce the memory and computational complexity of CNNs significantly.

Briefly, the main contributions of the proposed sparse CNN method can be summarized as follows:
Separability: The penalty function is separable with respect to the individual elements of the weight tensors. In contrast, the recognition loss function cannot be decomposed into component-wise functions of the weight tensors. By separating the two parts in the minimization of the augmented Lagrangian, we can analytically determine the solution of the sparsity-promoting sub-problem (8).

Differentiability: The recognition loss function L_net(W) is typically differentiable with respect to the parameters, as opposed to some choices of sparsity penalty terms (e.g., the l0-norm, which is a non-differentiable function). In our approach, by separating the two parts in the minimization of the augmented Lagrangian, descent algorithms can be utilized to solve the performance-promoting sub-problem (7), while different functions (e.g., the l0-norm and l1-norm) can be incorporated as the sparsity penalty term in the original problem (1).

Model size reduction: There are recent works focusing on reducing the parameters in the convolutional layers (Jaderberg et al., 2014; Ioannou et al., 2015; Tai et al., 2015). In CNN models, however, the model size is dominated by the fully connected layers, so these approaches cannot reduce the size of the whole model. Our proposed approach can be applied to both the convolutional and fully connected layers, and can speed up the computation as well as compress the model.

Combinability with other methods: Several attempts have been made to compress deep networks using weight sharing and quantization (Han et al., 2016; Gupta et al., 2015; Vanhoucke et al., 2011). These techniques can be used in conjunction with our proposed sparse method to achieve further speedup.

Some methods, such as the SSL method (Wen et al., 2016) based on group lasso regularization of block structures (e.g., filters), appear to be closely related to our work. Indeed, these methods learn sparse filters and minimize the classification error simultaneously. In contrast, our proposed approach uses ADMM to provide a scheme that optimizes the sparse blocks and the classification error separately. Indeed, at the core of our contribution, ADMM brings the above separability and differentiability benefits to the proposed sparse CNN method. Our algorithm has the advantage of being partially and analytically solvable, thanks to the separability property, which contributes to the efficient trainability of the model. Moreover, the non-differentiability of the l0-norm penalty function makes it unusable for a joint performance/sparsity optimization, while it can be conveniently incorporated as a sparsity penalty term in our proposed method.

Furthermore, in the SSL method the strength of the structured sparsity regularization is selected by cross-validation and the network weights are initialized from the baseline. This is computationally beneficial for small regularization levels. However, for larger regularization values, the SSL approach requires training the original full model from scratch. In contrast, our approach gradually increases the regularization factor, and each step continues training from the solution achieved at the previous step (corresponding to the previous regularization factor), which plays an important role in reducing the computational complexity of the method."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors gratefully acknowledge financial support from NSERC-Canada, MITACS, and E Machine Learning Inc., a GPU grant from NVIDIA, and access to the computational resources of Calcul Quebec and Compute Canada.
The authors are also grateful to Annette Schwerdtfeger for proofreading this manuscript."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Jose M Alvarez and Mathieu Salzmann. Learning the number of neurons in deep networks. In Advances in Neural Information Processing Systems, pp. 2262-2270, 2016."}, {"section_index": "12", "section_name": "APPENDIX A SIGNIFICANCE VALIDATION OF IMPROVEMENTS", "section_text": "In order to verify that the advantage of ADMM training is statistically significant, we conduct a t-test by repeating the experiment 15 times on CIFAR-10 using NIN, comparing the error rate of ADMM training and standard fine-tuning (dropping the learning rate upon "convergence" and continuing to learn), with the same number of epochs and learning rates. Initialized from the same baseline model with 90.71 % accuracy, ADMM training using the l0-norm and standard fine-tuning achieve average accuracies of 91.34 % and 91.09 %, respectively. The results demonstrate that ADMM training achieves an improvement of 0.63 % over the baseline model, which is statistically significant (t-test with p < 0.001). ADMM training is also significantly better, by 0.25 %, than what standard fine-tuning achieves (t-test with p < 0.001). The t-test experiment also shows that ADMM can reduce the variance of learning: over the 15 repeated experiments, ADMM training has a lower standard deviation of errors than its standard fine-tuning counterpart (0.04 % for ADMM vs 0.06 % for standard fine-tuning).

Table 2: t-test results for the significance validation of the performances. Results are reported over 15 runs on CIFAR-10 using NIN.

                                   ADMM training   Standard fine-tuning   Baseline
Mean accuracy (%)                  91.34           91.09                  90.71
Accuracy standard deviation (%)    0.04            0.06                   -
Sparsity (%)                       34.5            0                      0
p-value                            < 0.001         < 0.001                -

Due to space considerations, we present some extra results in the current appendix. First, the results of our sparsity-promoting approach for the different models on CIFAR-10, CIFAR-100, and SVHN are presented in Tables 3, 4, and 5, respectively. There follow, in Table 6, results showing joint variations of accuracy and sparsity obtained with increasing mu values, for the three tested datasets. All these
results are for a single random run of each method on the dataset at hand..\nTable 3: Performance of the p osed ADMM-based sparse method on the CIFAR-10 dataset\n(a) NIN model Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 90.71 0-0-0-0 0.00 0 1.00 0.000 91.14 33-482-968-0 1.23 4 1.06 mnnn-I 0.105 91.42 33-551-1027-0 1.36 16 1.19 0.211 91.47 34-609-1144-31 1.55 30 1.31 0.316 91.56 41-749-1428-589 2.72 48 1.44 0.421 91.47 56-1822-2300-5630 12.14 90 1.56 0.526 91.13 70-6810-4834-12451 30.73 120 1.69 0.632 91.21 107-6810-11568-12451 34.07 140 1.81 0 90.71 0-0-0-0 0.00 0 1.00 0.000 91.24 34-482-969-2 1.23 4 1.06 mnnu-0 0.105 91.52 36-554-1031-2 1.37 16 1.19 0.211 91.57 39-614-1148-35 1.57 36 1.31 0.316 91.66 46-755-1432-596 2.75 64 1.44 0.421 91.57 65-1828-2304-5640 12.18 80 1.56 0.526 91.23 81-6821-4843-12461 30.78 96 1.69 0.632 91.31 118-6821-11577-12461 34.12 112 1.81\n* In the order of first hidden layer to last hidden layer out of a total of 576-18432-36864-36864 filters, respec tively.\n(b) Low-rank N1N mode Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 90.07 0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 90.65 8-9-192-96-102-96-0-1 0.54 4 1.09 wnou- I 1 0.105 90.92 9-9-192-98-180-97-20-7 0.66 16 1.26 0.211 91.12 9-9-201-104-287-116-78-14 0.88 30 1.43 0.316 91.25 11-10-275-135-483-177-270-58 1.53 56 1.61 0.421 91.22 15-22-479-239-1105-411-983-225 3.75 100 1.78 0.526 91.21 19-28-1163-644-2832-1343-3083-871 10.76 120 1.96 0.632 91.20 30-37-2707-1989-6509-4176-7681-3232 28.43 140 2.13 0 90.07 0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 90.75 9-10-194-96-103-98-2-2 0.55 4 1.09 wou-07 0.105 91.02 10-10-194-102-182-99-23-8 0.68 16 1.26 0.211 91.22 13-11-204-110-293-119-81-15 0.91 36 1.43 0.316 91.35 18-16-281-141-490-181-277-59 1.58 48 1.61 0.421 91.32 23-30-485-245-1112-420-990-233 3.82 80 1.78 0.526 91.31 29-36-1173-651-2839-1354-3092-879 10.84 108 1.96 0.632 91.30 40-46-2719-1996-6519-4188-7692-3240 28.51 126 2.13\n* In the order of first hidden layer to last hidden layer out of a total of 288-288-9216-9216-18432-18432-18432-18432 filter respectively.\n(c) CNN mode] Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 80.00 0-0-0-0-0 0.00 0 1.00 0.000 81.24 0-0-0-0-0 0.00 4 1.00 mnou-1 0.105 81.44 0-0-0-0-0 0.00 16 1.00 0.211 81.46 0-0-0-0-0 0.00 30 1.00 0.316 81.24 0-5-20-9-57 2.40 64 1.21 0.421 81.48 0-267-843-792-57 5.11 80 1.37 0.526 80.92 3-2870-8922-7161-57 29.82 120 1.94 0.579 79.80 7-5383-17189-10736-57 50.63 130 2.15 0 80.00 0-0-0-0-0 0.00 0 1.00 0.000 81.34 1-2-2-1-1 0.05 4 1.01 mnu-07 0.105 81.54 1-2-3-2-3 0.14 16 1.02 0.211 81.56 5-3-5-5-5 0.23 36 1.03 0.316 81.34 5-8-25-17-21 0.95 56 1.10 0.421 81.58 10-273-853-800-23 3.75 80 1.30 0.526 81.02 13-2879-8933-7173-23 28.48 96 1.93 0.579 79.90 17-5395-17200-10748-23 49.29 117 2.15\n* In the order of first hidden layer to last hidden layer out of a total of 288-12288-32768-16384-256 filter respectively.\n(d) Low-rank CNN model. 
Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 80.20 0-0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 81.76 2-1-2-3-2-2-2-3-2 0.22 4 1.08 mnou-I 0.105 81.79 3-2-2-5-5-3-4-3-6 0.64 16 1.21 0.211 81.75 6-5-4-5-7-7-6-6-8 0.87 36 1.26 0.316 81.70 6-6-8-7-12-15-9-12-24 2.54 56 1.53 0.421 81.77 10-11-78-75-221-222-205-208-28 4.09 90 1.69 0.526 81.36 14-12-729-728-2241-2246-1804-1806-33 14.83 108 2.18 0.579 80.14 18-15-1358-1357-4311-4312-2697-2699-34 23.53 104 2.37 0 80.20 0-0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 82.01 1-3-1-3-1-2-4-1-3 0.33 4 1.11 uou-07 0.105 82.01 5-3-9-9-4-8-9-6-8 0.88 16 1.26 0.211 81.91 13-8-9-9-13-12-10-12-13 1.43 30 1.37 0.316 82.10 3-9-12-14-20-18-17-19-32 3.41 56 1.62 0.421 82.00 14-18-83-87-228-227-212-215-39 5.28 90 1.77 0.526 81.38 19-19-737-738-2249-2250-1807-1813-39 15.51 96 2.19 0.579 80.25 25-20-1368-1367-4315-4322-2703-2706-40 24.22 104 2.37\n* In the order of first hidden layer to last hidden layer out of a total of 144-144-6144-6144-16384-16384-8192-8192-256 filters respectively.\nTable 4: Performance of the proposed ADMM-based sparse method on CIFAR-100 dataset\n* In the order of first hidden layer to last hidden layer out of a total of 576-18432-36864-36864 filters, respe tively.\n(b) Low-rank N1N mode Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 63.60 0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 64.62 10-9-194-98-102-96-2-3 0.55 4 1.08 mou-I 0.105 64.51 12-11-196-101-183-100-22-9 0.68 16 1.10 0.211 64.83 13-14-205-107-291-122-82-16 0.92 36 1.13 0.316 65.22 18-18-282-142-489-183-275-60 1.58 48 1.21 0.421 65.18 22-31-486-246-1113-417-992-234 3.82 90 1.39 0.526 65.12 31-37-1170-653-2843-1349-3092-881 10.84 120 1.72 0.632 64.85 42-49-2721-1998-6520-4189-7691-3242 28.52 112 2.08 0 63.60 0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 64.58 8-12-195-96-104-98-3-3 0.56 4 1.08 Wou-0 0.105 64.90 15-14-199-103-186-104-24-11 0.71 16 1.10 0.211 64.93 21-19-210-110-293-125-88-21 0.96 36 1.14 0.316 65.07 25-20-288-148-496-188-281-67 1.63 48 1.21 0.421 64.93 30-38-492-253-1118-426-998-240 3.88 80 1.40 0.526 64.88 34-44-1181-663-2845-1359-3099-887 10.90 96 1.72 0.632 65.11 55-59-2725-2008-6531-4192-7703-3248 28.60 140 2.08\nIn the order of first hidden layer to last hidden layer out of a total of 288-288-9216-9216-18432-18432-18432-18432 filters respectively.\n(c) CNN model Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 60.11 0-0-0-0-0 0.00 0 1.00 0.000 61.39 2-1-0-1-2 0.09 4 1.01 mnnu-1 0.105 61.88 3-3-3-4-2 0.10 16 1.01 0.211 61.60 3-3-5-4-4 0.19 30 1.02 0.316 61.73 7-11-25-13-23 1.03 64 1.10 0.421 61.97 7-274-848-801-23 3.74 80 1.28 0.526 61.43 10-2877-8929-7173-23 28.46 96 1.90 0.579 59.81 16-5390-17196-10748-23 49.27 117 2.11 0 60.11 0-0-0-0-0 0.00 0 1.00 0.000 61.89 2-3-2-2-3 0.14 4 1.01 unou-01 0.105 62.12 3-3-5-7-6 0.27 16 1.03 0.211 61.90 7-5-7-11-6 0.29 36 1.03 0.316 62.01 14-15-33-22-27 1.23 56 1.11 0.421 62.15 18-285-859-808-29 4.05 90 1.29 0.526 61.20 24-2890-8943-7181-32 28.91 120 1.90 0.579 59.92 30-5404-17211-10756-35 49.84 104 2.11\nIn the order of first hidden layer to last hidden layer out of a total of 288-12288-32768-16384-256 filter respectively.\n(d) Low-rank CNN model Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 60.23 0-0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 61.54 2-2-3-3-3-1-1-2-1 0.12 4 1.04 mnou-I 0.105 61.98 3-5-4-4-3-4-4-4-7 0.75 16 1.19 0.211 61.70 5-8-4-5-4-5-7-8-9 0.97 30 1.23 0.316 61.74 9-8-8-7-11-10-11-12-26 2.75 48 1.47 0.421 61.96 10-9-79-74-224-222-206-209-32 4.50 90 1.62 0.526 61.18 
11-15-730-729-2244-2243-1801-1805-35 15.03 108 2.07 0.579 60.09 12-16-1360-1358-4310-4309-2697-2698-35 23.63 104 2.25 0 60.23 0-0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 61.62 1-1-2-1-4-3-2-2-2 0.22 4 1.06 wou-0 0.105 62.20 7-5-8-6-7-6-7-7-8 0.88 16 1.20 0.211 61.91 8-8-8-9-10-10-10-8-12 1.31 36 1.28 0.316 61.94 15-13-13-11-16-19-19-18-32 3.42 56 1.52 0.421 62.05 16-15-82-80-230-233-218-215-36 4.98 90 1.64 0.526 61.33 23-16-736-733-2250-2253-1812-1814-42 15.82 120 2.07 0.579 60.20 24-19-1365-1364-4316-4319-2706-2707-43 24.52 117 2.25\n* In the order of first hidden layer to last hidden layer out of a total of 144-144-6144-6144-16384-16384-8192-8192-256 filters respectively.\nTable 5: Performance of the p osed ADMM-based sparse method on SVHN dataset.\n* In the order of first hidden layer to last hidden layer out of a total of 576-18432-36864-36864 filters, respec tively.\nAccuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 96.70 0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 97.72 10-11-193-98-102-98-0-3 0.56 4 1.11 mou-I1 0.105 97.58 11-12-194-102-182-99-21-10 0.68 16 1.13 0.211 98.16 15-15-206-108-292-121-80-18 0.92 36 1.16 0.316 97.90 17-16-280-139-490-182-276-65 1.58 64 1.25 0.421 98.18 21-31-485-247-1112-416-993-232 3.81 100 1.46 0.526 97.98 27-37-1174-652-2840-1348-3093-882 10.84 120 1.80 0.632 98.11 41-46-2718-2002-6517-4189-7694-3243 28.52 126 2.18 0 96.70 0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 97.51 8-9-195-96-103-99-3-5 0.56 4 1.11 unou-07 0.105 97.88 16-13-196-104-185-103-26-11 0.71 16 1.13 0.211 97.94 17-18-209-113-297-126-86-21 0.96 36 1.17 0.316 98.27 25-24-287-144-494-187-281-71 1.63 64 1.26 0.421 97.97 30-36-492-251-1119-423-997-239 3.87 100 1.46 0.526 98.45 39-42-1176-659-2849-1363-3102-891 10.91 96 1.81 0.632 97.99 53-57-2725-2004-6535-4201-7705-3257 28.62 112 2.18\nIn the order of first hidden layer to last hidden layer out of a total of 288-288-9216-9216-18432-18432-18432-18432 filters respectively.\n(c) CNN model Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 85.10 0-0-0-0-0 0.00 0 1.00 0.000 86.81 2-1-0-2-0 0.01 4 1.00 monu-I 0.105 86.68 2-3-2-2-4 0.18 16 1.02 0.211 86.69 5-5-6-3-4 0.19 36 1.02 0.316 86.35 6-10-28-14-19 0.87 64 1.10 0.421 86.85 6-277-853-798-24 3.79 80 1.32 0.526 86.34 9-2880-8932-7167-24 28.50 96 1.97 0.579 83.30 13-5393-17199-10748-25 49.36 104 2.19 0 85.10 0-0-0-0-0 0.00 0 1.00 0.000 86.63 1-0-1-0-2 0.09 4 1.01 mnnu-0 0.105 86.70 4-4-5-7-6 0.28 16 1.03 0.211 86.80 11-8-9-8-7 0.34 30 1.04 0.316 86.74 13-17-32-17-22 1.02 64 1.11 0.421 86.97 13-285-855-809-29 4.04 90 1.34 0.526 86.49 19-2888-8934-7178-29 28.76 120 1.97 0.579 83.40 26-5401-17206-10758-29 49.58 117 2.19\n* In the order of first hidden layer to last hidden layer out of a total of 288-12288-32768-16384-256 filter respectively.\n(d) Low-rank CNN model Accuracy (%) Filter (#) Sparsity (%) Training epochs (#) Speedup 0 87.60 0-0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 88.93 3-3-3-1-3-2-3-3-1 0.13 4 1.04 mnou-I 0.105 89.28 3-4-5-2-5-5-3-4-3 0.34 16 1.10 0.211 89.39 7-5-7-7-7-6-5-5-6 0.67 36 1.18 0.316 89.18 10-7-9-10-12-11-10-9-25 2.65 48 1.49 0.421 89.55 11-11-76-77-222-219-209-208-33 4.61 100 1.66 0.526 88.83 14-14-727-732-2242-2242-1803-1802-33 14.83 96 2.10 0.579 86.30 15-15-1357-1361-4308-4312-2696-2695-36 23.73 130 2.29 0 87.60 0-0-0-0-0-0-0-0-0 0.00 0 1.00 0.000 89.30 1-5-5-4-4-1-3-3-1 0.13 4 1.04 uou-07 0.105 89.55 4-9-5-7-6-7-5-5-7 0.77 16 1.20 0.211 89.65 6-11-9-9-12-12-6-11-11 1.21 30 1.28 0.316 89.39 10-15-11-14-17-18-17-18-29 3.10 56 1.52 0.421 89.56 
17-16-83-85-224-232-213-216-42 5.59 90 1.72 0.526 89.00 21-23-740-738-2247-2255-1813-1813-44 16.04 120 2.12 0.579 86.41 25-24-1369-1367-4316-4321-2706-2706-44 24.64 117 2.29
* In the order of first hidden layer to last hidden layer, out of a total of 144-144-6144-6144-16384-16384-8192-8192-256 filters, respectively.

Table 6: Joint variations of accuracy and sparsity on the evaluated datasets for increasing mu values. LR-NIN stands for low-rank NIN, while LR-CNN is for low-rank CNN.

[Table 6 data not recoverable from the extraction. For (a) CIFAR-10, (b) CIFAR-100, and (c) SVHN, the table reported sparsity (%), accuracy (%), training epochs (#), and speedup for each model (NIN, LR-NIN, CNN, LR-CNN) under the l1-norm and l0-norm penalties, matching the runs of Tables 3-5.]"}]
ByEPMj5el
[{"section_index": "0", "section_name": "OUT-OF-CLASS NOVELTY GENERATION: AN EXPERI- MENTAL FOUNDATION", "section_text": "Mehdi Cherti & Balazs Keg!\nCNRS/Universite Paris-Saclay\nmehdi.cherti, balazs.kegl}@qmail.com\nRecent advances in machine learning have brought the field closer to computa tional creativity research. From a creativity research point of view, this offers the. potential to study creativity in relationship with knowledge acquisition. From a. machine learning perspective, however, several aspects of creativity need to be. better defined to allow the machine learning community to develop and test hy-. potheses in a systematic way. We propose an actionable definition of creativity as. the generation of out-of-distribution novelty. We assess several metrics designed. for evaluating the quality of generative models on this new task. We also propose. a new experimental setup. Inspired by the usual held-out validation, we hold out. entire classes for evaluating the generative potential of models. The goal of the. novelty generator is then to use training classes to build a model that can generate. objects from future (hold-out) classes, unknown at training time - and thus, are. novel with respect to the knowledge the model incorporates. Through extensive. experiments on various types of generative models, we are able to find architec-. tures and hyperparameter combinations which lead to out-of-distribution novelty.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recent advances in machine learning have renewed interest in artificial creativity. Studies such as deep dream (Mordvintsev et al., 2015) and style transfer (Gatys et al., 2015) have aroused both general public interest and have given strong impetus to use deep learning models in computational creativity research (ICC, 2016). Although creativity has been a topic of interest on and off through- out the years in machine learning (Schmidhuber, 2009), it has been slowly becoming a legitimate sub-domain with the appearance of dedicated research groups such as Google's Magenta and re- search work on the topic (Nguyen et al., 2015; Lake et al., 2015).\nThere is a large body of work studying creativity by computational methods. A large variety o techniques, from rule-based systems to evolutionary computation has been used for a myriad o research questions. Compared to these methods, machine learning methods provide an importan advantage: they enable the study of creativity in relation with knowledge (i.e., knowledge-driver creativity; Kazakc1 et al. (2016)). Nevertheless, to better highlight the points of interest in com putational creativity research for the machine learning community and to allow machine learning researchers to provide systematic and rigorous answers to computational creativity problems, it is important to precisely answer three questions:\n1. What is meant by the generation of novelty? 2. How can novelty be generated? 3. How can a model generating novelty be evaluated\nWithin the scope of machine learning, it would be tempting to seek answers to these questions i. the sub-field on generative modeling. Mainstream generative modeling assumes that there is a phe nomena generating the observed data and strive to build a model of that phenomena, which would. for instance, allow generating further observations. Traditional generative modeling considers only. 
in-distribution generation, where the goal is to generate objects from the category or categories of already observed objects. In terms of novelty generation, this can be considered as generating look-a-likes of known types of objects. Although there is considerable value in in-distribution generation (e.g., for super-resolution (Freeman et al., 2002; Dong et al., 2014; Ledig et al., 2016) or in-painting (Xie et al., 2012; Cho, 2013; Yeh et al., 2016)), this perspective is limited from a strict point of view of creativity: it is unlikely to come up with a flying ship by generating samples from a distribution of ships and flying objects.

Researchers in creativity research (Runco & Jaeger, 2012) have argued that the crux of the creative process is the ability to build new categories based on already known categories. However, creativity is beyond a simple combination exploration: it is about generating previously unknown but meaningful (or valuable) new types of objects using previously acquired knowledge (Hatchuel & Weil, 2009; Kazakci, 2014). Under this perspective, novelty generation aims at exhibiting an example from a new type. This objective, which we shall call out-of-distribution generation, is beyond what can be formalized within the framework of traditional learning theory, even though learning existing types is a crucial part of the process.

From a machine learning point of view, generating an object from an unknown type is not a well-defined problem, and research in generative modeling usually aims at eliminating this possibility altogether, as this is seen as a source of instability (Goodfellow et al., 2014; Salimans et al., 2016) leading to spurious samples (Bengio et al., 2013). In a way, sampling procedures are designed to kill any possibility of sampling out of the distribution, which is a problem for studying the generation of novelty by machine learning methods.

Arguably, the most important problem is the evaluation of what constitutes a good model for generating out-of-distribution. On the one hand, we are seeking to generate meaningful novelty, not trivial noise. On the other hand, we aim at generating unknown objects, so traditional metrics based on the concept of likelihood are of no use, since novelty in the out-of-distribution sense is unlikely by definition. This lack of metrics hinders answering the first two questions. Without a clear-cut evaluation process, the utility of extending the definition of novelty generation to out-of-sample seems pointless.

This paper argues that for a wider adoption of novelty generation as a topic for scientific study within machine learning, a new engineering principle is needed, which would enable such evaluation and, consequently, rigorous experimental research. In the traditional supervised context, the main engineering design principle is the minimization of the error on a hold-out test set. The paper proposes a simple setup where the generative potential of models can be evaluated by holding out entire classes, thus simulating unknown but meaningful novelty. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time.

The main contributions of this paper:

We design an experimental framework based on hold-out classes to develop and to analyze out-of-distribution generators.

We review and analyze the most common evaluation techniques from the point of view of measuring out-of-distribution novelty. We argue that likelihood-based techniques inherently limit exploration and novelty generation. We carefully select a couple of measures and demonstrate their applicability for out-of-distribution novelty detection in experiments.

We run a large-scale experimentation to study the ability of novelty generation of a wide set of different autoencoders and GANs. The goal here is to re-evaluate existing architectures under this new goal in order to open up exploration. Since out-of-distribution novelty generation is arguably a wider (and softer) objective than likelihood-driven sampling from a fixed distribution, existing generative algorithms, designed for this latter goal, constitute a small subset of the algorithms able to generate novelty. The goal is to motivate reopening some of the closed design questions.

The paper is organized as follows.
We review some of the seminal work at the intersection of machine learning and out-of-distribution generation in Section 2. We discuss the conceptual framework of out-of-distribution generation and its relationship with likelihood-based generative models in Section 3. We outline the families of evaluation metrics, focusing on those we use in the paper, in Section 4. In Section 4.3 we describe the gist of our experimental setup needed to understand the metrics described in Section 4.4, designed specifically for the out-of-distribution setup. We describe the details of the experimental setup and analyze our results in Section 5. Finally, we conclude in Section 6.

The paper can be read either in the order of the sections, first the motivation and conceptual underpinning of the framework, then the technical contribution, or the other way around, by jumping to Section 4 and then coming back to Sections 2 and 3.

2 MACHINE LEARNING AND NOVELTY GENERATION: THE INNOVATION ENGINE, "ZERO-SHOT" LEARNING, AND DISCOVERING NEW TYPES

There are three important papers that consider novelty generation in a machine learning context. Nguyen et al. (2015) propose an innovation engine (Figure 1(a)). They generate images using a neural net that composes synthetic features. The generator is fed back with an entropy-based score (similar to objectness; Section 4.2) coming from an Imagenet classifier, and the feedback is used in an evolutionary optimization loop to drive the generation. An important contribution of the paper is to demonstrate the importance of the objectness score. They show that interesting objects are not generated when asking the machine to generate from a single given class. The generation paths often go through objects from different classes, "stepping stones" which are seemingly unrelated to the final object. The main conceptual difference between our approaches is that Nguyen et al. (2015) do not ground their generative model in learned knowledge: their generation process is not a learned model, but rather a stochastic combinatorial engine. On the one hand, this makes the generation (evolutionary optimization) rather slow, and on the other, the resulting objects reflect the style of the (preset) synthetic features rather than features extracted from existing objects.

[Figure 1(a): "synthetic" objects from Imagenet categories, from Figure 7 of Nguyen et al. (2015).]

The main goal of Lake et al. (2015) and Rezende et al. (2016) is one-shot learning and generation: to
learn to classify objects given a small number (often one) of examples coming from a given category, and to learn to generate new objects given a single example (Figure 1(b)). One-shot generation is definitely an intermediate step towards out-of-distribution generation. The extremely low number of examples conceptually limits likelihood-based learning/fitting/generation. Lake et al. (2015) circumvent this problem by learning strong Bayesian top-down models (programs) that capture the structural properties of known objects which are generalizable across classes. They also consider unconstrained ("zero-shot") generation as an extension of their approach, and show that the model can generate new symbols from scratch. They make no attempt to conceptualize the goal of unconstrained generation outside the top-down Bayesian framework, or to design evaluation metrics to assess the quality of these objects, but their intriguing results are one of the strongest motivations of our paper.

Kazakci et al. (2016) show that symbols of new types can be generated by carefully tuned autoencoders, learned entirely bottom-up, without imposing a top-down Bayesian architecture (Figure 1(c)). They also make a first step towards defining the conceptual framework of novelty generation by arguing for the goal of generating objects from new types, unknown at the time of training. They design a technique for finding these new types semi-automatically (combining clustering and human labeling). They argue for the importance of defining the value of these new types (and of out-of-distribution generation in general), but they make no attempt to design evaluation metrics, thus limiting the exploration and the development of out-of-distribution generative architectures.

The generative process is commonly framed in a probabilistic setup: it is assumed that an underlying unknown likelihood model p(.) should first be learned on an i.i.d. training sample D = {x1, ..., xn}, assumed to be generated from p(.), and then a sampler S should sample from the learned p^(.). The first step, estimating p^(.) using D, is a classical function learning problem that can be studied through the usual concepts of overfitting and regularization, and algorithms can be designed using the classical train/test principle. The second step, designing S for sampling from p^(.), is also a classical domain of random sampling, with a conceptual framework and a plethora of methods.

Technically, both steps are notoriously hard for the high-dimensional distributions and the complex dependencies we encounter in interesting domains. Hence, most of the recent and successful methods get rid of the two-step procedure at the level of algorithmic design, and short-cut the procedure from the probabilistic D -> p^ -> S to the constructive D -> A, where A(D) is a generator, tasked to produce sample objects similar to elements of D but not identical to them. A is fundamentally different from (p^, S) in that there is no explicit fitting of a function; we use D to directly design an algorithm or a program.

When the probabilistic setup is still kept for analysis, we face a fundamental problem: if we assume that we are given the true likelihood function p(.), the training sample D is itself a set of n i.i.d. draws from p(.), so the trivial generator A which resamples D will have the same expected log-likelihood as an optimal i.i.d. sampler.
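A tiny numerical illustration of this degeneracy (our own sketch, with a standard normal standing in for p(.)): under the true density, memorized training points score exactly as well as freshly drawn ones.

    import numpy as np

    rng = np.random.default_rng(0)
    D = rng.normal(size=(1000, 1))                # training sample from the true p
    fresh = rng.normal(size=(1000, 1))            # an optimal i.i.d. sampler
    memorized = D[rng.integers(0, len(D), 1000)]  # trivial resampling generator

    def mean_loglik(x):
        # mean log-likelihood under the true standard normal p
        return float((-0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)).sum(axis=1).mean())

    print(mean_loglik(fresh), mean_loglik(memorized))  # nearly identical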
The resampling "bug" is often referred to as "overfitting". While it makes perfect sense to talk about overfitting in the D -> p̂ -> S paradigm (when p̂ is fitted on D), it is somewhat conceptually misleading when there is no fitting step; we propose to call it "memorizing". When a generator A is trained on D without going through the fitting step D -> p̂, the classical tools for avoiding memorizing (regularization, the train/test framework) may be either conceptually inadequate or they may not lead to an executable engineering design principle.

The conceptual problem of analyzing constructive algorithms in the probabilistic paradigm is not unrelated to our argument of Section 1 that the probabilistic generative framework is too restrictive for studying novelty generation and for designing out-of-distribution generative models. In our view, this flaw is not a minor nuisance which can be fixed by augmenting the likelihood to avoid resampling, rather an inherent property which cannot (or rather, should not) be fixed. The probabilistic framework is designed for generating objects from the distribution of known objects, and this is in an axiomatic contradiction with generating out-of-distribution novelty, objects that are unknown at the moment of assembling a training sample. Resampling (generating exact copies) is only the most glaring demonstration of a deeper problem which is also present in a more subtle way when attempting to generate new types of objects.

We are not arguing that the probabilistic generative framework should be banished; it has a very important role in numerous use cases. Our argument is that it is not adequate for modeling out-of-distribution novelty generation. What follows from this on the algorithmic level is not revolutionary: the design of most successful generative algorithms already moved beyond the probabilistic framework. On the other hand, moving beyond the probabilistic generative framework at a conceptual level is a paradigm change which will require groundwork for laying the foundations, including revisiting ideas from a domain larger than machine learning.

At the algorithmic/computational level the machine learning community has already started to move beyond likelihood. The overfitting problem is often solved by implicitly constraining A not to resample. Another common solution is to design tractable likelihood surrogates that implicitly penalize memorization. These surrogates can then be used in the training phase (to obtain non-resampling generators explicitly) and/or in the evaluation phase (to eliminate generators that resample). The ingenious idea of using discriminators in GANs (Goodfellow et al., 2014; Salimans et al., 2016) is a concrete example; although the setup can be analyzed through the lens of probabilistic sampling, one does not have to fall back onto this framework. If we drop the underlying conceptual probabilistic framework, the constructive GAN idea may be extended beyond generating from the set which is indistinguishable from the set of existing objects. In Section 4.4 we will use discriminators to assess the quality of generators whose very goal is to generate novelty: objects that are distinguishable from existing objects. The main challenge is to avoid the trivial novelty generator producing uninteresting noise. This challenge is structurally similar to avoiding the trivial memorizing/resampling generator in in-distribution sampling.
The two main elements that contribute to the solution are i) to ground the generator strongly in the structure of existing knowledge, without overly fixating it on existing classes, and ii) to use a discriminator which knows about out-of-class novelty to steer architectures towards novelty generation.

"}, {"section_index": "3", "section_name": "4 EVALUATION OF GENERATIVE MODELS", "section_text": "In this section we outline the families of evaluation metrics, focusing on those we use in the paper. In Section 4.3 we describe the gist of our experimental setup needed to understand the metrics described in Section 4.4, designed specifically for the out-of-distribution setup.

"}, {"section_index": "4", "section_name": "4.1 INDIRECT SUPERVISED METRICS", "section_text": "When generative models are used as part of a pipeline with a supervised goal, the evaluation is based on the evaluation of the full pipeline. Examples include unsupervised pre-training (Hinton et al. (2006); Bengio et al. (2007); the original goal that reinvigorated research in neural nets), semi-supervised learning (Kingma et al., 2014; Rasmus et al., 2015; Maaloe et al., 2016; Salimans et al., 2016), in-painting (Xie et al., 2012; Cho, 2013; Yeh et al., 2016), or super-resolution (Freeman et al., 2002; Dong et al., 2014; Ledig et al., 2016). The design goal becomes straightforward, but the setup is restricted to improving the particular pipeline, and there is no guarantee that those objectives can be transferred between tasks. In our case, the objective of the supervised pipeline may actually suppress novelty. In a certain sense, GANs also fall into this category: the design goal of the generator is to fool a high-quality discriminator, so the generator is asked not to generate new objects which can be easily discriminated from known objects. In our experiments, surprisingly, we found that GANs can still be tuned to generate out-of-distribution novelty, probably due to the deficiencies of both the generator and the discriminator. Our goal in this paper can also be understood as designing a pipeline that turns novelty generation into a supervised task: that of generating objects from classes unknown at training time.

"}, {"section_index": "5", "section_name": "4.1.1 PARZEN DENSITY ESTIMATOR", "section_text": "Parzen density estimators are regularly used for estimating the log-likelihood of a model (Breuleux et al., 2009). A kernel density estimator is fit to the generated points, and the model is scored by the log-likelihood of a hold-out test set under the kernel density. The metric can be easily fooled (Theis et al., 2015); nevertheless, we adopted it in this paper for measuring both the in-distribution and out-of-distribution quality of our generators.

"}, {"section_index": "6", "section_name": "4.2 OBJECTNESS", "section_text": "Salimans et al. (2016) proposed a new entropy-based metric to measure the "objectness"¹ of the generated set of objects. As in GANs, the metric uses a trained discriminator, but unlike in GANs, it is not trained for separating real objects and generated objects, rather to classify real objects into existing categories. The goal of the generator is to create objects which belong confidently to a low number (typically one) of classes. To penalize generators fixating onto single objects or categories, they also require that the set of objects has a high entropy (different objects span the space of the categories represented by the discriminator). The metric is only indirectly related to classical log-likelihood: in a sense we measure how likely the objects are through the "eye" of a discriminator.

¹They also call it "inception score" but we found the term objectness better as it is more general than the single model used in their paper.

Formally, objectness is defined as
\[
\text{objectness} = \frac{1}{n} \sum_{i=1}^{n} \sum_{l=1}^{K} p_{i,l} \,\log \frac{p_{i,l}}{\bar{p}_l},
\]
where
\[
p_{i,l} = P(l \mid x_i)
\]
is the posterior probability of category l given the generated object x_i under the discriminator P, trained on a set with known labels, and
\[
\bar{p}_l = \frac{1}{n} \sum_{i=1}^{n} p_{i,l}
\]
are the class marginals. Salimans et al. (2016) proposed this metric as one of the "tricks" to stabilize GANs, but, interestingly, a similar measure was also used in the context of evolutionary novelty generation (Nguyen et al., 2015).

"}, {"section_index": "7", "section_name": "4.3 ASSESSING OUT-OF-DISTRIBUTION NOVELTY BY OUT-OF-CLASS SCORING", "section_text": "As the classical supervised validation setup simulates past (training) and future (test) by randomly partitioning an existing data set, we can simulate existing knowledge and novelty by partitioning existing data sets holding out entire classes. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training. In our first experiments we tried to leave out single classes of MNIST, but the label noise "leaked" hold-out classes, which made the evaluation tricky. To avoid this, we decided to challenge the generator, trained on MNIST, to generate letters. We pre-trained various discriminators using different setups, only on digits (MNIST), only on letters (Google fonts), or on a mixture of digits and letters, and used these discriminators to evaluate novelty generators in different ways. For example, we measure in-class objectness and in-class Parzen using a discriminator trained on MNIST, and out-of-class objectness and out-of-class Parzen by a discriminator trained on (only) Google fonts.

"}, {"section_index": "8", "section_name": "4.4 OUT-OF-CLASS SCORES", "section_text": "Naturally, letter discriminators see letters everywhere. Since letters are all they know, they classify everything into one of the letter classes, quite confidently (this "blind spot" phenomenon is exploited by Nguyen et al. (2015) for generating "synthetic" novelty), so the letter objectness of an in-distribution digit generator can sometimes be high. For example, a lot of 6s were classified as bs. To avoid this "bias", we also trained a discriminator on the union of digits and letters, allowing it to choose digits when it felt that the generated object looked more like a digit. We designed two metrics using this discriminator: out-of-class count measures the frequency of confidently classified letters in a generated set, and out-of-class max is the mean (over the set) of the probability of the most likely letter. Neither of these metrics penalizes "fixated" generators, outputting the same few letters all the time, so we combine both metrics with the entropy of the letter posterior (conditioned on being a letter).

Formally, let p_{i,1}, ..., p_{i,K_in} be the in-class posteriors and p_{i,K_in+1}, ..., p_{i,K_in+K_out} be the out-of-class posteriors, where K_in = 10 is the number of in-class categories (digits), and K_out = 26 is the number of out-of-class categories (letters). Let
\[
l^*_i = \arg\max_{l} \, p_{i,l}
\quad\text{and}\quad
l^*_{\mathrm{out},i} = \arg\max_{K_{\mathrm{in}} < l \le K_{\mathrm{in}}+K_{\mathrm{out}}} p_{i,l}
\]
be the most likely category overall and the most likely out-of-class category, respectively. Let
\[
\hat{p}_l \propto \sum_{i=1}^{n} \mathbb{I}\{ l = l^*_{\mathrm{out},i} \}, \qquad K_{\mathrm{in}} < l \le K_{\mathrm{in}}+K_{\mathrm{out}},
\]
normalized to sum to one, be the normalized empirical frequency of the out-of-class category l. We measure the diversity of the generated sample by the normalized entropy of the empirical frequencies
\[
\text{diversity} = -\frac{1}{\log K_{\mathrm{out}}} \sum_{l=K_{\mathrm{in}}+1}^{K_{\mathrm{in}}+K_{\mathrm{out}}} \hat{p}_l \log \hat{p}_l.
\]
The two combined scores are then
\[
\text{out-of-class count} = (1-\lambda)\,\frac{1}{n}\sum_{i=1}^{n} \mathbb{I}\{\, l^*_i > K_{\mathrm{in}} \wedge p_{i,l^*_i} > \theta \,\} + \lambda \cdot \text{diversity}
\]
and
\[
\text{out-of-class max} = (1-\lambda)\,\frac{1}{n}\sum_{i=1}^{n} p_{i,\,l^*_{\mathrm{out},i}} + \lambda \cdot \text{diversity}.
\]
In our experiments we set the confidence level θ = 0.95 and the mixture coefficient λ = 0.5.

The ultimate test of l'art pour l'art generative models is whether humans like the generated objects. Visual inspection is often used as an evaluation principle in papers (Denton et al., 2015; Radford et al., 2015; Dosovitskiy et al., 2016), and it is sometimes even made part of the objectified pipeline by using crowdsourcing tools (Denton et al., 2015; Lake et al., 2015; Salimans et al., 2016). First, it definitely makes development (e.g., model selection and hyperparameter tuning) slow. Second, the results depend a lot on what questions are asked and how the responders are primed. For testing generative models, the usual GAN-type question to ask is whether the generated objects are generated by nature (or a human) or a machine (the visual Turing test). Even those that go the furthest in tasking machines to generate novelty (Lake et al., 2015) ask human judges to differentiate between human and machine. In our view, this question is too restrictive when the goal is out-of-distribution novelty generation. Asking whether an object is "new" is arguably too vague, but inventing adjective categories (such as "surprising" or "interesting" (Schmidhuber, 2009)) that can poll our ability to detect novelty should be on the research agenda. Priming is another important issue: the answer of a human annotator can depend on the information given to her. Nevertheless, a human annotation tool with well-designed priming and questions could accelerate research in novelty generation in the same way labeling tools and standard labeled benchmark sets accelerated supervised learning.

We assessed the visual quality of the set of generated objects using an in-house annotation tool. We took each model which appeared in the top ten by any of the quantitative metrics described in the previous section, and hand-labeled them into one of the following three categories: i) letters, ii) digits, and iii) bad sample (noise or not-a-symbol).

Each panel consisted of 26 × 15 generated objects, the fifteen most probable symbols of each letter according to the classifier trained on both letters and digits (Figure 2). The goal of this annotation exercise was i) to assess the visual quality of the generated symbols and ii) to assess the quality of the metrics in evaluating novelty.

"}, {"section_index": "9", "section_name": "5 EXPERIMENTS", "section_text": "Our scores cannot be directly optimized because they all measure out-of-class performance, and showing out-of-class objects at training would be "cheating". All our (about 1000) models were trained for "classical" objectives: reconstruction error in the case of autoencoders, and adversarial error in the case of GANs. The out-of-class scores were used as a weak feedback for model selection and (quasi-random) hyperparameter optimization. The goal is not to be statistically flawless; after all, we do not have a statistical model. Rather, we set our goal to analyze existing generative architectures from the point of view of novelty generation.
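To make the weak-feedback loop concrete, the following is a minimal NumPy sketch of the out-of-class scores of Section 4.4 (our own illustrative implementation; array names are ours, not the authors' code):

```python
# Out-of-class count, max, and diversity scores from discriminator posteriors.
# `posteriors` is an (n, K_in + K_out) array of p_{i,l} for n generated objects;
# the first K_in columns are the in-class (digit) categories.
import numpy as np

def out_of_class_scores(posteriors, k_in, theta=0.95, lam=0.5):
    n, k_total = posteriors.shape
    k_out = k_total - k_in
    out_post = posteriors[:, k_in:]                # out-of-class posteriors
    l_star = posteriors.argmax(axis=1)             # most likely category overall
    l_star_out = out_post.argmax(axis=1) + k_in    # most likely out-of-class category

    # normalized empirical frequencies of the winning out-of-class categories
    counts = np.bincount(l_star_out - k_in, minlength=k_out).astype(float)
    p_hat = counts / counts.sum()

    # normalized entropy of the empirical frequencies (0 log 0 := 0)
    nz = p_hat > 0
    diversity = -(p_hat[nz] * np.log(p_hat[nz])).sum() / np.log(k_out)

    confident_letter = (l_star >= k_in) & (posteriors[np.arange(n), l_star] > theta)
    count_score = (1 - lam) * confident_letter.mean() + lam * diversity
    max_score = (1 - lam) * posteriors[np.arange(n), l_star_out].mean() + lam * diversity
    return count_score, max_score, diversity
```

A model-selection loop can then simply rank the roughly one thousand trained models by these scores and keep the top ten per score for human annotation, as described in the previous section.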
Most of the generative models come from a large class of architectures, sometimes purposefully designed not to "misbehave". When possible, we turned these tricks, designed to avoid generating "spurious" objects, into optional hyperparameters.

(Figure 2: A couple of the top models according to human assessment: (a) the top autoencoder; (b) the top GAN. Top left characters of each 4 × 4 panel are the labels, letters coming from the training sample. For each letter we display the fifteen most probable symbols according to the classifier trained on both letters and digits.)

"}, {"section_index": "10", "section_name": "5.1 DETAILED EXPERIMENTAL SETUP", "section_text": "We used two families of deep-learning-based generative models, autoencoders and GANs. The architectures and the optional features are described in the next sections. All hyperparameters were selected randomly using reasonable priors. All of the ~1000 autoencoders were trained on MNIST training data.

We used three regularization strategies for autoencoders: sparse autoencoders (Makhzani & Frey, 2013; 2015), denoising autoencoders (Bengio et al., 2013) and contractive autoencoders (Rifai et al., 2011).

Sparse autoencoders can either be fully connected or convolutional. For fully connected sparse autoencoders, we use the k-sparse formulation from Makhzani & Frey (2013), a simple way of obtaining a sparse representation by sorting hidden units and keeping only the top k%, zeroing out the others, and then backpropagating only through non-zero hidden units.

For convolutional sparse architectures, we use the "winner take all" (WTA) formulation from Makhzani & Frey (2015), which obtains spatial sparsity in convolutional feature maps by keeping only the maximum activation of each feature map, zeroing out the others. We optionally combine it with channel sparsity which, for each position in the feature maps, keeps only the maximum activation across the channels and zeros out the others.

For contractive autoencoders, we use the fully connected version with a single hidden layer from Rifai et al. (2011).

We also explore mixtures between the different autoencoder variants in the hyperparameter search. For each model we choose to enable or disable independently the denoising training procedure, the contractive criterion (parametrized by the contraction coefficient, see Rifai et al. (2011)) and the sparsity rate k (only for fully connected architectures). Table 1 shows the hyperparameters and their priors.

In initial experiments we found that 100 iterations were sufficient for the majority of models to converge, so we chose to fix the maximum number of iterations to 100.

The generation procedure we use for autoencoders is based on Bengio et al. (2013), who proposed a probabilistic interpretation of denoising autoencoders and a way to sample from them using a Markov chain. To have a convergent procedure and to obtain fixed points, we chose to use a deterministic generation procedure instead of a Markov chain (Bahdanau & Jaeger, 2014); a minimal sketch of this procedure is given below.
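This is our reading of the deterministic procedure, not the authors' code; `reconstruct` stands in for a trained autoencoder's reconstruction function:

```python
# Deterministic generation: start from random noise and repeatedly apply the
# trained autoencoder's reconstruction until an (approximate) fixed point.
import numpy as np

def generate(reconstruct, n_samples=100, dim=784, n_iter=100, tol=0.0, seed=0):
    """`reconstruct` maps a batch of images to their autoencoder reconstructions."""
    rng = np.random.RandomState(seed)
    x = rng.uniform(0.0, 1.0, size=(n_samples, dim))
    for _ in range(n_iter):
        x_next = reconstruct(x)
        if np.max(np.abs(x_next - x)) <= tol:   # fixed point reached
            return x_next
        x = x_next
    return x
```

The binarization extension described next can be added by thresholding `x_next` after each reconstruction step.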
As in Bahdanau & Jaeger (2014), we found that the procedure converged quickly. We also chose to extend the procedure of Bahdanau & Jaeger (2014) by binarizing (using a threshold) the images after each reconstruction step, as we found that it improved the speed of the convergence and could lead to final samples with an exact zero reconstruction error.

For stochastic gradient optimization of the autoencoder models, we used adadelta (Zeiler, 2012) with a learning rate of 0.1 and a batch size of 128. We used rectified linear units as the activation function for hidden layers in all models and the sigmoid activation function for output layers.

Table 1: Autoencoder hyperparameter priors

nb layers: 1, 2, 3, 4, 5 (choice)
nb fully connected hidden units: 100, 200, 300, ..., 1000 (choice)
nb conv layers: 1, 2, 3, 4, 5 (choice)
nb conv filters: 8, 16, 32, 64, 128, 256, 512 (choice)
conv layers filter size: 3 or 5 (choice)
noise corruption: [0, 0.5] (uniform)
k sparsity rate: [0, 1] (uniform)
contraction coefficient: [0, 100] (uniform)

For GANs, we built upon Radford et al. (2015) and used their architecture as a basis for hyperparameter search. We modified the code proposed there to sample new combinations of hyperparameters. Table 2 shows the hyperparameters and their priors.

Table 2: GAN hyperparameter priors

nb discr. updates: 1, 2, 3 (choice)
l2 coefficient: [10^-6, 10^-1] (logspace)
gen. input dim.: 10, 20, 50, 70, 100, 150, 200, 300 (choice)
nb fully connected gen. units: 8, 16, 32, 64, 128, 256, 1024, 2048 (choice)
nb fully connected discr. units: 8, 16, 32, 64, 128, 256, 1024, 2048 (choice)
nb filters gen.: 8, 16, 32, 64, 128, 256, 512 (choice)
nb filters discr.: 8, 16, 32, 64, 128, 256, 512 (choice)
nb iterations: 50, 100, 150, 200, 250, 300 (choice)
learning rate: [10^-6, 10^-1] on logspace, or 0.0002 (logspace)
weight initialization: Normal(0, std), std sampled on a log scale (logspace)
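A minimal sketch of how configurations can be drawn from priors such as those in Tables 1 and 2 (illustrative names and a subset of the priors; not the actual search code):

```python
# Quasi-random hyperparameter sampling from "choice", "uniform" and
# "logspace" priors, as used throughout the search.
import random

AE_PRIORS = {
    "nb_layers":        ("choice",  [1, 2, 3, 4, 5]),
    "nb_hidden_units":  ("choice",  list(range(100, 1001, 100))),
    "noise_corruption": ("uniform", (0.0, 0.5)),
    "k_sparsity_rate":  ("uniform", (0.0, 1.0)),
    "contraction_coef": ("uniform", (0.0, 100.0)),
}

def sample_config(priors, rng=random):
    config = {}
    for name, (kind, support) in priors.items():
        if kind == "choice":
            config[name] = rng.choice(support)
        elif kind == "uniform":
            lo, hi = support
            config[name] = rng.uniform(lo, hi)
        elif kind == "logspace":            # log-uniform between 10**lo and 10**hi
            lo, hi = support
            config[name] = 10 ** rng.uniform(lo, hi)
    return config
```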
"}, {"section_index": "11", "section_name": "5.2 ANALYSIS", "section_text": "First, we found that tuning (selecting) generative models for in-distribution generation will make them "memorize" the classes they are trained to sample from. This is of course not surprising, but it is important to note because it means that out-of-class generation is non-trivial, and the vast majority of architectures designed and tuned in the literature do not generate out-of-class novelty naturally. Second, we did succeed in finding architectures and hyperparameter combinations which lead to out-of-class novelty. Most of the generated objects, of course, were neither digits nor letters (Figure 3), which is why we needed the "supervising" discriminators to find letter-like objects among them. The point is not that all new symbols are letters, that would arguably be an impossible task, but to demonstrate that by opening up the range of generated objects, we do not generate noise, rather objects that can form new categories.

The quantitative goal of this study was to assess the quality of the defined metrics in evaluating out-of-distribution generators. We proceeded in the following way. We selected the top ten autoencoders and GANs according to the five metrics of out-of-class (letters) count, out-of-class max, out-of-class objectness, out-of-class Parzen, and in-class Parzen. We then annotated these models into one of the three categories of "letter" (out), "digit" (in), and "bad" (noise or not-a-symbol).
(Figure 3: A random selection of symbols generated by one of our best sparse autoencoders, the same as the one that generated the letters in Figure 4(b).)

Table 3: Inter-score correlations among top 10% models per score and human annotation counts among top twenty models per score. out = letters; in = digits.

                 oc     om     oo     op     ic     im     io     ip   | out  in  bad
out count        1     -0.03  -0.13   0.04  -0.12   0.02  -0.07  -0.11 |  12   0   8
out max         -0.03   1     -0.07   0.01  -0.16  -0.10   0.03  -0.09 |  15   0   5
out objectness  -0.13  -0.07   1      0.21  -0.06   0.08   0.02  -0.08 |   9  10   1
out Parzen       0.04   0.01   0.21   1     -0.17   0.01  -0.19  -0.20 |   4  13   3
in count        -0.12  -0.16  -0.06  -0.17   1      0.30   0.1    0.14 |   -   -   -
in max           0.02  -0.10   0.08   0.01   0.30   1      0.03   0.06 |   -   -   -
in objectness   -0.07   0.03   0.02  -0.19   0.1    0.03   1      0.00 |   -   -   -
in Parzen       -0.11  -0.09  -0.08  -0.20   0.14   0.06   0.00   1    |   0  17   3

The last three columns of Table 3 show that the out-of-class count and out-of-class max scores work well in selecting good out-of-class generators, especially with respect to in-class generators. They are relatively bad at selecting good generators overall. Symmetrically, out-of-class objectness and the Parzen measures select, with high accuracy, good quality models, but they mix out-of-class and in-class generators (digits and letters). Parzen scores are especially bad at picking good out-of-class generators. Somewhat surprisingly, even out-of-class Parzen is picking digits, probably because in-distribution digit generators generate more regular, less noisy images than out-of-class letter generators. In other words, opening the space towards non-digit-like "spurious" symbols comes at a price of generating less clean symbols which are farther from letters (in a Parzen sense) than clean digits.

We also computed the inter-score correlations in the following way. We first selected the top 10% models for each score because we were after the correlation of the best-performing models. Then we computed the Spearman rank correlation of the scores (so we did not have to deal with different scales and distributions). The first eight columns of Table 3 show that i) in-class and out-of-class measures are anti-correlated, and ii) out-of-class count and max are uncorrelated, and are somewhat anti-correlated with out-of-class objectness.

These results suggest that the best strategy is to use out-of-class objectness for selecting good quality models and out-of-class count and max to select models which generate letters. Figure 4 illustrates the results by pangrams (sentences containing all letters) written using the generated symbols. The models (a)-(d) were selected automatically: these were the four models that appeared in the top ten both according to out-of-class objectness and out-of-class counts. Letters of the last sentence (e) were hand-picked by us from letters generated by several top models. Among the four models, three were fully connected autoencoders with sparsity and one was a GAN. All of the three sparse autoencoders had five hidden layers and used a small noise corruption (less than 0.1). The GAN used the default learning rate of 0.0002 and a large number (2048) of fully connected hidden units for the generator, while the number of fully connected hidden units of the discriminator was significantly smaller (128).

(Figure 4: Pangrams created (a-d) using top models selected automatically, and (e) using letters selected from several models by a human. The pangrams render the sentence "Pack my box with five dozen liquor jugs" in generated symbols.)

In this paper we have proposed a framework for designing and analysing generative models for novelty generation. The quantitative measures make it possible to systematically study the creative capacity of generative models. We believe that human evaluation will remain an important source of feedback in this domain for the foreseeable future. Nevertheless, quantitative measures, such as our out-of-class objectness and out-of-class count and max, will i) make it possible to semi-automate the search for models that exhibit creativity, and ii) allow us to study, from the point of view of novelty generation, the numerous surrogates used for evaluating generative models (Theis et al., 2015), especially those that explicitly aim at quantifying creativity or interestingness (Schmidhuber, 2009).

The main focus of this paper was setting up the experimental pipeline and analyzing various quality metrics, designed to measure the out-of-distribution novelty of samples and generative models. The immediate next goal is to analyze the models in a systematic way, to understand what makes them "memorize" classes and what makes them open up to generate valuable out-of-distribution samples.

This work was partially supported by the HPC Center of Champagne-Ardenne ROMEO.

"}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising auto-encoders as generative models. In Advances in Neural Information Processing Systems, pp. 899-907, 2013.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Armand Hatchuel and Benoit Weil. CK design theory: an advanced formulation. Research in Engineering Design, 19(4):181-192, 2009.

Akin Kazakci, Mehdi Cherti, and Balazs Kegl. Digits that are not: Generating new types through deep neural nets. In Proceedings of the International Conference on Computational Creativity (ICCC), 2016.

Akin Kazakci. Conceptive artificial intelligence: Insights from design theory. In International Design Conference DESIGN2014, pp. 1-16, 2014.

Alireza Makhzani and Brendan Frey. k-sparse autoencoders. arXiv preprint arXiv:1312.5663, 2013.

Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. Innovation engines: Automated creativity and improved stochastic optimization via deep learning. In Proceedings of the 2015 Genetic and Evolutionary Computation Conference, pp. 959-966. ACM, 2015.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko.
Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3532-3540, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. In ICML, 2016.

Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.

Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.

Alexander Mordvintsev, Christopher Olah, and Mike Tyka. Inceptionism: Going deeper into neural networks. Google Research Blog. Retrieved June 20, 2015.

Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 833-840, 2011.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Jurgen Schmidhuber. Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes, pp. 48-76. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. ISBN 978-3-642-02565-5. doi: 10.1007/978-3-642-02565-5_4.

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.

Raymond Yeh, Chen Chen, Teck Yian Lim, Mark Hasegawa-Johnson, and Minh N Do. Semantic image inpainting with perceptual and contextual losses. arXiv preprint arXiv:1607.07539, 2016.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

Junyuan Xie, Linli Xu, and Enhong Chen. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems, pp. 341-349, 2012."}]
SkB-_mcel
[{"section_index": "0", "section_name": "CENTRAL MOMENT DISCREPANCY (CMD) FOR DOMAIN-INVARIANT REPRESENTATION LEARNING", "section_text": "Werner Zellinger. Edwin Lughofer & Susanne Saminger-Platz\nthomas.grubinger, thomas.natschlaeger}@scch.at"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The collection and preprocessing of large amounts of data for new domains is often time consuming and expensive. This in turn limits the application of state-of-the-art methods like deep neural net- work architectures, that require large amounts of data. However, often data from related domains can be used to improve the prediction model in the new domain. This paper addresses the particularly important and challenging domain-invariant representation learning task of unsupervised domain adaptation (Glorot et al.]2011f Li et al.[2014] Pan et al.[2011] Ganin et al.]2016). In unsupervised domain adaptation, the training data consists of labeled data from the source domain(s) and unla- beled data from the target domain. In practice, this setting is quite common, as in many applications"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The learning of domain-invariant representations in the context of domain adap. tation with neural networks is considered. We propose a new regularization. method that minimizes the domain-specific latent feature representations directly. in the hidden activation space. Although some standard distribution matching. approaches exist that can be interpreted as the matching of weighted sums of. moments, e.g. Maximum Mean Discrepancy, an explicit order-wise matching of. higher order moments has not been considered before. We propose to match the. higher order central moments of probability distributions by means of order-wise. moment differences. Our model does not require computationally expensive dis-. tance and kernel matrix computations. We utilize the equivalent representation of. probability distributions by moment sequences to define a new distance function,. called Central Moment Discrepancy (CMD). We prove that CMD is a metric on. the set of probability distributions on a compact interval. We further prove that. convergence of probability distributions on compact intervals w. r. t. the new met-. ric implies convergence in distribution of the respective random variables. We test. our approach on two different benchmark data sets for object recognition (Office). and sentiment analysis of product reviews (Amazon reviews). CMD achieves a. new state-of-the-art performance on most domain adaptation tasks of Office and. outperforms networks trained with Maximum Mean Discrepancy, Variational Fair. Autoencoders and Domain Adversarial Neural Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity analysis shows that the new approach. is stable w.r.t. parameter changes in a certain interval. The source code of the. experiments is publicly available'\nthe collection of input data is cheap, but the collection of labels is expensive. Typical examples include image analysis tasks and sentiment analysis, where labels have to be collected manually.\nRecent research shows that domain adaptation approaches work particularly well with (deep) neu. ral networks, which produce outstanding results on some domain adaptation data sets (Ganin et al. 2016;Sun & Saenko 2016} Li et al. 2016} |Aljundi et al.2015} Long et al.[2 2015} Li et al.|2015 Zhuang et al.]2015} Louizos et al. 2016). 
The most successful methods have in common that they encourage similarity between the latent network representations w.r.t. the different domains. This similarity is often enforced by minimizing a certain distance between the networks' domain-specific hidden activations. Three outstanding approaches for the choice of the distance function are the Proxy A-distance (Ben-David et al., 2010), the Kullback-Leibler (KL) divergence (Kullback & Leibler, 1951), applied to the mean of the activations (Zhuang et al., 2015), and the Maximum Mean Discrepancy (MMD; Gretton et al., 2006).

Two of them, the MMD and the KL-divergence approach, can be viewed as the matching of statistical moments. The KL-divergence approach is based on mean (first raw moment) matching. Using the Taylor expansion of the Gaussian kernel, most MMD-based approaches can be viewed as minimizing a certain distance between weighted sums of all raw moments (Li et al., 2015).

The interpretation of the KL-divergence approaches and MMD-based approaches as moment matching procedures motivates us to match the higher order moments of the domain-specific activation distributions directly in the hidden activation space. The matching of the higher order moments is performed explicitly for each moment order and each hidden coordinate. Compared to KL-divergence based approaches, which only match the first moment, our approach also matches higher order moments. In comparison to MMD-based approaches, our method explicitly matches the moments for each order, and it does not require any computationally expensive distance and kernel matrix computations.

The proposed distribution matching method induces a metric between probability distributions. This is possible since distributions on compact intervals have an equivalent representation by means of their moment sequences. We utilize central moments due to their translation invariance and natural geometric interpretation. We call the new metric Central Moment Discrepancy (CMD).

The contributions of this paper are as follows:
• We propose to match the domain-specific hidden representations by explicitly minimizing differences of higher order central moments for each moment order. We utilize the equivalent representation of probability distributions by moment sequences to define a new distance function, which we call Central Moment Discrepancy (CMD).
• Probability theoretic analysis is used to prove that CMD is a metric on the set of probability distributions on a compact interval.
• We additionally prove that convergence of probability distributions on compact intervals w.r.t. the new metric implies convergence in distribution of the respective random variables. This means that minimizing the CMD metric between probability distributions leads to convergence of the cumulative distribution functions of the random variables.
• In contrast to MMD-based approaches, our method does not require computationally expensive kernel matrix computations.
• We achieve a new state-of-the-art performance on most domain adaptation tasks of Office and outperform networks trained with MMD, variational fair autoencoders and domain-adversarial neural networks on Amazon reviews.
• A parameter sensitivity analysis shows that CMD is insensitive to parameter changes within a certain interval. Consequently, no additional hyper-parameter search has to be performed.

In the unsupervised domain adaptation setting, a source domain D_S and a target domain D_T and corresponding samples are given: the source sample S = {(x_i, y_i)}_{i=1}^{n} ~ (D_S)^n and the target sample T = X_T = {x_i}_{i=1}^{m} ~ (D_T)^m. The goal of the unsupervised domain adaptation setting is to build a classifier f : X → Y with a low target risk
\[
R_T(f) = \Pr_{(x,y) \sim D_T}\bigl( f(x) \neq y \bigr),
\]
while no information about the labels in D_T is given. A standard network with parameters θ ∈ Θ is trained by minimizing the expected loss ℓ on a labeled sample,
\[
\min_{\theta \in \Theta} \; E\bigl( \ell(\theta, X, Y) \bigr). \tag{1}
\]
One fundamental assumption of most unsupervised domain adaptation networks is that the source risk R_S(f) is a good indicator for the target risk R_T(f) when the domain-specific latent space representations are similar (Ganin et al., 2016). This similarity can be enforced by matching the distributions of the hidden activations A_H(θ, X_S) and A_H(θ, X_T) of higher layers H. Recent state-of-the-art approaches define a domain regularizer d : ([a,b]^N)^n × ([a,b]^N)^m → [0, ∞), which gives a measure for the domain discrepancy in the activation space [a,b]^N. The domain regularizer is added to the objective by means of an additional weighting parameter λ:
\[
\min_{\theta \in \Theta} \; E\bigl( \ell(\theta, X_S, Y_S) \bigr) + \lambda \cdot d\bigl( A_H(\theta, X_S), A_H(\theta, X_T) \bigr). \tag{2}
\]

(Figure 1: Schematic sketch of a three-layer neural network trained with backpropagation based on objective (2). ∇_θ refers to the gradient w.r.t. θ.)

Fig. 1 shows a sketch of the described architecture and fig. 2 shows the hidden activations of a simple neural network optimized by eq. (1) (left) and eq. (2) (right). It can be seen that similar activation distributions are obtained when optimizing on the basis of the domain-regularized objective (2).

(Figure 2: Hidden activation distributions for a simple one-layer classification network with sigmoid activation functions and five hidden nodes trained with the standard objective (1) (left) and objective (2) that includes the domain discrepancy minimization (right). The approach of this paper was used as domain regularizer. Dark gray: activations of the source domain; light gray: activations of the target domain.)

One approach, proposed by Ganin et al. (2016), is to approximate the Proxy A-distance value ε with a neural network classifier that is simultaneously trained with the original network by means of a gradient reversal layer. They call their approach domain-adversarial neural networks. Unfortunately, a new classifier has to be trained in this approach, including the need for new parameters, additional computation times and validation procedures.
Another approach is to make use of the MMD (Gretton et al., 2006) as domain regularizer,
\[
\mathrm{MMD}(X, Y)^2 = E\bigl( K(X,X) \bigr) - 2\,E\bigl( K(X,Y) \bigr) + E\bigl( K(Y,Y) \bigr), \tag{3}
\]
computed between all examples in X and Y stored by the kernel matrix K(X, Y). A suitable choice of the kernel seems to be the Gaussian kernel e^{-β‖x-y‖²} (Louizos et al., 2016; Li et al., 2015; Tzeng et al., 2014). This approach has two major drawbacks: (a) the need of tuning an additional kernel parameter β, and (b) the need of the kernel matrix computation K(X, Y) (computational complexity O(n² + nm + m²)), which becomes inefficient (resource-intensive) in case of large data sets. Concerning (a), the tuning of β is sophisticated since no target samples are available in the domain adaptation setting. Suitable tuning procedures are transfer-learning-specific cross-validation methods (Zhong et al., 2010). More general methods that don't utilize source labels include heuristics that are based on kernel space properties (Sriperumbudur et al., 2009; Gretton et al., 2012), combinations of multiple kernels (Li et al., 2015), and kernel choices that maximize the MMD test power (Sutherland et al., 2016). The drawback (b) of the kernel matrix computation can be handled by approximating the MMD (Zhao & Meng, 2015), or by using linear time estimators (Gretton et al., 2012). In this work we focus on the quadratic-time MMD with the Gaussian kernel (Gretton et al., 2012; Tzeng et al., 2014) and transfer-learning-specific cross-validation for parameter tuning (Zhong et al., 2010; Ganin et al., 2016).

The two approaches MMD and the Proxy A-distance have in common that they do not minimize the domain discrepancy explicitly in the hidden activation space. In contrast, the authors in Zhuang et al. (2015) do so by minimizing a modified version of the Kullback-Leibler divergence of the mean activations (MKL). That is, for samples X, Y ⊂ R^N,
\[
\mathrm{MKL}(X, Y) = \sum_{i=1}^{N} E(X)_i \log \frac{E(X)_i}{E(Y)_i} + E(Y)_i \log \frac{E(Y)_i}{E(X)_i}. \tag{4}
\]
This approach is fast to compute and has an explicit interpretation in the activation space. Our empirical observations (section Experiments) show that minimizing the distance between only the first moment (mean) of the activation distributions can be improved by also minimizing the distance between higher order moments.

Our approach utilizes the equivalent representation of probability distributions in terms of their moment series. We further utilize central moments due to their translation invariance and natural geometric interpretation. Our approach contrasts with other moment-based approaches, as they either match only the first moment (MKL) or they don't explicitly match the moments for each order (MMD). As a result, our approach improves over MMD-based approaches in terms of computational complexity, with O(N(n+m)) for CMD and O(N(n² + nm + m²)) for MMD. In contrast to MKL-based approaches, more accurate distribution matching characteristics are obtained. In addition, CMD achieves a new state-of-the-art performance on most domain adaptation tasks of Office and outperforms networks trained with MMD, variational fair autoencoders and domain adversarial neural networks on Amazon reviews.

"}, {"section_index": "3", "section_name": "CENTRAL MOMENT DISCREPANCY (CMD)", "section_text": "In this section we first propose a new distance function, CMD, on probability distributions on compact intervals. The definition is extended by two theorems that identify CMD as a metric and analyze a convergence property. The final domain regularizer is then defined as an empirical estimate of the CMD metric. The proofs of the theorems are given in the appendix.

Definition 1 (CMD metric). Let X = (X_1, ..., X_N) and Y = (Y_1, ..., Y_N) be bounded random vectors independent and identically distributed from two probability distributions p and q on the compact interval [a, b]^N.
The central moment discrepancy metric (CMD) is defined by
\[
\mathrm{CMD}(p, q) = \frac{1}{|b-a|}\,\bigl\| E(X) - E(Y) \bigr\|_2 + \sum_{k=2}^{\infty} \frac{1}{|b-a|^k}\,\bigl\| c_k(X) - c_k(Y) \bigr\|_2, \tag{5}
\]
where the central moment vectors are given by
\[
c_k(X) = \Bigl( E\Bigl( \prod_{i=1}^{N} \bigl( X_i - E(X_i) \bigr)^{r_i} \Bigr) \Bigr)_{\substack{r_1 + \cdots + r_N = k \\ r_1, \ldots, r_N \ge 0}}.
\]

The first order central moments are zero, the second order central moments are related to variance, and the third and fourth order central moments are related to the skewness and the kurtosis of probability distributions. It is easy to see that CMD(p, q) ≥ 0, CMD(p, q) = CMD(q, p), CMD(p, q) ≤ CMD(p, r) + CMD(r, q) and p = q ⟹ CMD(p, q) = 0. The following theorem shows the remaining property for CMD to be a metric on the set of probability distributions on a compact interval.

Theorem 1. Let p and q be probability distributions on a compact interval and let CMD be defined as in (5). Then
\[
\mathrm{CMD}(p, q) = 0 \;\Longrightarrow\; p = q.
\]

Our approach is to minimize the discrepancy between the domain-specific hidden activation distributions by minimizing the CMD. Thus, in the optimization procedure, we increasingly expect to see the domain-specific cumulative distribution functions approach each other. This characteristic can be expressed by the concept of convergence in distribution, and it is shown in the following theorem.

Theorem 2. Let p_n and p be probability distributions on a compact interval and let CMD be defined as in (5). Then
\[
\mathrm{CMD}(p_n, p) \to 0 \;\Longrightarrow\; p_n \Rightarrow p,
\]
where ⇒ denotes convergence in distribution.

We define the final central moment discrepancy regularizer as an empirical estimate of the CMD metric. Only the central moments that correspond to the marginal distributions are computed. The number of central moments is limited by a new parameter K and the expectation is sampled by the empirical expectation.

Definition 2 (CMD regularizer). Let X and Y be bounded random samples with respective probability distributions p and q on the interval [a, b]^N. The central moment discrepancy regularizer CMD_K is defined as an empirical estimate of the CMD metric by
\[
\mathrm{CMD}_K(X, Y) = \frac{1}{|b-a|}\,\bigl\| E(X) - E(Y) \bigr\|_2 + \sum_{k=2}^{K} \frac{1}{|b-a|^k}\,\bigl\| C_k(X) - C_k(Y) \bigr\|_2, \tag{6}
\]
where E(X) denotes the empirical expectation and C_k(X) = E((x - E(X))^k) is the vector of k-th order marginal (coordinate-wise) sample central moments.

This definition includes three approximation steps: (a) the computation of only marginal central moments, (b) the bound on the order of central moment terms via the parameter K, and (c) the sampling of the probability distributions by the replacement of the expected value with the empirical expectation.

Applying approximation (a) and assuming independent marginal distributions, a zero CMD distance value still implies equal joint distributions (thm. 1), but convergence in distribution (thm. 2) applies only to the marginals. In the case of dependent marginal distributions, zero CMD distance implies equal marginals and convergence in CMD implies convergence in distribution of the marginals. However, the matching properties for the joint distributions are not obtained with dependent marginals and approximation (a). The computational complexity is reduced to be linear w.r.t. the number of samples.

Concerning (b), Proposition 1 shows that the marginal-distribution-specific CMD terms have an upper bound that is strictly decreasing with increasing moment order. This bound is convergent to zero. That is, higher CMD terms can contribute less to the overall distance value. This observation is experimentally strengthened in subsection Parameter Sensitivity.
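A minimal PyTorch sketch of the CMD_K regularizer in eq. (6) follows (our own illustrative, differentiable implementation; the authors' released code is Keras-based and may differ):

```python
# CMD_K between two batches of hidden activations, eq. (6).
# `hs` and `ht` are (n, N) and (m, N) tensors, assumed to lie in [a, b]^N
# (e.g. [0, 1] for sigmoid units).
import torch

def cmd(hs: torch.Tensor, ht: torch.Tensor, k_max: int = 5,
        a: float = 0.0, b: float = 1.0) -> torch.Tensor:
    ms, mt = hs.mean(dim=0), ht.mean(dim=0)
    scale = b - a
    loss = torch.norm(ms - mt, p=2) / scale          # first-order term
    cs, ct = hs - ms, ht - mt                        # centered activations
    for k in range(2, k_max + 1):                    # order-wise matching
        ck_s = (cs ** k).mean(dim=0)                 # k-th marginal central moments
        ck_t = (ct ** k).mean(dim=0)
        loss = loss + torch.norm(ck_s - ck_t, p=2) / scale ** k
    return loss
```

Note that only coordinate-wise powers and means are needed, which gives the O(N(n+m)) complexity mentioned above.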
Proposition 1. Let X and Y be bounded random vectors with respective probability distributions p and q on the compact interval [a, b]^N. Then
\[
\bigl\| c_k(X) - c_k(Y) \bigr\|_2 \;\le\; 2\sqrt{N}\,|b-a|^k \left( \frac{1}{k+1}\Bigl(\frac{k}{k+1}\Bigr)^{k} + \frac{1}{2^{1+k}} \right).
\]

We would like to underline that the training of neural networks with eq. (2) and the CMD regularizer in eq. (6) can be easily realized by gradient descent algorithms: the gradients of the CMD regularizer are simple aggregations of derivatives of the standard functions g_H, x^k and ‖·‖₂.

"}, {"section_index": "4", "section_name": "5 EXPERIMENTS", "section_text": "Our experimental evaluations are based on two benchmark datasets for domain adaptation, Amazon reviews and Office, described in subsection Datasets. The experimental setup is discussed in subsection Experimental Setup and our classification accuracy results are discussed in subsection Results. Subsection Parameter Sensitivity analyzes the accuracy sensitivity w.r.t. parameter changes of K for CMD and β for MMD.

"}, {"section_index": "5", "section_name": "5.1 DATASETS", "section_text": "Amazon reviews: For our first experiment we use the Amazon reviews data set with the same preprocessing as used by Chen et al. (2012); Ganin et al. (2016); Louizos et al. (2016). The data set contains product reviews of four different product categories: books, DVDs, kitchen appliances and electronics. Reviews are encoded in 5000-dimensional feature vectors of bag-of-words unigrams and bigrams with binary labels: 0 if the product is ranked by 1-3 stars and 1 if the product is ranked by 4 or 5 stars. From the four categories we obtain twelve domain adaptation tasks (each category serves once as source category and once as target category).

Office: The second experiment is based on the computer vision classification data set from Saenko et al. (2010) with images from three distinct domains: amazon (A), webcam (W) and dslr (D). This data set is a de facto standard for domain adaptation algorithms in computer vision. Amazon, the largest domain, is a composition of 2817 images and its corresponding 31 classes. Following previous works we assess the performance of our method across all six possible transfer tasks.

Amazon Reviews: For the Amazon reviews experiment, we use the same data splits as previous works for every task. Thus we have 2000 labeled source examples and 2000 unlabeled target examples for training, and between 3000 and 6000 examples for testing.

We use a similar architecture as Ganin et al. (2016) with one dense hidden layer with 50 hidden nodes, sigmoid activation functions and softmax output function. Three neural networks are trained by means of eq. (2): (a) a base model without domain regularization (λ = 0), (b) one with MMD as domain regularizer and (c) one with CMD as domain regularizer. These models are additionally compared with the state-of-the-art models VFAE (Louizos et al., 2016) and DANN (Ganin et al., 2016). The models (a), (b) and (c) are trained with a similar setup as in Louizos et al. (2016) and Ganin et al. (2016).

For the CMD regularizer, the λ parameter of eq. (2) is set to 1, i.e. the weighting parameter is neglected. The parameter K is heuristically set to five, as the first five moments capture rich geometric information about the shape of a distribution and K = 5 is small enough to be computationally efficient. However, the experiments in subsection Parameter Sensitivity show that similar results are obtained for K ≥ 3.
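For concreteness, a sketch of how such a network's training step combines objective (2) with the cmd regularizer sketched above (again our illustration; the model interface is a placeholder):

```python
# One training step with the domain-regularized objective (2), lambda = 1.
# `model(x)` is assumed to return (hidden activations, class logits).
import torch.nn.functional as F

def training_step(model, xs, ys, xt, lam=1.0, k_max=5):
    hs, logits_s = model(xs)            # labeled source batch
    ht, _ = model(xt)                   # unlabeled target batch
    task_loss = F.cross_entropy(logits_s, ys)
    return task_loss + lam * cmd(hs, ht, k_max=k_max)
```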
Since we have to deal with sparse data, we rely on the Adagrad optimizer (Duchi et al., 2011). For all evaluations, the default parametrization is used as implemented in Keras (Chollet, 2015). All evaluations are repeated 10 times based on different shuffles of the data, and the mean accuracies and standard deviations are analyzed.

For the MMD regularizer we use the Gaussian kernel with parameter β. We performed a hyper-parameter search for β and λ, which has to be performed in an unsupervised way (no labels in the target domain). We use a variant of the reverse cross-validation approach proposed by Zhong et al. (2010), in which we initialize the model weights of the reverse classifier by the weights of the first learned classifier (see Ganin et al. (2016) for details). Thereby, the parameter β is tuned on 10 values between 0.1 and 500 on a logarithmic scale, and the parameter λ is tuned on 10 values between 0.01 and 10 on a logarithmic scale. Without this parameter search, no competitive prediction accuracy results could be obtained.

Office: Since the Office dataset is rather small with only 2817 images in its largest domain, we use the latent representations of the convolutional neural network VGG16 of Simonyan & Zisserman (2014). In particular we train a classifier with one hidden layer, 256 hidden nodes and sigmoid activation function on top of the output of the first dense layer in the network. We again train one base model without domain regularization and a CMD-regularized version with K = 5 and λ = 1.

As an alternative to Adagrad for non-sparse data, we use the Adadelta optimizer from Zeiler (2012). Again, the default parametrization from Keras is used. We handle unbalances between source and target sample by randomly down-sampling (up-sampling) the source sample. In addition, we ensure a sub-sampled source batch that is balanced w.r.t. the class labels.

We follow the standard training protocol for this data set and use all available source and target examples during training. Using this "fully-transductive" protocol, we compare our method with other state-of-the-art approaches including DLID (Chopra et al., 2013), DDC (Tzeng et al., 2014), DAN (Long et al., 2015), Deep CORAL (Sun & Saenko, 2016), and DANN (Ganin et al., 2016) based on fine-tuning of the baseline model AlexNet (Krizhevsky et al., 2012). We further compare our method to LSSA (Aljundi et al., 2015), CORAL (Sun et al., 2016), and AdaBN (Li et al., 2016) based on the fine-tuning of InceptionBN (Ioffe & Szegedy, 2015).

Since all hyper-parameters are set a priori, no hyper-parameter search has to be performed.

All experiments are repeated 10 times with randomly shuffled data sets and random initializations.
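The reverse validation used for tuning β and λ can be sketched as follows (our paraphrase of the variant of Zhong et al. (2010) described above; `train` and the model attributes are placeholders, not a real API):

```python
# Reverse validation risk for one hyperparameter setting, without target labels.
# `train(xs, ys, xt, hp, init=None)` fits a domain-regularized classifier on
# labeled (xs, ys) and unlabeled xt, and returns a model with `.predict` and
# `.weights`; all arrays are assumed to be NumPy arrays.
def reverse_validation_risk(hp, source_x, source_y, target_x, train):
    n_val = len(source_x) // 10
    xs_tr, ys_tr = source_x[n_val:], source_y[n_val:]
    xs_val, ys_val = source_x[:n_val], source_y[:n_val]

    forward = train(xs_tr, ys_tr, target_x, hp)
    pseudo_labels = forward.predict(target_x)       # self-labeled target data

    # reverse direction: pseudo-labeled target -> source, warm-started
    reverse = train(target_x, pseudo_labels, xs_tr, hp, init=forward.weights)
    accuracy = (reverse.predict(xs_val) == ys_val).mean()
    return 1.0 - accuracy    # minimized over the 10 x 10 grid of (beta, lambda)
```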
The last two columns are taken directly from these publications.\nTable 1: Prediction accuracy standard deviation on the Amazon reviews dataset. The last tw columns are taken directly fromLouizos et al.(2016) and Ganin et al. (2016)."}, {"section_index": "8", "section_name": "5.4 PARAMETER SENSITIVITY", "section_text": "The first sensitivity experiment aims at providing evidence regarding the accuracy sensitivity of the CMD regularizer w.r.t. parameter changes of K. That is, the contribution of higher terms in the CMD regularizer are analyzed. The claim is that the accuracy of CMD-based networks does not depend strongly on the choice of K in a range around its default value 5.\nIn fig.3|on the upper left we analyze the classification accuracy of a CMD-based network trained on all tasks of the Amazon reviews experiment. We perform a grid search for the two regularization hyper-parameters and K. We empirically choose a representative stable region for each parameter [0.3, 3] for A and {1,..., 7} for K. Since we want to analyze the sensitivity w.r.t. K, we averaged over the X-dimension, resulting in one accuracy value per K for each of the 12 tasks. Each accuracy is transformed into an accuracy ratio value by dividing it with the accuracy of K = 5. Thus, for each K and task we get one value representing the ratio between the obtained accuracy (for this K and task) and the accuracy of K = 5. The results are shown in fig.3](upper left). The accuracy\nAs one can observe in table[1] our accuracy of the CMD-based model is the highest in 9 out of 12 domain adaptation tasks, whereas on the remaining 3 it is the second best method. However, the difference in accuracy compared to the best method is smaller than the standard deviation over all. data shuffles.\nSource->Target Source Only MMD CMD VFAE DANN books->dvd .787 .004 .796 .008 .805 .007 .799 .784 books->electronics .714 .009 .758 .018 .787 .007 .792 .733 books->kitchen .745 .006 .787 .019 .813 .008 .816 .779 dyd->books .746 .019 .780 .018 .795 .005 .755 .723 dvd->electronics .724 .011 .766 .025 .797 .010 .786 .754 dvd->kitchen .765 .012 .796 .019 .830 .012 .822 .783 electronics->books .711 .006 .733 .017 .744 .008 .727 .713 electronics->dvd .719 .009 .748 .013 .763 .006 .765 .738 electronics->kitchen .844 .005 .857 .007 .860 .004 .850 .854 kitchen->books .699 .014 .740 .017 .756 .006 .720 .709 kitchen->dvd .734 .011 .763 .011 .775 .005 .733 .740 kitchen->electronics .833 .004 .844 .007 .854 .003 .838 .843 average .752 .009 .781 .015 .798 .007 .784 .763\nOffice: Table [2|shows the classification accuracy of different models trained on the Office dataset.. Note that some of the methods (LSSA, CORAL and AdaBN) are evaluated based on the Incep-. tionBN model, which shows higher accuracy than the base model (VGG16) of our method in most. tasks. However, our method outperforms related state-of-the-art methods on all except two tasks, on which it performs similar. We improve the previous state-of-the-art method AdaBN (Li et al.|2016). by more than 3.2% in average accuracy.\nTable 2: Prediction accuracy standard deviation on the Office dataset. The first 10 rows are taken. directly from the papers of Ganin et al.(2016) and[Li et al.(2016). 
The models DLID -DANN are based on the AlexNet model, LSSA -AdaBN are based on the InceptionBN model, and our method (CMD) is based on the VGG16 model.\nMethod A->W D->W W-D A->D D->A W->A average AlexNet .616 .954 .990 .638 .511 .498 .701 DLID .519 .782 .899 DDC .618 .950 .985 .644 .521 .522 .707 Deep CORAL .664 .957 .992 .668 .528 .515 .721 DAN .685 .960 .990 .670 .540 .531 .729 DANN .730 .964 .992 : - - - InceptionBN .703 .943 1.00 .705 .601 .579 .755 LSSA .677 .961 .984 .713 .578 .578 .749 CORAL .709 .957 .998 .719 .590 .602 .763 AdaBN .742 .957 .998 .731 .598 .574 .767 VGG16 .676 .006 .961 .003 .992 .002 .739 .009 .582 .005 .578 .004 .755 CMD .770 .006 .963 .004 992 .002 .796 .006 638 .007 .633 .006 .799\nratios between K = 5 and K E {3, 4, 6, 7} are lower than 0.5%, which underpins the claim that the accuracy of CMD-based networks does not depend strongly on the choice of K in a range around its default value 5. For K = 1 and K = 2 higher ratio values are obtained. In addition, for these two values many tasks show worse accuracy than obtained by K E {3, 4, 5, 6,7}. From this we additionally conclude that higher values of K are preferable to K = 1 and K = 2.\nThe default number of hidden nodes in all our experiments is 256 because of the high classification accuracy of the networks without domain regularization (Source Only) on the source domains. The. question arises if the accuracy of the CMD is lower for higher numbers of hidden nodes. That. is, if the accuracy ratio between the accuracy, of the CMD regularized networks compared to the. accuracy of the Source Only models, decreases with increasing hidden activation dimension. In order. to answer this question we calculate these ratio values for each task of the Amazon reviews data set. for different number of hidden nodes (128, 256, 384, . . . , 1664). For higher numbers of hidden nodes. our Source Only models don't converge with the optimization settings under consideration. For the. parameters X and K we use our default setting = 1 and K = 5. Fig.3|on the lower left shows. the ratio values (vertical axis) for every number of hidden nodes (horizontal axis) and every task. (colored lines). It can be seen that the accuracy improvement of the CMD domain regularizer varies. between 4% and 6%. However, no accuracy ratio decrease can be observed..\nThe same procedure is performed with the MMD weighted by parameter X = 9 and = 1.2 a these values show the highest classification accuracy for 256 hidden nodes. Fig. 3 on the lowe right shows that the accuracy improvement using the MMD decreases with increasing number o hidden nodes for this parameter setting. That is, for accurate performance of the MMD, additiona parameter tuning procedures for X and need to be performed. Note that the problem of finding th best setting for the parameter of the Gaussian kernel is a well known problem (Hsu et al.|2003).\nIn this paper we proposed the central moment discrepancy (CMD) for domain-invariant representa. tion learning, a distance function between probability distributions. Similar to other state-of-the-art. approaches (MMD, KL-divergence, Proxy A-distance), the CMD function can be used to minimize. the domain discrepancy of latent feature representations. This is achieved by order-wise differences\nThe same experimental procedure is performed with MMD regularization wighted by X E [5, 45 and Gaussian kernel parameter E [0.3, 1.7]. We calculate the ratio values w.r.t. 
In this paper we proposed the central moment discrepancy (CMD) for domain-invariant representation learning, a distance function between probability distributions. Similar to other state-of-the-art approaches (MMD, KL-divergence, Proxy A-distance), the CMD function can be used to minimize the domain discrepancy of latent feature representations. This is achieved by order-wise differences of central moments. By using probability theoretic analysis, we proved that CMD is a metric and that convergence in CMD implies convergence in distribution for probability distributions on compact intervals. Our method yields state-of-the-art performance on most tasks of the Office benchmark data set and outperforms Gaussian kernel based MMD, VFAE and DANN on most tasks of the Amazon reviews benchmark data set. These results are achieved with the default parameter setting of K = 5. In addition, we experimentally underpinned the claim that the classification accuracy is not sensitive to the particular choice of K for K ≥ 3. Therefore, no computationally expensive hyper-parameter selection is required.\nIn our experimental analysis we compared our approach to different other state-of-the-art distribution matching methods like the Maximum Mean Discrepancy (MMD) based on the Gaussian kernel using a quadratic time estimate. In the future we want to extend our experimental analysis to other MMD approaches including other kernels, parameter selection procedures and linear time estimators. In addition, we plan to use the CMD for training generative models and to further investigate the approximation quality of the proposed empirical estimate."}, {"section_index": "9", "section_name": "A THEOREM PROOFS", "section_text": "The first claim is that the CMD vanishes only for identical distributions:\nCMD(p, q) = 0 \implies p = q\nProof. Let X and Y be two random vectors that have probability distributions p and q, respectively. Let X̄ = X − E(X) and Ȳ = Y − E(Y) be the mean centered random variables. From CMD(p, q) = 0 it follows that all moments of the bounded random variables X̄ and Ȳ are equal. Therefore, the joint moment generating functions of X̄ and Ȳ are equal. Using the property that the distribution of a bounded random variable is uniquely determined by its moment generating function, the distributions of X̄ and Ȳ are equal, and hence p = q.\nThe second claim, that convergence in CMD implies convergence in distribution, reads\nCMD(p_n, p) \to 0 \implies p_n \Rightarrow p\nFor the higher-order terms of the CMD, the following decreasing upper bound holds:\n\|c_k(X) - c_k(Y)\|_2 \leq 2\sqrt{N} \left( \left( \frac{k}{k+1} \right)^{k} \frac{1}{k+1} + \frac{1}{2^{1+k}} \right)\nProof. Let \mathcal{X}([a, b]) be the set of all random variables with values in [a, b].
Then it follows that\n\|c_k(X) - c_k(Y)\|_2 = |b - a|^k \left\| E\left[ \left( \frac{X - E(X)}{b - a} \right)^{k} \right] - E\left[ \left( \frac{Y - E(Y)}{b - a} \right)^{k} \right] \right\|_2 \leq 2\sqrt{N} \sup_{\tilde{X} \in \mathcal{X}([a,b])} E\left[ \left| \frac{\tilde{X} - E(\tilde{X})}{b - a} \right|^{k} \right].\nThe latter term refers to the absolute central moment of order k, for which the smallest upper bound is known (Egozcue et al., 2012):\n\|c_k(X) - c_k(Y)\|_2 \leq 2\sqrt{N} \sup_{x \in [0,1]} \left( x(1-x)^k + (1-x)x^k \right) \leq 2\sqrt{N} \left( \left( \frac{k}{k+1} \right)^{k} \frac{1}{k+1} + \frac{1}{2^{1+k}} \right).\nThe research reported in this paper has been supported by the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center SCCH.\nWe would like to thank Bernhard Moser and Florian Sobieczky for fruitful discussions on metric spaces."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Patrick Billingsley. Probability and measure. John Wiley & Sons, 2008.\nPatrick Billingsley. Convergence of probability measures. John Wiley & Sons, 2013.\nFrancois Chollet. Keras: Deep learning library for theano and tensorflow, 2015.\nJohn Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.\nYanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. arXiv preprint arXiv:1603.04779, 2016.\nYujia Li, Kevin Swersky, and Richard Zemel. Unsupervised domain adaptation by domain invariant projection. In Neural Information Processing Systems Workshop on Transfer and Multitask Learning, 2014.\nMinmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. Marginalized denoising autoencoders for domain adaptation. International Conference on Machine Learning, pp. 767-774, 2012.\nSumit Chopra, Suhrid Balakrishnan, and Raghuraman Gopalan. Dlid: Deep learning for domain adaptation by interpolating between domains. International Conference on Machine Learning Workshop on Challenges in Representation Learning, 2013.\nMingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, pp. 97-105, 2015.\nSinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210, 2011.\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations, 2014.\nBaochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. In AAAI Conference on Artificial Intelligence, 2016.\nJi Zhao and Deyu Meng. Fastmmd: Ensemble of circular discrepancy for efficient two-sample test. Neural Computation, 27(6):1345-1372, 2015.\nErheng Zhong, Wei Fan, Qiang Yang, Olivier Verscheure, and Jiangtao Ren. Cross validation framework to choose amongst models and datasets for transfer learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 547-562. Springer, 2010.\nFuzhen Zhuang, Xiaohu Cheng, Ping Luo, Sinno Jialin Pan, and Qing He. Supervised representation learning: Transfer learning with deep autoencoders. In International Joint Conference on Artificial Intelligence, 2015.\nDougal J Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, and Arthur Gretton. Generative models and model criticism via optimized maximum mean discrepancy.
arXiv preprint arXiv:1611.04488, 2016.\nEric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014."}]
r1Bjj8qge
[{"section_index": "0", "section_name": "ENCODING AND DECODING REPRESENTATIONS WITH SUM- AND MAX-PRODUCT NETWORKS", "section_text": "Antonio Vergari\nDepartment of Computer Science University of Bari, Italy.\nnicola.dimauro,floriana.esposito}@uniba.it\nSum-Product networks (SPNs) are expressive deep architectures for representing probability distributions, yet allowing exact and efficient inference. SPNs have been successfully applied in several domains, however always as black-box distribution estimators. In this paper, we argue that due to their recursive definition, SPNs can also be naturally employed as hierarchical feature extractors and thus for unsupervised representation learning. Moreover, when converted into Max-Product Networks (MPNs), it is possible to decode such representations back into the original input space. In this way, MPNs can be interpreted as a kind of generative autoencoder, even if they were never trained to reconstruct the input data. We show how these learned representations, if visualized, indeed correspond to \"meaningful parts' of the training data. They also yield a large improvement when used in structured prediction tasks. As shown in extensive experiments, SPN and MPN encoding and decoding schemes prove very competitive against the ones employing RBMs and other stacked autoencoder architectures."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "On a high level, the generative approach to machine learning can be described as follows: Given a se of samples D, drawn (usually i.i.d.) from an unknown distribution p* over random variables (RV. X, recover p* from D. To a certain extent, generative learning (GL) can be seen as the \"kingclass paradigm in machine learning. It is well known that an optimal predictor - given an additional los function can just be derived from p*. For example, assuming that Y is a class variable and X ar observed features, the classifier with minimal expected 0/1-loss is given by argmax,, p* (y, X).\nIt is therefore not surprising that GL and representation learning (RL) (Bengio et al.] 2012) are. highly related, as both aim at \"formally understanding' data. GL can be described as a \"black-box' approach, since we are usually interested in the capability of some model pe to capture the underlying. distribution p*. In RL, however, one may be interested in interpreting the \"inner parts\"' of pe as. abstract features of the original raw data. Both perspectives can be seen in the seminal RL approaches (Hinton & Salakhutdinov]2006) [Bengio et al.2006), as the activations of generatively trained models are employed as data representations for initializing deep architectures..\nAs another simple example, consider a Bayes classifier, which estimates the joint distribution p(Y, X by using the class-prior p(Y ) and class-conditionals p(X Y ). In a purist GL view, we estimate p(Y and p(X|Y) to compute p(Y, X) = p(X|Y) p(Y) x p(Y |X). In an RL approach, however, we would recognize that the parts of our model p(X Y) (or also p(Y X)) can be interpreted as a kind of soft one-hot encoding for Y, and would use them as features in a discriminative approach. The same argument holds for an unsupervised learning scenario, i.e. when Y is unobserved: we would deal with latent mixture models for which p(X |Y) are the mixture components. 
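To make this feature-extraction reading concrete, here is a minimal sketch (ours, with scikit-learn, not code from the paper) where the components of an unsupervised mixture provide a soft one-hot encoding of the input:

import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.rand(200, 16)                 # toy stand-in for the raw data D
gmm = GaussianMixture(n_components=5, random_state=0).fit(X)
features = gmm.predict_proba(X)             # responsibilities p(Y = y | x), shape (200, 5)
# these responsibilities can replace or augment X in any discriminative learner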
In summary, we\nRobert Peharz\nrobert.peharz@medunigraz.at"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "note that any generative model - depending on its structure and semantics - might be a potentia feature extractor and thus a potential useful representation learner.\nIn this paper, we investigate a particular promising candidate for this approach, namely Sum-Product Networks (SPNs) (Poon & Domingos 2011), recently proposed deep probabilistic networks, ad- mitting exact but tractable inference for several kinds of probabilistic queries. SPNs have been successfully applied to computer vision (Gens & Domingos2012} Amer & Todorovic]2015) speech (Peharz et al.|2014b)Zohrer et al. 2015) and language modeling (Cheng et al.[2014). In these works, however, SPNs have been used only as black-box distribution estimators.\nHere we exploit SPNs for RL. One way to interpret SPNs is as a hierarchically structured gen. eralization of mixture models: they are nested arrangements of factorized distributions (produc. nodes) and mixture distributions (weighted sum nodes) defining distributions over subsets of X. Due to this peculiarity, representations extracted from an SPN by evaluating the network node. extend the idea of using mixture components as features, as in the motivating example above, in a. recursive way. In[Vergari et al.(2016) some initial approaches to encode embeddings via an SPN were proposed, showing how these model can constitute an interesting alternative or addition to othe. popular generative feature extractors such as RBMs (Hinton & Salakhutdinov]2006} [Marlin et al.. 2010). The advantages of employing SPNs for RL are that one can \"easily\" learn both structure anc. parameters by leveraging the SPN's recursive probabilistic semantics (Gens & Domingos]2013) rather than imposing an a-priori structure or using an ad-hoc weight learning algorithm, as usually. done for other deep architectures. Rich hierarchical features can be obtained even by such a simpl generative learning scheme. Indeed, in an SPN each node can be seen as a probabilistic part-basec. feature extractor. Visualizations of the filters learned by SPNs trained on images data confirm thai. these networks are able to learn meaningful representations at different levels of abstraction..\nIn this work we provide a way to decode the learned representations back to their original space by. employing a Max-Product Network (MPN) (Poon & Domingos2011). Our decoding procedure. leverages the Most Probable Explanation (MPE) (Darwiche] 2009) inference routine for SPNs and incorporates an imputation mechanism for missing components in a representation to be decoded. To a certain extent, an MPN can be exploited as a kind of generative autoencoder. We continue the. work of|Vergari et al.(2016) by adding other ways to leverage SPN representations, again for \"free\". i.e. without training the network with the aim to reconstruct its input. Additionally, we characterize. conditions when MPNs can be considered perfect encoder-decoders under the proposed scheme. As. a final contribution, we evaluate the meaningfulness and usefulness of SPN and MPN representations in an extensive set of structured output prediction tasks. Having devised a decoding procedure allows. us to explore different learning scenarios, e.g. building embeddings for the input features, for the. labels or for both. We demonstrate that these encoding and decoding schemes, \"cheaply' obtained by. 
a generative SPN, show surprisingly competitive performances when compared to those extracted from RBMs, probabilistic autoencoders (Germain et al., 2015) and deep autoencoders tailored for label embeddings (Wicker et al., 2016), in all the learning scenarios evaluated.\nLet RVs be denoted by upper-case letters, e.g. X, Y, and let corresponding lower-case letters denote their values, e.g. x ~ X. Similarly, boldface notation denotes sets of RVs and their combined values, e.g. X, Y and x, y. For Y ⊆ X and a sample x, we denote with x|Y the restriction of x to Y.\nAn SPN S over a set of RVs X is a probabilistic model defined via a rooted DAG. The leaves of the graph (the SPN's inputs) are computationally tractable, possibly unnormalized distributions over a sub-set of X. When n is a leaf of S, let φ_n denote its associated distribution. The inner nodes compute either weighted sums or products over their children. Let ch(n) be the set of children for a particular node n. For a sum node n and a child c ∈ ch(n), we associate a nonnegative weight w_nc with the outgoing sum-edge n -> c. The set of all sum weights in S (the network parameters) is denoted as w. Furthermore, let S⊕ (resp. S⊗) be the set of all sum (resp. product) nodes in S.\nFigure 1: A complete and decomposable SPN S with leaves over univariate distributions labeled by their scopes (1a); the MPN M obtained from S (1b); and its bottom-up evaluation to solve \arg\max_{q \sim Q} p(q, X_1 = 0, X_2 = 1, X_6 = 0) (1c) with Q = {X_3, X_4, X_5}. Orange (resp. blue) for inner (resp. leaf) activations. A tree path highlighted by MPEAssignment in the top-down traversal of M (1d). The assignment for RVs Q (resp. O = {X_1, X_2, X_6}) is the violet (resp. purple) leaves.\nLet sc(·) denote the scope, a labeling function associating to each node n a subset of X, i.e. sc(n) ⊆ X. For a leaf n, sc(n) is the set of RVs over which n is defined. The scope of an inner node n is defined as sc(n) = ∪_{c ∈ ch(n)} sc(c). The scope gives rise to some fundamental properties of an SPN: S is complete if ∀n ∈ S⊕ and ∀c1, c2 ∈ ch(n): sc(c1) = sc(c2). S is decomposable if ∀n ∈ S⊗ and ∀c1, c2 ∈ ch(n), c1 ≠ c2: sc(c1) ∩ sc(c2) = ∅ (an example in Figure 1a).\nLet S_n denote the sub-network rooted at node n, parametrized by w_n. Each node n in S defines a probability distribution p_{w_n} over its scope by normalizing the output of S_n. Consequently, the distribution of S over all X is defined as the normalized output of the root. S_n(x|sc(n)), or short-hand S_n(x), indicates the output value of node n when X = x is observed as the network input.\nWhile inference in unconstrained SPNs is intractable, marginalization in complete and decomposable SPNs reduces to performing the marginalization task at the leaves and evaluating the inner nodes as usual (Poon & Domingos, 2011; Peharz et al., 2015). An SPN is locally normalized when the weights of each sum node sum to one, i.e. ∑_{c ∈ ch(n)} w_nc = 1 for all n ∈ S⊕. In complete, decomposable and locally normalized SPNs, the distributions of all nodes are already correctly normalized distributions. In the following, we only consider this class of SPNs.\nWhile marginalization can be tackled in time linear in the network size, the problem of finding a Most Probable Explanation (MPE) is generally NP-hard in SPNs (Peharz et al., 2016). Given two sets of RVs Q, O ⊆ X, Q ∪ O = X and Q ∩ O = ∅, inferring an MPE assignment is defined as finding\nx_Q^* = \arg\max_{q \sim Q} p(o, q)   (1)\nHowever, MPE can be solved exactly in selective SPNs (Peharz et al., 2014b; 2016), i.e. SPNs where it holds that for each sample x at most one child of each sum node is non-zero.
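The recursive semantics above fit in a few lines of code. The following is an illustrative sketch (our own minimal toy classes, not the authors' implementation) of evaluating a complete and decomposable SPN bottom-up over discrete inputs:

import numpy as np

class Leaf:
    """Tractable univariate distribution over a single RV (here: categorical)."""
    def __init__(self, var, probs):
        self.var, self.probs = var, np.asarray(probs)
        self.scope = {var}
    def value(self, x):
        return self.probs[x[self.var]]

class Product:
    def __init__(self, children):
        self.children = children
        # decomposability: child scopes are disjoint, so their union is the scope
        self.scope = set().union(*(c.scope for c in children))
    def value(self, x):
        return np.prod([c.value(x) for c in self.children])

class Sum:
    def __init__(self, children, weights):
        # completeness: all children share the same scope
        self.children, self.weights = children, weights
        self.scope = set(children[0].scope)
    def value(self, x):
        return sum(w * c.value(x) for w, c in zip(self.weights, self.children))

# S(x) is root.value(x); with locally normalized weights this is already a probability.
# Replacing Sum by a max over w * child values yields the corresponding MPN.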
MPE in selective SPNs is solved via MPEAssignment (Poon & Domingos2011), which evaluates the network twice. First. one builds an MPN M from S by replacing each node n E S by a max node n E Mmax computing. maxcech(n) WncMc(x) and each leaf distribution by a maximizing distribution (Peharz et al.]|2016) (Figure[1b). One then computes M(x|o) - the MPE probability of the query p(x|o) - by evaluating M bottom-up (Figure[1c). Stage two consists of a top-down traversal of M. Starting from the root.. one follows the maximal child branch for each max node and all child branches of a product node Each partial input configuration determines a unique tree path. The MPE assignment x* is obtained by collecting the MPE solutions (w.r.t. Q) of the leaves in the path (Figure|1d). For selective SPNs the corresponding MPNs compute precisely the same value for each node, since sums and maxes are equivalent when applied to all zeros but one nonnegative value. In the non-selective case, MPNs can. be seen as a (lower-bounding) approximation of SPNs, and are thus also an interesting candidate for. RL, as showed in the next sections. Furthermore, while MPEAssignment solution for general SPNs. is not exact, it is still employable as a reasonable and common approximation (Peharz et al.]2016).\nSPNs and MPNs can be interpreted as very peculiar deep Neural Networks (ANNs) that are labeled.. constrained and fully probabilistic (Vergari et al.2016). They are labeled networks because of the scope function, which enables a direct encoding of the input (Bengio et al.]2012). Their topology. is constrained because of the completeness and decomposability properties, hence connections are. sparse and not dense. Differently from other distribution estimators like NADEs (Larochelle &.\nThe semantics of SPNs enable the design of simple and yet surprisingly effective structure learning. algorithms (Dennis & Ventura]2012) Peharz et al.J2013) Gens & Domingos2013). Many receni attempts and variants (Rooshenas & Lowd2014JAdel et al.]2015Vergari et al.[2015) build upor the currently most prominent algorithm earnSPN, a greedy top-down SPN learner introducec. in (Gens & Domingos)2013). LearnSPN proceeds by recursively decomposing a given data matrix. along its rows (i.e. samples), generating sum nodes and estimating their weights, and its columns (i.e. RVs), generating products. To a certain extent, LearnSPN can be interpreted as a recursive data. crawler, extracting peculiar features from a data matrix, which potentially only live in a particulai. data cluster and/or in a certain subset of RVs. This may be one of the few cases when the structure and parameters of an ANN can be learned without directly optimizing a loss function."}, {"section_index": "3", "section_name": "LEARNING REPRESENTATIONS WITH SPNS AND MPNS", "section_text": "In this section we discuss how to exploit an SPN S or its corresponding MPN M for RL, afte structure and parameters are generatively learned over X, following|Vergari et al.[(2016). We are interested in encoding each sample x' ~ X into a continuous vector representation e' in a new. d-dimensional space, i.e. an embedding e' E Ex Rd, where e' = fs(x') (SPNs) or e' = fM(x' (MPNs). We usually refer to SPNs, since most of the time similar consideration hold also for MPNs\nTo confirm this interpretation, we visualize the features extracted from nodes in an SPN learned on image samples (Vergari et al.|2016). For ANNs, the feature filtered by a hidden neuron can be. 
visualized as the image in the input space that maximally activates that neuron (Erhan et al., 2009). In SPNs this corresponds to solving MPE for the sub-SPN rooted at a particular node, restricted to the node's scope. As stated in Section 2, we employ MPEAssignment as an approximation to this generally hard problem. Figure 2 shows some of the MPE solutions/filter activations for an SPN trained on a binarized version of MNIST (Larochelle et al., 2007) (see Appendix C for details). Note that they resemble part-based features at different levels of complexity: from small blobs (Figure 2a), to shape contours (Figures 2b and 2c), to full digits comprising background parts (Figure 2d). The missing background pixels, visualized in a checkerboard pattern, are due to those pixels being out of the scope of those nodes. This pattern locality is an SPN peculiarity: although also fully connected ANNs typically show locality (e.g. edge filters), the locality information is explicitly encoded in SPNs via the node scopes. This suggests that the scope information alone may already be able to convey a meaningful representation of "object parts", e.g. see the 'O' shapes in Figure 2. Also, note that the filters appear qualitatively different from those of most classical ANNs, which motivates combining SPN features with those from other deep architectures, an approach worth further investigation.\nFigure 2: Visualizing features learned by an SPN trained on a binarized version of MNIST: 4 clusters of 9 images generated from randomly chosen nodes from different parts of the network but having similar scope lengths. The checkerboard pattern indicates pixels out of a node scope.\nWhile in classical deep ANNs the layer depth is usually associated with the level of abstraction of its filters (Erhan et al., 2009; Zeiler & Fergus, 2014; Yosinski et al., 2014), note that this does not easily translate to SPNs. First, even rather simple models might yield extremely deep networks when translated into SPNs. For example, when representing a hidden Markov model (HMM) as an SPN (Peharz et al., 2014b), the SPN's depth grows linearly in the length of the HMM. Thus, representations learned by SPNs are not easily arranged in a meaningful layered hierarchy, due to their constrained topology and how they are learned.
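Since depth is an unreliable proxy here, scope size is the natural ranking key instead. As a quick illustration (our own sketch reusing the toy classes above; collect_nodes is an assumed helper enumerating the DAG's nodes, and activations are assumed strictly positive), an embedding ordered by decreasing scope length can be extracted as:

import math

def embed(root, x, d=None):
    nodes = sorted(collect_nodes(root), key=lambda n: -len(n.scope))  # largest scopes first
    e = [math.log(n.value(x)) for n in nodes]   # activations kept in the log domain
    return e if d is None else e[:d]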
Moreover, LearnSPN-like algorithms can introduce nodes with very different scope information at the same level of depth. This also occurs when compiling SPNs into a minimal layered structure (Vergari et al., 2016)."}, {"section_index": "4", "section_name": "DECODING REPRESENTATIONS", "section_text": "Now we tackle the task to revert SPN representations back to the input space, i.e. to find an inverse transformation g: Ex -> X such that x̂' = g(f(x')) ≈ x'. Being able to decode representations extends the ways one can exploit SPNs for RL to new learning scenarios for predictive tasks. For example, if one were to learn a classifier from features X to labels Y, one could train the classifier to predict the label embeddings Ey rather than Y directly. Then, the predicted embeddings could be turned into the actual outputs by applying g for decoding. By disentangling dependencies over Y in the new space Ey, one can obtain better predictors. Following this principle, label embeddings have been greatly employed in RL for structured output prediction. One common approach is to compress labels into a lower dimensional space; then a regressor is trained to predict such embeddings and the predictions are decoded (decompressed) by an inverse linear transformation (Bhatia et al., 2015; Akata et al., 2013). The advantage of the decoding scheme we propose is that g does not need additional training to be learned; rather, it is provided by an already learned SPN turned into an MPN.\nTherefore, we suggest that it is rather the scope length |sc(n)| of a node n that should be associated with its level of abstraction. The filter activations in Figure 2 give confirming evidence for our conjecture. Thus, when the aim is to compress data into an abstract representation of at most d dimensions, one reasonable filter criterion for SPN/MPN representations would be to collect the d nodes with largest scopes. Clearly, the smaller we choose d, the smaller will be the theoretically achievable quality of the reconstructed data. In our experiments, we leverage SPN representations and decoding schemes by adopting full embeddings, comprising all node activations (all colored values in Figure 1c), and inner embeddings, dropping out the leaf information (only orange values in Figure 1c), according to the observation that the number of leaves is overabundant w.r.t. inner nodes in SPNs built by LearnSPN (Vergari et al., 2016). Note that, in both cases, the embedding size d is adaptively induced by the data, when building the SPN, without the need to fix or tune it beforehand.\nLet a perfect encoder-decoder be a pair (f, g) such that, for each x ~ X, its reconstruction is the exact same sample, i.e. g(f(x)) = x. In our analysis, we focus on MPNs and characterize when they can be used as perfect encoder-decoders. In practice, autoencoders, for which f and g are learned from data, are usually not trained to be perfect encoder-decoders, as they often might learn trivial representations, such as the identity function. This seems not to be an issue for MPNs, since the learning phase is decoupled from the decoding one. We will also empirically confirm it in Section 5.\nGiven an MPN M, the encoder function f_M is given by collecting activations as illustrated in Section 3. Concerning the decoder, we propose a procedure for g_M that mimics the MPEAssignment algorithm as presented in Section 2. Recall that MPEAssignment finds a solution in the input space (top-down phase) after probabilities, i.e.
node activations, are evaluated (bottom-up phase). Consider Eq.|1jin the case in which Q = 0 and sample x' is fully observed. If all the activations from the bottom-up phase are collected into an embedding e', its components will exactly determine the top-down descending\nphase, i.e. which branch to take when encountering a max node. As a consequence, the set of leaves in the traversed tree path completely depend on e'. This is also true if e' components are not determined from a bottom-up phase but come from the \"outside\", e.g. they are predicted. In order to completely define gm, each leaf node encoding $n reached in the top-down phase has to provide itself a decoder. function gon, operating over its scope. Similarly to MPEAssignment, a fully decoded embedding is. then constructed by collecting the reconstructions at the leaves according to each go. decoder..\nIn practice, we are interested in decoding embeddings that have been predicted by some learned model, i.e. the decoding phase is not applied to the embeddings obtained by directly evaluating a network. Nevertheless, it is important to determine under which circumstances these models behave as perfect encoder-decoders when transforming each instance to a new representation and back.\nProposition 1. If for an MPN M over X there exist a perfect encoder-decoder for each leaj distribution n and it holds for each max node n E M that there is only one child node c E ch(n for which M,(x) = wncMc(x), given x ~ X, then M is a perfect encoder-decoder.\nProof. It is easy to demonstrate this by inductive reasoning. If M comprised only a leaf node, then it would be a perfect encoder-decoder by definition. If it were composed by a product node over. child encoder-decoder MPNs, then each input could be reconstructed perfectly by the composition of the reconstruction of the child MPNs. Lastly, if it were composed by a max node over child perfect encoder-decoder MPNs Mc, c = 1. .. k, then it would also be a perfect encoder-decoder since for each possible input, only one child component Mc* would output a value s.t. Mn = wnc* Mc*..\nThus, to complete our decoding procedure, we still have to cope with the leaf decoder func-. tions. We define the decoded state for a leaf n as the configuration over its scope that mini- mizes some distance D over the leaf activation value and its encoded representation: X|sc(n) = argminu~sc(n) D($n(u)||fMn(x)). In our experiments we will employ simple L1 distance. On(u) - fM. (x)]. Unfortunately, decoding is ambiguous for most interesting leaf distributions,. such as Gaussians. However, this approach works well for discrete data used in our experiments. as long as the state probabilities are mutually distinct. In future work, we will explore techniques. to disambiguate decoding the leaves, e.g. by duplicating and splitting Gaussians. In Section[5] we empirically evaluate how good are the decoding schemes depicted here, since it is worth investigating. how close to perfect encoder-decoders MPNs learned on real datasets can be..\nIn order to apply the proposed decoding procedure, a full embedding comprising all the node. activations is required. In some real cases (e.g. data compression), only an incomplete embedding comprising only activations from a subset of the network nodes, is available. 
For certain incomplete embeddings a full decoding, however, is still possible..\nA decodable incomplete embedding e is an embedding such that for each missing activation Mn (x) e corresponding to a node n E M, all the activations ec = Mc(x) Vc E ch(n) are in e. For such ar incomplete embedding, it is sufficient to evaluate the MPN by propagating the embedding activation bottom-up, evaluating parent nodes after their children. The missing embedding components are ther reconstructed and the decoding phase for the now full embedding can proceed as before. If even this child information is missing, such a reconstruction is not possible in general. We argue that in such a case, the missing node activations can be reasonable imputed by their most probable value. Wher encountering a node nj, whose corresponding embedding value e; is not available, e; is estimated a. maxu~sc(n,) Mn,(u) by employing MPEassignment on the sub-networks rooted at nj. Since the MPE activations can be precomputed for all nodes, the complexity of the whole procedure is stil linear in the size of M. The pseudocode for the complete decoding procedure is listed in Appendix[A\nIn our experiments we evaluate the effectiveness and robustness of the decoding procedure both for. complete (full) and incomplete predicted embeddings. In particular, for structured output prediction. we employ inner embeddings (cf. Section|3), where leaf values are imputed using MPEassignment as stated above. Moreover, we investigate its resilience when imputing missing at random embedding. components either by just replacing them by their MPE value or by additionally evaluating the MPN bottom-up after the missing leaf activations have been imputed.."}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "The research questions we are validating are: i) how good are learned MPNs at reconstructing their input when full/inner embeddings are decoded? ii) how meaningful are the representations learned by SPNs and MPNs and how useful is to predict these embeddings instead of the raw targets and then. decoding them? iii) how resilient to missing components are the proposed decoding schemes?.\nFor all experiments we use 10 standard benchmark datasets for MLC. To fairly compare all the algorithms in our experiments, we employ the binarized versions of all datasets already processed by|Di Mauro et al.[(2016) and divided in 5 folds. Detailed dataset statistics are reported in Appendix|B\nWe learn both the structure and weights of our SPN, and hence MPN, models on X andY separatel for each fold by employing LearnSPN-b (Vergari et al.]2015), a variant of LearnSPN (see.) Appendix|C Structural statistics, e.g. the number of inner nodes, for all the models are reported ir. Appendix[D] Please refer to Tables|3|and|4[to determine the extracted embedding sizes."}, {"section_index": "6", "section_name": "5.1 RECONSTRUCTION PERFORMANCES", "section_text": "We want to determine how close to perfect encoder/decoders are MPNs learned from real data an equipped with our decoding schemes. In particular, we evaluate their decoding performances whe. the leaf activations are available (full embeddings), and the decoder employed is the L1 distance, o1. when they are missing (inner embeddings) and therefore their MPE state is used..\nFirst we turn each learned SPN into an MPN. Then, each model is asked to reconstruct both the training and test samples. 
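Schematically, this reconstruction experiment is a single round trip through the MPN. In the sketch below (ours; encode and decode are hypothetical method names standing for the f_M and g_M of Section 4), the EXACT MATCH score is the fraction of perfectly reconstructed samples:

import numpy as np

def exact_match_reconstruction(mpn, samples):
    hits = 0
    for x in samples:
        e = mpn.encode(x)          # bottom-up pass collecting max-node activations
        x_hat = mpn.decode(e)      # top-down pass: best child at max nodes, leaf decoders
        hits += int(np.array_equal(x_hat, x))
    return hits / len(samples)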
Detailed results are reported in Tables5|and[6] AppendixE.2] It can be observed that the L1 leaf decoder proves to be a very reasonable approximation for binary RVs scoring very high reconstructions for all the three measures. For the models over Y, the MPE approximation scores surprisingly good reconstructions scoring > 80% EXACT MATCH on half datasets. In general, if the network is small enough, e.g., MPNs learned on the Flags dataset or on Y alone, it behaves as a perfect encoder-decoder for full embeddings. This demonstrates the efficacy of the proposed decoding schemes and shows how the presence of tied max node children activations impacts non-deterministic MPNs learned from data. We investigate if these potentially perfect reconstructions lead to banal representations in the following experiments."}, {"section_index": "7", "section_name": "5.2 STRUCTURED OUTPUT PREDICTION PERFORMANCES", "section_text": "As a proxy measure to assess the meaningfulness of the learned representations, we are considering their prediction performances. Given a predictive model, its improvement in performance in one of the above settings over the raw input/output case, X -> Y, determines how good the representations employed are. To highlight the ability of these representations to disentangle the dependencies underlying the RVs, we always train a simple linear model in all the settings. In particular, we employ an L2-regularized logistic regressor, LR, (resp. a ridge regressor, RR) to predict each RV\nAll the code employed for the experiments and visualizations will be made available\nStructured output prediction tasks like Multi-label Classification (MLC) offer a good experimental. ground to answer the above questions. In MLC one is interested in predicting the target labels associated to a sample x ~ X and represented as binary arrays: y ~ Y. Since there is no unique way to assess a classifier performance in MLC, we measure the JACCARD, HAMMING and EXACT MATCH scores, as metrics highly employed in the MLC literature and whose maximization equals to. focus on different sets of probabilistic dependencies (Dembczynski et al.||2012)..\nWe now focus on leveraging the representations learned by SPNs and MPNs in an unsupervised way for structured output prediction. In a fully supervised scenario one wants to build a classifier on the input RVs X to predict the output RVs Y directly (X -> Y). Instead, we can first encode both the input RVs X and/or the target RVs Y into different embedding spaces, Ex, Ey, and build a predictive model on top of them. In order to do so, we explore different settings: we learn a classifier on the input embeddings instead of the raw features (Ex -> Y); alternatively, one can first train a regressor on the original input X to predict label embeddings (X -> Ey), then decoding such predictions back to the original label space; finally, the same regressor can be trained on the input embeddings instead (Ex -> Ey) and its predictions decoded as above.\nin Y (resp. component in Ey) independently. Therefore, the most natural baseline to measure th aforementioned representation meaningfulness is to employ the same L2-logistic regressor to th X -> Y setting\nWe now introduce other models as either encoders or encoder/decoders to plug into our unsupervise settings. The aim is to compare SPN/MPN representations against theirs w.r.t. the aforementionec baseline. Therefore we select them as generative models for which inference can be exactly an tractably computed. 
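All competitors below are plugged into the same skeleton. For instance, the Ex -> Y setting amounts to swapping the raw inputs for their embeddings before fitting the linear model; a sketch with scikit-learn (embed is the scope-ranked extractor sketched in Section 3; spn_root, X_train, Y_train, X_test are assumed to be given):

import numpy as np
from sklearn.linear_model import LogisticRegression

E_tr = np.array([embed(spn_root, x) for x in X_train])   # SPN input embeddings
E_te = np.array([embed(spn_root, x) for x in X_test])
# one independent L2-regularized logistic regressor per label, as in the paper
models = [LogisticRegression(penalty="l2", C=1.0).fit(E_tr, Y_train[:, j])
          for j in range(Y_train.shape[1])]
Y_hat = np.column_stack([m.predict(E_te) for m in models])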
For the Ex -> Y setting, we compare to RBMs (Smolensky, 1986) as highly expressive generative models: while their joint likelihood is intractable, exact conditionals of the latent RVs can be computed and have been proven to be very predictive features (Larochelle & Bengio, 2008; Marlin et al., 2010; Larochelle et al., 2010). To evaluate different embedding sizes, we consider RBMs having 500, 1000 and 5000 hidden units (h). A natural competitor for all settings are MADEs (Germain et al., 2015), because they are deep autoencoders which are also tractable probabilistic models. We employ MADEs comprising 3 layers and 500 and 1000 (resp. 200 and 500) hidden units per layer for the Ex -> Y (resp. X -> Ey) setting.\nAdditionally, we add to the comparison MANIAC (Wicker et al., 2016), a non-probabilistic autoencoder model tailored to MLC. In MANIAC, stacked autoencoders are trained to reconstruct Y by compressing each label into a new representation, which, in turn, is used to train a base model exactly as in our X -> Ey setting. We employ architectures up to 4 hidden layers with different compression factors. Finally, we employ a max-margin CRF (Finley & Joachims, 2008), CRFssvm, in the X -> Y setting that considers a dependency structure on the label space in the form of a Chow-Liu tree. In this way we are able to frame the performances of all the models in the unsupervised setting against a fully supervised and discriminative method on the same datasets.\nIn Appendix E.1 we report all the choices made to learn and tune the involved models. For SPNs and hence MPNs, we do not need to define a handcrafted structure a priori like for all the competitors above. Consequently, for RBMs, MADEs and MANIAC one needs to learn and cross-validate several models with different capacities to obtain properly sized embeddings. On the other hand, the size of embeddings extracted from SPNs/MPNs is adaptively determined by data, as stated in Section 3. The learned embedding sizes are reported in Tables 3 and 4 in Appendix D.\nDetailed average fold metrics and their average ranks for all datasets are reported in Tables 9, 7 and 8 in Appendix E.3.1. In Table 1, instead, we report the aggregated scores over all datasets d ∈ D for each method f in the form of the average relative improvement w.r.t. the LR baseline, i.e. \frac{1}{|D|} \sum_{d \in D} \frac{score_f(d) - score_{LR}(d)}{score_{LR}(d)}. The higher their improvement, the better.\nIn summary, SPN and MPN embeddings proved to be highly competitive and even superior to all other models in the three settings and for all the scores. Even the fully supervised and discriminative CRFssvm performance is only comparable to the best SPN/MPN JACCARD (resp. HAMMING) score in the Ex -> Ey (resp. X -> Ey) setting, while reporting a largely worse EXACT MATCH improvement than our models in the Ex -> Ey setting.\nIn particular, the setting Ex -> Y has proven to be hard for many models. This likely indicates that the dependencies on the X might not contribute much to the Y prediction (Dembczynski et al., 2012). Representations from SPNs, even with smaller embeddings than RBMs and MADEs (see Table 3), yield the largest improvements. In the X -> Ey setting, disentangling the relationships among the Y gives all models a performance boost. This is not the case for MADEs on some datasets, probably due to their reconstruction power being traded off against their generalization power as generative models. MPNs, on the other hand, consistently exploit the label representation space and do not provide overfitted reconstructions.
This answers our question about the meaningfulness of MPN representations suggesting that their tendency to be perfect encoder/decoders does not damage their representation performances.\nConcerning two decoding schemes we proposed, operating on incomplete (inner) embeddings not. only performs comparably to the full case, but also scores the best results on some datasets for. JACCARD and EXACT MATCH scores. This aspect can be seen in the Ex -> Ey setting as well\nAdditionally, to better understand the role of our decoding procedures in the X -> Ey and Ex -> Ey settings, we run a new set of experiments in which the decoding phase for Ey is performed by a nearest neighbor (k = 5) model on the basis of the training labelled embeddings. In these settings therefore, we can add to the comparison even RBM-encoded representations. Even in this scenario MPN embeddings are the best or very competitive. As expected, the non-linear kNN predictor performs better than our full decoding on several datasets for the JACCARD and EXACT MATCH scores, but less well for the inner variant. This is likely due to the smaller inner embedding sizes and highlights the goodness of the proposed decoding approach in presence of missing values and, more in general, for maximizing the HAMMING score.\nAll in all, with these structured output prediction tasks we gathered empirical confirmation o the meaningfulness and practical usefulness of SPN and MPN embeddings. The reported larg improvements over the three scores cannot be due to SPN/MPN larger embedding sizes. In fact, thei sizes are always comparable or smaller than RBM, MADE, and MANIAC ones since the latter ma capacities have been chosen after SPNs have been learned (Tables[3]and4] Appendix[D). It is also no possible to state that these representation higher predictive performances are correlated to the SPI ability to better model the data distributions, at least we look at the model likelihoods. Indeed, MAD. log-likelihoods have proven to be higher that SPN ones on many datasets and comparable on th rest. We argue that the reason behind these results lies in the hierarchical part-based representation SPNs provide. Each embedding component is responsible for capturing only the significant featur portions according to its corresponding node scope, as shown in Section|3] The meaningfulness o these components as features has to be found in the structure learning performed by LearnSPN-I (Section 2): while its hierarchical co-clustering chooses to split the data into sub-populations in ai unsupervised way to determine a reasonable distribution estimation, it highlights meaningful ways t discriminate among them.\nLastly, we evaluate the resilience of the decoding procedure proposed when label embedding com. ponents are missing at random in the X -> Ey setting. We want to compare the two imputation. schemes presented in Section4 either employing MPEAssignment to retrieve the most probable activation or evaluating the MPN bottom-up to compute the missing predicted components.\nFor all datasets, for each label embedding that has been predicted, we remove at random a percentage of components varying from 0 (full embedding) to 90%, by increments of 10%. If leaves activations are missing, their MPE activation is considered. After the full embedding has been reconstructed, the decoding phase proceeds as before. Figure3 shows how the two strategies perform differently for the EXACT MATCH score. 
The re-evaluation scheme is the more resilient of the two, being able to maintain scores comparable to the full embedding case up to 30% missing components, and then degrading more slowly than the MPE based one. The proposed decoding scheme therefore proves to be not only surprisingly effective but also quite robust. Similar, but less prominent, behaviors are reported for the JACCARD and HAMMING scores in the Appendix.\nIn this work we investigated SPNs and MPNs under an RL lens. We suggested an interpretation of MPNs as generative autoencoders by providing a decoding procedure that leverages approximate MPE inference. We characterized when these networks can lead to perfect reconstructions of their inputs, linking this property to determinism. When empirically evaluated in an extensive comparison for MLC, SPN and MPN representations ranked among the most predictive features and MPN reconstructions proved to be surprisingly effective. Encouraged by these results, we plan to explore new learning schemes directly exploiting the representations learned by these models, and not optimizing their likelihood scores only. For instance, a differentiable procedure for MPE inference would allow SPNs and MPNs to be trained directly to reconstruct or denoise their input, bridging the gap even more between these networks, autoencoders and other ANNs and opening the path to hybridize them.\nTable 1: Average relative test set improvement in scores w.r.t. the LR baseline (values are percentages). For each setting, best results in bold. Results for the 5-NN decoding are shown in the last two row groups.\nFigure 3: Average test EXACT MATCH scores (y axis) obtained by imputing different percentages of missing random embedding components (x axis) for the X -> Ey setting on all datasets, by employing MPE inference (orange crosses) or the bottom-up evaluation imputation scheme (blue squares). Results for the Cal dataset are not reported since they are all zeros (see Table 9).
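The masking protocol behind Figure 3 can be summarized as follows (a sketch under our own naming; mpe_values would hold the precomputed max_u M_n(u) per node and reevaluate would be the children-first bottom-up pass of Section 4, both hypothetical attributes of an MPN object):

import numpy as np

def corrupt_and_impute(e, frac, mpn, scheme="reeval", rng=np.random):
    e = np.array(e, dtype=float)
    missing = rng.rand(e.size) < frac          # drop a random fraction of components
    if scheme == "mpe":
        e[missing] = mpn.mpe_values[missing]   # impute the most probable activation per node
    else:                                      # "reeval"
        e = mpn.reevaluate(e, missing)         # recompute parents from observed children
    return e                                   # then decode as for a full embedding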
input | output | model | decoder | JACCARD | HAMMING | EXACT MATCH\nX | Y | LR | - | 0.00 | 0.00 | 0.00\nX | Y | CRFssvm | - | +15.83 | +9.94 | +103.90\nRBMh=500 | Y | LR | - | -1.16 | -2.28 | -14.13\nRBMh=1000 | Y | LR | - | +0.90 | -0.85 | -7.19\nRBMh=5000 | Y | LR | - | +1.46 | +0.20 | -1.62\nMADEh=500 | Y | LR | - | +1.15 | +0.00 | -7.04\nMADEh=1000 | Y | LR | - | +2.57 | +0.60 | +2.99\nSPNinner | Y | LR | - | +3.54 | +0.50 | +17.18\nX | MADEh=200 | RR | MADE | -30.76 | +7.10 | -29.71\nX | MADEh=500 | RR | MADE | -30.42 | +7.04 | -28.02\nX | MANIAC | RR | MANIAC | +5.96 | +5.07 | +95.78\nX | MPNfull | RR | MPN | +11.65 | +10.45 | +96.30\nX | MPNinner | RR | MPN | +15.19 | +7.61 | +98.58\nMADEh=500 | MADEh=200 | RR | MADE | -28.14 | +7.10 | -28.00\nMADEh=500 | MADEh=500 | RR | MADE | -27.81 | +6.93 | -27.14\nMADEh=1000 | MADEh=200 | RR | MADE | -27.80 | +6.96 | -29.03\nMADEh=1000 | MADEh=500 | RR | MADE | -27.15 | +6.94 | -25.14\nSPNinner | MPNfull | RR | MPN | +14.52 | +9.97 | +106.62\nSPNinner | MPNinner | RR | MPN | +15.98 | +7.50 | +106.65\nX | RBMh=100 | RR | 5-NN | -7.13 | +6.00 | +6.60\nX | RBMh=200 | RR | 5-NN | -4.25 | +6.82 | +22.59\nX | RBMh=500 | RR | 5-NN | +6.93 | +8.34 | +59.19\nX | MADEh=200 | RR | 5-NN | +11.17 | +7.37 | +82.72\nX | MADEh=500 | RR | 5-NN | +14.57 | +7.38 | +88.62\nX | MPNfull | RR | 5-NN | +27.10 | +8.90 | +133.02\nX | MPNinner | RR | 5-NN | +21.94 | +7.92 | +107.00\nMADEh=500 | MADEh=200 | RR | 5-NN | +9.48 | +7.30 | +81.78\nMADEh=500 | MADEh=500 | RR | 5-NN | +12.77 | +7.12 | +85.78\nMADEh=1000 | MADEh=200 | RR | 5-NN | +11.89 | +7.44 | +84.00\nMADEh=1000 | MADEh=500 | RR | 5-NN | +13.12 | +7.24 | +90.14\nSPNinner | MPNfull | RR | 5-NN | +25.41 | +8.25 | +129.60\nSPNinner | MPNinner | RR | 5-NN | +21.45 | +7.65 | +109.79\n[Figure 3 plots omitted: one panel per dataset (Arts, Business, Emotions, Flags, Health, Human, Plants, Scene, Yeast), EXACT MATCH on the y axis versus the fraction of missing components on the x axis, for the MPE and bottom-up evaluation imputation schemes.]\nAlessandro Antonucci, Giorgio Corani, Denis Deratani Maua, and Sandra Gabaglio. An ensemble of Bayesian networks for multilabel classification. In Proceedings of IJCAI, pp. 1220-1225, 2013.\nKush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. Sparse local embeddings for extreme multi-label classification. In NIPS 28, pp. 730-738, 2015.\nA. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.\nH. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation. In ICML, pp. 473-480, 2007.\nB.M. Marlin, K. Swersky, B. Chen, and N.D. Freitas. Inductive Principles for Restricted Boltzmann Machine Learning. In AISTATS, pp. 509-516, 2010.\nH. Poon and P. Domingos. Sum-Product Networks: a New Deep Architecture. UAI, 2011.\nM. Zohrer, R. Peharz, and F. Pernkopf. Representation learning for single-channel source separation and bandwidth extension. IEEE/ACM TASLP, 23(12):2398-2409, 2015.\nR. Peharz, R. Gens, F. Pernkopf, and P. Domingos. On the latent variable interpretation in sum-product networks. CoRR abs/1601.06180, 2016. (Accepted for publication in TPAMI)"}, {"section_index": "8", "section_name": "DECODING ALGORITHM", "section_text": "Algorithm 1 lists the pseudocode for our decoding procedure as illustrated in Section 4.\n1: Input: an MPN M over X, an embedding e ∈ R^d and a map a : M_e ⊆ M -> {1, ..., d}\n2: Output: a sample x ~ X decoded from e, according to M\n3: x <- 0^|X|\n4: Q <- {root(M)}  ▷ top-down traversal of M by using a queue Q\n5: while not empty(Q) do\n6:   n <- pop(Q)  ▷ process current node\n7:   if n ∈ M_max then  ▷ max node\n8:     c_max <- argmax_{c ∈ ch(n)} w_nc · v_c, with v_c <- e_{a(c)} if c ∈ M_e else max_{u~sc(c)} M_c(u)\n9:     Q <- Q ∪ {c_max}\n10:  else if n ∈ M_⊗ then  ▷ product node\n11:    ∀c ∈ ch(n): Q <- Q ∪ {c}\n12:  else  ▷ leaf node\n13:    if n ∈ M_e then\n14:      x|sc(n) <- argmin_{u~sc(n)} D(φ_n(u) || e_{a(n)})\n15:    else  ▷ MPEAssignment (inner embedding)\n16:      x|sc(n) <- argmax_{u~sc(n)} M_n(u)\n17: return x"}, {"section_index": "9", "section_name": "B DATASETS", "section_text": "Table 2 reports the information about the adopted datasets, where N, M and L represent the number of attributes, instances, and possible labels, respectively.
They are divided into five standard folds. In table 2, card denotes the average label cardinality, dens the label density, and dist(D) = |{y | ∃(x', y) ∈ D}| the number of distinct label configurations appearing in D.\nTable 2: Dataset descriptions: number of attributes (N), instances (M), and labels (L).\ndataset | domain | N | M | L | card | dens | dist\nArts | text | 500 | 7484 | 26 | 1.653 | 0.063 | 599\nBusiness | text | 500 | 11214 | 30 | 1.598 | 0.053 | 233\nCal | music | 68 | 502 | 174 | 26.043 | 0.149 | 502\nEmotions | music | 72 | 593 | 6 | 1.868 | 0.311 | 27\nFlags | images | 19 | 194 | 7 | 3.391 | 0.484 | 54\nHealth | text | 500 | 9205 | 32 | 1.644 | 0.051 | 335\nHuman | biology | 440 | 3106 | 14 | 1.185 | 0.084 | 85\nPlant | biology | 440 | 978 | 12 | 1.078 | 0.089 | 32\nScene | images | 294 | 2407 | 6 | 1.073 | 0.178 | 15\nYeast | biology | 103 | 2417 | 14 | 4.237 | 0.302 | 198\nTo learn the structure and weights of our SPNs (and hence MPNs), we employ LearnSPN-b (Vergari et al., 2015), a variant of LearnSPN. LearnSPN-b always splits the data matrix slices into two, while performing row clustering or checking for RVs independence. With the purpose of slowing down the greedy hierarchical clustering processes, it has proven to obtain simpler and deeper networks without limiting their expressiveness as density estimators. Based on the dataset statistics reported above in Appendix B, we define the same ranges for the LearnSPN-b hyperparameters both when we learn our SPNs for the X and for the Y. We set the G-test independence test threshold to 5, we limit the minimum number of instances in a slice to split to 10, and we performed a grid search for the best leaf distribution Laplace smoothing value in {0.1, 0.2, 0.5, 1.0, 2.0}. We perform all computations in the log space to avoid numerical issues.\nFor the SPN learned on the binarized version of MNIST in Section 3 we set the G-test independence test threshold to 20 and the instance threshold to 50 in order to reduce the network size.
We then applied the same grid search as above for the leaf Laplace smoothing coefficient.\nStatistics for the reference SPN models learned with LearnSPN-b on the X RVs only are reported in Table[3] Their average (and standard deviations) values over the dataset folds provide information about the network topology and quality: how many nodes are in there (edges + 1), how are they divided into leaves and sum and products and their max depth (as the longest path from the root) The same statistics are reported for the SPNs over RVs Y, then turned in MPNs, in Table4\nTable 3: Statistics for the SPN models learned by LearnSPN-b on the X RVs on the ten datasets Average and standard deviation values across the five folds reported..\nedges depth leaves inner sum prod scopes Arts 9241.8 20.2 7412.6 1830.2 605.4 1224.8 1053.6 175.4 1.1 151.7 56.2 19.5 36.8 18.6 Business 8569.6 23.4 7029.0 1541.6 507.6 1034.0 971.4 228.8 1.7 170.7 73.7 24.7 49.1 22.6 Cal 263.0 7.0 219.8 44.2 14.6 29.6 82.6 17.0 0.0 18.5 3.6 1.1 2.5 1.1 Emotions 985.8 13.4 724.6 262.2 87.2 175 147.4 36.4 0.9 20.2 20.2 6.9 13.3 4.7 Flags 74.0 7.0 54.6 20.4 6.8 13.6 25.6 3.9 0.0 1.5 2.5 1.7 0.1 0.5 Health 7209.2 22.2 5917.0 1293.2 427.8 865.4 899.8 249.3 1.1 247.4 21.4 6.4 15.0 7.9 Human 15356.6 19.0 11828.6 3529.0 1170.6 2358.4 1479.2 228.9 1.4 133.8 98.7 32.0 66.8 28.8 Plant 3493.8 13.8 2741.8 753.0 247.4 505.6 681.8 58.6 1.1 42.1 32.8 10.7 22.15 8.9 Scene 14814.6 15.8 11542.6 3273.0 1089.8 2183.2 1025.6 169.1 1.1 122.9 59.9 20.0 40.0 21.8 Yeast 2215.0 18.2 1611.2 604.8 199.6 405.2 262.2 96.1 1.1 72.4 28.3 9.4 19.0 3.9\nThe length of the embeddings extracted from such models is the number of inner nodes from Table|3 for the inner embeddings over X. For the embeddings over RVs Y, their length in the full setting. shall be considered as the number of all nodes from Table4.\nTable 4: Statistics for the SPN models learned by LearnSPN-b on the Y RVs on the ten dataset. Average and standard deviation values across the five folds reported..\nedges depth leaves inner sum prod scopes Arts 495.0 17.8 340.6 155.4 50.2 105.2 74.4 28.5 1.1 21.6 10.8 3.9 7.0 3.5 Business 414.0 18.6 292.6 122.4 40.2 82.2 65.8 18.0 0.9 18.1 5.7 1.8 3.9 2.5 Cal 1840.4 12.6 1428.0 413.4 137.8 275.6 293.6 51.2 0.9 25.8 29.6 9.8 19.7 7.8 Emotions 39.2 7.0 24.6 15.6 5.2 10.4 11.2 4.5 0.0 2.2 2.5 1.7 0.1 0.8 Flags 25.2 5.4 17.8 8.4 2.8 5.6 9.6 4.2 0.9 1.8 2.5 0.8 1.7 0.5 Health 504.2 17.4 355.0 150.2 49.2 101.0 76.4 21.6 1.7 17.5 7.6 2.4 5.3 2.1 Human 118.2 14.2 85.2 34.0 11.0 23.0 25.0 8.2 1.1 5.4 3.8 1.6 2.2 1.6 Plant 80.0 14.6 57.0 24.0 8.0 16.0 20 8.2 2.2 6.2 2.1 0.7 1.4 0.7 Scene 38.4 9.0 24.4 15.0 5.0 10.0 11.0 0.5 0.0 0.5 0.0 0.0 0.0 0.0 Yeast 382.4 14.6 241.2 142.2 46.6 95.6 46.4 33.4 0.9 22.6 12.8 4.1 8.8 4.2\nTo select the best value for the regularization parameter we will perform a grid search for LR in the space {10-4, 10-3, 10-2, 10-1, 1} and for RR in the space {10-4, 10-3, 10-2, 10-1, 1, 10, 102}6 for each experiment."}, {"section_index": "10", "section_name": "E.1.2 LEARNING RBMS", "section_text": "Concerning RBMs, we train them on the X alone (or on the Y alone for the kNN experiments by using the Persistent Constrastive Divergence (PCD) [Marlin et al.(2010) algorithm, leveraging. the implementation available in scikit-learn. For the weight learning hyperparameters we run a. grid search for the learning rate in {0.1, 0.01}, the batch size in {20, 100} and let the number of epochs range in {10, 20, 30} since no early stopping criterion was available. 
We then select the best models according to their pseudo-log likelihoods. To generate embeddings from RBMs, we evaluate the conditional probabilities of the hidden units given each sample. To make the comparison fairer we transform these values in the log domain in the same way we do for our SPN and MPN. representations."}, {"section_index": "11", "section_name": "E.1.3 LEARNING MADES", "section_text": "For MADEs, following the experimentation reported in (Germain et al.]2015), we employ adadelta to schedule the learning rate during training and fix its decay rate at 0.95; we set the max number of worsening iterations on the validation set to 30 as for RBMs and we employed a batch size of 100 samples. We initialize the weights by employing an SVD-based init scheme.\n6we leverage the python implementations for LR and RR from the scikit-learn package (http: // scikit-learn. org/). Note that in scikit-learn the grid parameter for LR has to be interpreted as an inverse regularization coefficient.\nWe learn architectures of three hidden layers comprising 500 and 1000 (resp. 200 and 500) hidder. neurons each for the X (resp. Y). For each reference model, we extract Ex embeddings by evaluating all the hidden layer activations (d = 1500 and d = 3000); for the Ey case, however, only the las hidden layer embeddings are actually exploited for the prediction (d = 200 and d = 500).\nFollowing the experiments in Wicker et al.(2016), we perform a grid search for the following hyperparameters: the number of layers is chosen in {2,3, 4} and the compression factor E {0.7, 0.8, 0.9}. We employ the Java implementation freely available in MEKA. For the RF version of MANIAC we build a random forest comprising 100 trees as it has been used in|Wicker et al.(2016] (see Appendix E.3.2|for the results of such a model)\nWe were not able to properly learn MANIAC for one dataset, Cal, for all measures, as a numerica. error in MEKA prevented the model evaluation, thereby we removed it in the result Table\nWe were also not able to train MANIAC on the Ex -> Y and hence Ex -> Ey settings because the learned representations were not available through MEKA"}, {"section_index": "12", "section_name": "E.2 RECONSTRUCTION ERRORS", "section_text": "Table 5: Average train and test JACcard, HAMming and EXAct match scores for the reconstructior of the original X representations through our SPN models, turned into MPNs, on each dataset.\nscore Arts Business Cal Emotions Flags Health Human Plant Scene Yeast JAC 99.34 79.76 99.94 99.43 100.00 99.53 99.26 99.52 99.44 99.75 full HAM 99.98 99.65 99.97 99.74 100.00 99.99 99.83 99.93 99.86 99.86 EXA 93.95 77.50 98.35 83.47 100.00 96.92 52.39 75.89 56.69 87.67 train JAC 39.94 49.18 95.03 81.40 68.09 52.11 60.61 54.96 73.49 89.47 inner HAM 99.08 99.35 97.62 90.32 89.74 99.45 89.85 92.70 87.74 93.79 EXA 13.84 28.03 97.37 01.56 08.50 26.90 00.00 00.00 00.00 00.57 JAC 99.41 99.72 99.95 99.48 100.00 99.65 99.33 99.60 99.44 99.78 full HAM 99.98 99.99 99.98 99.76 100.00 99.99 99.85 99.94 99.76 99.87 EXA 94.64 97.19 99.00 83.81 100.00 97.61 55.11 78.62 56.37 88.20 test JAC 37.97 48.03 94.56 79.35 66.88 51.08 59.01 99.44 71.74 88.98 inner HAM 99.02 99.31 97.37 89.09 89.34 99.42 89.31 99.76 86.74 93.49 EXA 13.20 27.84 29.08 01.85 07.76 26.31 00.00 56.37 00.00 00.49"}, {"section_index": "13", "section_name": "E.3.1 JACCARD HAMMING AND EXACT MATCH MEASURES", "section_text": "In this Section we report the additional results for the JACCARD and HAMMING measures in Table7 and Table 8|respectively. 
Table 6: Average train and test JACcard, HAMming and EXAct match scores for the reconstruction of the original Y representations through our SPN models, turned into MPNs, on each dataset.

                 Arts   Business  Cal    Emotions  Flags   Health  Human   Plant   Scene   Yeast
train full  JAC  99.94  99.88     98.72  100.00    100.00  99.96   100.00  100.00  100.00  99.95
            HAM  99.99  99.98     99.80  100.00    100.00  99.99   100.00  100.00  100.00  99.98
            EXA  99.80  99.67     72.75  100.00    100.00  99.87   100.00  100.00  100.00  99.73
train inner JAC  88.76  92.19     55.94  78.52     70.47   93.25   90.35   89.44   96.31   95.29
            HAM  98.75  99.34     92.41  91.52     81.82   99.41   98.55   98.42   98.76   98.32
            EXA  75.77  82.44     00.00  53.41     23.96   84.06   82.30   85.91   92.64   80.89
test full   JAC  99.93  99.86     98.89  100.00    100.00  99.97   100.00  100.00  100.00  99.93
            HAM  99.99  99.98     99.82  100.00    100.00  99.99   100.00  100.00  100.00  99.97
            EXA  99.75  99.62     76.50  100.00    100.00  99.89   100.00  100.00  100.00  99.62
test inner  JAC  88.42  92.13     51.98  77.89     70.56   93.11   90.39   89.32   99.95   94.81
            HAM  98.69  99.33     91.52  91.15     81.90   99.39   98.55   98.41   99.98   98.12
            EXA  75.46  82.34     00.00  52.12     23.90   83.87   82.38   85.79   99.73   79.39
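For reference, the three scores reported in these tables can be computed as follows. This is a standard reconstruction of the measures, not the authors' evaluation code; the handling of empty label sets in the Jaccard score is an assumption.

```python
import numpy as np

def multilabel_scores(y_true, y_pred):
    """JACCARD, HAMMING and EXACT MATCH scores (in %) for binary label matrices.

    y_true, y_pred: (n_samples, n_labels) arrays with entries in {0, 1}.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    inter = np.logical_and(y_true, y_pred).sum(axis=1)
    union = np.logical_or(y_true, y_pred).sum(axis=1)
    jac = np.where(union == 0, 1.0, inter / np.maximum(union, 1))  # empty sets match
    ham = (y_true == y_pred).mean(axis=1)       # per-label accuracy
    exa = (y_true == y_pred).all(axis=1)        # whole label vector correct
    return 100 * jac.mean(), 100 * ham.mean(), 100 * exa.mean()
```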
Table 7: Average test set JACCARD scores. For each setting, best result for a dataset in bold and average ranks in the last column. Results for the 5-NN decoding are shown in the last two row groups.

                        Arts   Business  Cal    Emotions  Flags  Health  Human  Plant  Scene  Yeast  RANK
LR                      28.48  49.92     17.43  55.78     48.66  41.11   29.44  32.70  65.43  38.59  -
CRFssvm                 33.61  73.86     19.98  54.48     56.40  62.10   28.96  31.34  66.15  45.47  -

RBM h=500               26.45  47.38     17.52  58.11     51.90  38.11   26.69  33.14  71.61  36.70  4.0
RBM h=1000              27.85  47.81     17.40  57.94     51.64  39.64   29.44  33.53  71.73  37.44  3.4
RBM h=5000              29.16  48.59     17.51  56.91     50.07  40.73   30.30  32.52  69.61  39.25  3.4
MADE h=500              27.81  46.90     17.95  55.92     47.30  41.35   27.84  30.60  68.82  39.58  4.5
MADE h=1000             29.71  48.50     17.86  55.66     54.03  42.84   28.07  31.53  71.49  40.79  3.0
SPN inner               31.63  53.29     17.02  56.84     45.24  43.88   31.51  32.23  71.87  39.70  2.7

MADE h=200              5.03   68.56     20.05  29.21     47.97  40.14   2.58   11.31  12.89  42.73  4.5
MADE h=500              5.08   68.60     20.04  30.02     48.95  38.78   2.47   11.12  15.37  42.82  4.3
MANIAC RR               39.96  73.43     -      49.41     56.51  60.72   33.19  31.37  54.52  49.35  2.0
MPN full                29.30  73.43     20.30  54.30     58.18  57.80   25.86  29.39  61.20  46.83  2.1
MPN inner               35.72  70.53     20.77  52.08     55.86  55.31   27.61  33.07  69.60  47.08  2.0

MADE hx=500,hy=200      6.72   68.40     20.20  34.19     48.02  39.27   3.79   12.58  16.69  42.33  4.6
MADE hx=500,hy=500      6.79   68.39     20.19  33.82     48.76  39.22   3.92   12.57  17.92  42.39  4.2
MADE hx=1000,hy=200     8.37   68.55     19.81  31.65     48.01  39.22   5.59   11.84  16.95  42.37  4.8
MADE hx=1000,hy=500     8.65   68.41     19.83  33.11     48.44  39.50   5.96   11.34  17.61  42.64  3.9
SPN inner -> MPN full   33.47  73.88     19.52  54.48     57.70  60.20   28.67  29.37  63.64  46.50  1.8
SPN inner -> MPN inner  37.64  69.98     20.52  52.50     56.56  59.28   27.82  33.24  65.20  46.05  1.6

RBM h=100               17.59  51.20     20.73  43.26     52.22  32.41   24.03  25.48  70.08  44.41  5.8
RBM h=200               17.07  47.73     21.85  53.73     57.95  38.14   26.52  23.45  60.71  43.91  4.7
RBM h=500               16.76  46.96     21.64  51.31     59.19  36.46   39.16  44.61  71.07  43.19  3.8
MADE h=200              35.23  64.91     22.07  47.20     54.46  55.29   30.37  28.25  65.52  42.72  4.5
MADE h=500              37.36  69.04     21.51  47.55     56.79  56.90   32.47  28.66  62.52  45.93  3.8
MPN full                45.24  73.51     21.09  52.96     54.08  61.56   39.05  38.40  74.22  48.07  2.2
MPN inner               43.11  72.86     20.96  50.79     51.13  59.44   35.50  33.76  73.47  48.24  3.2

MADE hx=500,hy=200      34.67  64.42     21.79  48.10     53.96  53.40   29.61  24.94  67.93  42.96  5.0
MADE hx=500,hy=500      36.57  66.95     20.80  48.60     54.21  58.02   30.82  27.58  64.73  45.63  3.9
MADE hx=1000,hy=200     35.22  64.91     21.93  49.24     55.15  55.29   29.97  27.70  68.27  43.41  3.6
MADE hx=1000,hy=500     37.04  67.57     20.97  47.15     53.36  58.43   32.27  26.64  65.55  45.48  3.9
SPN inner -> MPN full   46.38  73.90     20.56  53.04     52.16  63.81   36.36  36.86  70.27  47.90  1.8
SPN inner -> MPN inner  44.57  73.04     20.28  50.94     50.84  62.29   34.34  33.28  69.37  47.69  2.8
Table 8: Average test set HAMMING scores. For each setting, best result for a dataset in bold and average ranks in the last column. Results for the 5-NN decoding are shown in the last two row groups.

                        Arts   Business  Cal    Emotions  Flags  Health  Human  Plant  Scene  Yeast  RANK
LR                      86.67  92.13     65.25  76.70     68.41  92.24   84.72  86.95  87.69  65.17  -
CRFssvm                 94.93  97.67     84.28  80.91     71.59  96.96   92.13  91.53  91.09  79.22  -

RBM h=500               82.79  91.43     63.13  78.27     66.95  90.18   80.36  83.87  89.72  61.51  5.0
RBM h=1000              84.82  91.41     64.09  78.02     68.34  90.69   83.66  85.36  89.92  63.04  3.7
RBM h=5000              85.90  92.31     64.95  77.82     68.11  91.21   85.64  86.72  89.37  65.51  3.0
MADE h=500              83.90  92.38     64.61  77.37     67.66  91.18   83.25  85.33  89.17  69.40  4.2
MADE h=1000             84.82  93.00     65.04  77.03     68.16  91.48   83.68  85.56  90.14  70.52  2.5
SPN inner               86.10  94.12     62.25  77.60     66.87  91.76   87.39  86.92  90.20  67.54  2.5

MADE h=200              93.80  97.17     86.14  73.97     67.11  95.81   91.54  91.09  82.86  77.98  3.2
MADE h=500              93.80  97.17     86.06  74.08     67.03  95.82   91.54  91.09  82.51  77.82  3.4
MANIAC RR               94.27  97.51     -      78.55     70.06  96.64   89.95  88.68  85.88  77.54  3.2
MPN full                94.80  97.62     86.25  80.69     73.20  96.81   92.09  91.69  91.14  79.34  1.0
MPN inner               92.26  97.28     85.62  77.71     70.35  95.78   89.44  89.35  89.67  74.47  3.8

MADE hx=500,hy=200      93.83  97.17     86.15  74.39     66.43  95.85   91.53  91.17  82.78  77.80  3.3
MADE hx=500,hy=500      93.83  97.17     86.07  74.33     66.20  95.87   91.53  91.14  82.50  77.62  4.0
MADE hx=1000,hy=200     93.86  97.18     86.15  73.13     66.73  95.85   91.53  91.07  83.02  77.89  3.1
MADE hx=1000,hy=500     93.86  97.18     86.05  73.27     67.11  95.58   91.53  91.08  82.80  77.73  3.6
SPN inner -> MPN full   94.93  97.68     86.01  79.99     73.36  96.96   91.16  91.00  89.77  78.94  2.2
SPN inner -> MPN inner  92.78  97.21     85.27  77.66     70.94  96.27   89.34  89.18  88.20  74.16  4.0

RBM h=100               91.34  95.38     84.83  73.35     68.59  94.12   85.82  86.35  89.83  78.30  5.7
RBM h=200               91.28  95.06     84.64  78.21     71.57  94.67   86.86  86.37  86.52  78.35  5.1
RBM h=500               91.18  95.00     84.53  78.22     73.13  94.57   90.64  90.45  90.15  78.40  3.8
MADE h=200              92.84  96.62     85.16  76.05     70.62  95.98   88.83  87.41  88.17  77.43  4.4
MADE h=500              92.80  97.05     85.50  76.78     71.16  96.23   89.31  87.51  86.96  76.01  4.0
MPN full                94.04  97.60     84.96  79.42     69.22  96.78   90.56  89.38  91.15  78.88  1.9
MPN inner               93.62  97.52     85.01  78.69     66.28  96.49   89.82  88.36  90.75  78.09  3.1

MADE hx=500,hy=200      92.62  96.56     85.22  76.02     70.77  95.81   88.97  86.85  89.02  76.84  4.6
MADE hx=500,hy=500      92.62  96.86     85.35  76.55     69.88  96.32   89.02  87.23  87.84  75.87  4.2
MADE hx=1000,hy=200     92.78  96.62     85.04  76.73     70.82  95.98   88.91  87.38  89.17  76.53  3.9
MADE hx=1000,hy=500     92.67  96.94     85.35  77.03     69.68  96.37   89.29  87.51  88.20  75.68  3.5
SPN inner -> MPN full   94.16  97.64     84.81  79.28     67.08  96.95   89.90  89.06  89.66  78.54  1.8
SPN inner -> MPN inner  93.82  97.53     84.78  78.39     66.58  96.75   89.39  88.33  89.25  77.66  2.8

Table 9: Average test set EXACT MATCH scores. For each setting, best result for a dataset in bold and average ranks in the last column. Results for the 5-NN decoding are shown in the last two row groups.
                        Arts   Business  Cal    Emotions  Flags  Health  Human  Plant  Scene  Yeast  RANK
LR                      7.00   27.31     0.00   23.78     9.81   14.14   10.11  19.23  46.36  7.20   -
CRFssvm                 25.33  58.68     0.00   30.18     14.98  49.40   22.54  24.34  60.74  10.72  -

RBM h=500               6.37   23.44     0.00   27.65     6.15   11.09   5.37   13.70  55.09  6.87   4.5
RBM h=1000              6.02   24.49     0.00   26.81     7.71   12.29   8.47   15.64  56.00  6.87   3.6
RBM h=5000              6.52   24.59     0.00   25.80     8.76   13.26   11.01  17.90  54.09  6.62   3.3
MADE h=500              5.90   24.37     0.00   23.95     7.70   15.53   8.40   14.82  55.25  6.82   4.6
MADE h=1000             7.79   22.15     0.00   24.45     9.76   17.24   8.53   16.35  59.78  8.06   2.7
SPN inner               10.37  30.03     0.00   24.62     8.70   19.88   15.03  18.41  56.95  6.95   1.9

MADE h=200              3.24   53.25     0.00   10.11     1.58   28.78   1.96   6.55   9.80   3.93   4.4
MADE h=500              3.30   53.23     0.00   10.28     3.63   27.53   1.88   5.93   11.30  4.10   4.3
MANIAC RR               25.70  56.51     -      22.11     14.51  45.23   23.08  24.75  45.37  12.41  2.1
MPN full                22.45  58.32     0.00   29.51     15.46  46.27   21.34  23.72  56.54  12.04  2.0
MPN inner               25.18  54.50     0.00   25.97     13.44  38.79   23.66  31.29  66.51  12.04  2.1

MADE hx=500,hy=200      4.94   53.29     0.00   9.94      2.08   28.09   2.44   5.21   10.84  3.31   4.1
MADE hx=500,hy=500      5.09   53.26     0.00   9.27      2.59   28.04   2.67   4.80   10.63  3.60   4.3
MADE hx=1000,hy=200     4.73   53.29     0.00   8.26      1.56   27.98   2.73   4.80   9.72   3.93   4.8
MADE hx=1000,hy=500     5.17   53.24     0.00   8.42      3.63   28.04   3.28   5.22   9.80   3.85   3.9
SPN inner -> MPN full   25.97  58.80     0.00   29.34     16.45  48.14   22.02  24.04  55.80  12.86  1.6
SPN inner -> MPN inner  27.72  53.96     0.00   26.14     14.47  43.41   23.66  31.61  62.11  12.20  1.7

RBM h=100               10.06  20.11     0.00   13.83     9.30   14.36   8.88   18.81  66.97  11.34  5.4
RBM h=200               9.50   13.71     0.00   27.48     12.90  24.74   11.97  19.42  58.16  10.84  4.8
RBM h=500               8.51   12.18     0.00   23.26     17.00  21.84   33.03  42.53  67.96  10.54  3.8
MADE h=200              24.31  44.58     0.00   17.36     15.98  41.65   22.50  26.08  60.24  8.31   4.4
MADE h=500              25.08  50.82     0.00   17.19     14.93  43.55   24.50  26.08  56.66  8.68   4.3
MPN full                34.46  57.49     0.00   25.46     8.31   47.78   33.16  35.49  69.75  14.52  2.0
MPN inner               29.79  56.02     0.00   22.93     5.71   43.78   28.94  30.17  67.59  12.90  3.2

MADE hx=500,hy=200      24.21  44.51     0.00   18.21     15.92  40.30   22.09  23.44  63.06  9.14   4.7
MADE hx=500,hy=500      25.11  48.76     0.00   18.38     10.75  44.87   23.08  25.69  58.57  10.21  3.8
MADE hx=1000,hy=200     24.88  44.58     0.00   20.23     13.87  41.65   22.05  25.28  63.23  9.39   4.1
MADE hx=1000,hy=500     25.53  49.79     0.00   16.35     14.43  44.85   24.53  24.66  59.45  9.51   3.8
SPN inner -> MPN full   35.98  57.79     0.00   25.46     7.24   50.13   28.75  34.25  63.60  14.81  1.6
SPN inner -> MPN inner  31.93  56.03     0.00   23.44     6.25   47.58   25.94  30.06  61.36  13.16  2.7
"}, {"section_index": "14", "section_name": "E.3.2 MORE MANIAC RESULTS", "section_text": "In addition to the ridge regressor (RR) employed as the base model in our previous experiments, we also evaluate a more complex regressor, a random forest (RF), in conjunction with MANIAC, as suggested in (Wicker et al. 2016). The rationale behind this is that a linear model, such as RR, could be at a disadvantage on a compressed representation space, like those learned by MANIAC. Results for the JACCARD, HAMMING and EXACT MATCH scores are reported in Table 10, along with our previous results for MPN embeddings employing RR, in the X -> E_Y setting. The performance of a linear model on our embeddings is favorably comparable to that of a non-linear one on MANIAC embeddings, proving the efficacy of MPNs as feature extractors.

Table 10: Average test JACcard, HAMming and EXAct match scores for MANIAC employing a random forest as base model (RF) and our MPN models in the X -> E_Y setting.

                Arts   Business  Cal    Emotions  Flags  Health  Human  Plant  Scene  Yeast
JAC  MANIAC RF  42.04  72.61     -      52.81     53.62  63.08   29.37  31.22  62.26  49.56
     MPN full   29.30  73.43     20.30  54.30     58.18  57.80   25.86  29.39  61.20  46.83
     MPN inner  35.72  70.53     20.77  52.08     55.86  55.31   27.61  33.07  69.60  47.08
HAM  MANIAC RF  93.99  97.49     -      76.53     72.10  96.89   90.16  89.10  88.06  77.63
     MPN full   94.80  97.62     86.25  80.69     73.20  96.81   92.09  91.69  91.14  79.34
     MPN inner  92.26  97.28     85.62  77.71     70.35  95.78   89.44  89.35  89.67  74.47
EXA  MANIAC RF  30.51  55.47     -      24.61     10.93  48.15   18.57  23.52  54.45  12.57
     MPN full   22.45  58.32     0.00   29.51     15.46  46.27   21.34  23.72  56.54  12.04
     MPN inner  25.18  54.50     0.00   25.97     13.44  38.79   23.66  31.29  66.51  12.04
Figure 4: Average test JACCARD scores (y axis) obtained by imputing different percentages of missing random embedding components (x axis) for the X -> E_Y setting on all datasets by employing MPE inference (orange crosses) or the bottom-up evaluation imputation schemes (blue squares).

Figure 5: Average test HAMMING scores (y axis) obtained by imputing different percentages of missing random embedding components (x axis) for the X -> E_Y setting on all datasets by employing MPE inference (orange crosses) or the bottom-up evaluation imputation schemes (blue squares)."}]
SJTQLdqlg
[{"section_index": "0", "section_name": "LEARNING TO REMEMBER RARE EVENTS", "section_text": "Lukasz Kaiser\nGoogle Brain\nlukaszkaiser@google.com\nDespite recent advances, memory-augmented deep neural networks are still lim-. ited when it comes to life-long and one-shot learning, especially in remembering rare events. We present a large-scale life-long memory module for use in deep learning. The module exploits fast nearest-neighbor algorithms for efficiency and thus scales to large memory sizes. Except for the nearest-neighbor query, the module is fully differentiable and trained end-to-end with no extra supervision. It operates in a life-long manner, i.e., without the need to reset it during training.. d111 ddod\nDespite recent advances, memory-aug aeep leulal lelwolks alc Sl ited when it comes to life-long and one-shot learning, especially in remembering rare events. We present a large-scale life-long memory module for use in deep learning. The module exploits fast nearest-neighbor algorithms for efficiency and thus scales to large memory sizes. Except for the nearest-neighbor query, the module is fully differentiable and trained end-to-end with no extra supervision. It operates in a life-long manner, i.e., without the need to reset it during training. Our memory module can be easily added to any part of a supervised neural net- work. To show its versatility we add it to a number of networks, from simple convolutional ones tested on image classification to deep sequence-to-sequence and recurrent-convolutional models. In all cases, the enhanced network gains the ability to remember and do life-long one-shot learning. Our module remembers training examples shown many thousands of steps in the past and it can success- fully generalize from them. We set new state-of-the-art for one-shot learning on"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Machine learning systems have been successful in many domains, from computer visior (Krizhevsky et al., 2012) to speech recognition (Hinton et al., 2012) and machine translatior (Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014). Neural machine translation (NMT is so successful that for some language pairs it approaches, on average, the quality of human trans lators (Wu et al., 2016). The words on average are crucial though. When a sentence resembles on from the abundant training data, the translation will be accurate. However, when encountering rare word such as Dostoevsky (in German, Dostojewski), many models will fail. The correct Ger man translation of Dostoevsky does not appear enough times in the training data for the model tc sufficiently learn its translation.\nWhile more example sentences concerning the famous Russian author might eventually be added to. the training data, there are many other rare words or rare events of other kinds. This illustrates a general problem with current deep learning models: it is necessary to extend the training data and. re-train them to handle such rare or new events. Humans, on the other hand, learn in a life-long. fashion, often from single examples..\nFirst two authors contributed equally. tWork done as a member of the Google Brain Residency program (g . co/brainresidency #Work done during internship at Google Brain.\nOfir Nachum*'\nGoogle Brain\nofirnachum@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We present a life-long memory module that enables one-shot learning in a variety of neural networks.. 
Our memory module consists of key-value pairs. Keys are activations of a chosen layer of a neural network, and values are the ground-truth targets for the given example. This way, as the network is trained, its memory increases and becomes more useful. Eventually it can give predictions that leverage knowledge from past data with similar activations. Given a new example, the network writes it to memory and is able to use it afterwards, even if the example was presented just once.

There are many advantages of having a long-term memory. One-shot learning is a desirable property in its own right, and some tasks, as we will show below, are simply not solvable without it. Even real-world tasks where we have large training sets, such as translation, can benefit from long-term memory. Finally, since the memory can be traced back to training examples, it might help explain the decisions that the model is making and thus improve the understandability of the model.

It is not immediately clear how to measure the performance of a life-long one-shot learning model, since most deep learning evaluations focus on the average performance and do not have a one-shot component. We therefore evaluate in a few ways, to show that our memory module indeed works:

1) We evaluate on the well-known one-shot learning task Omniglot, which is the only dataset with explicit one-shot learning evaluation. This dataset is small and does not benefit from the life-long learning capability of our module, but we still exceed the best previous results and set new state-of-the-art.
2) We devise a synthetic task that requires life-long one-shot learning. On this task, standard models fare poorly while our model can solve it well, demonstrating its strengths.
3) Finally, we train an English-German translation model that has our life-long one-shot learning module. It retains very good performance on average and is also capable of one-shot learning.
On the qualitative side, we find that it can translate rarely-occurring words like Dostoevsky. On the quantitative side, we see that the BLEU score of the generated translations can be significantly increased by showing the model related translations before evaluating.

Our memory consists of a matrix K of memory keys, a vector V of memory values, and an additional vector A that tracks the age of the items stored in memory. Keys can be arbitrary vectors of size key-size, and we assume that the memory values are single integers representing a class or token ID. We define a memory of size memory-size as a triple:

M = (K_{memory-size x key-size}, V_{memory-size}, A_{memory-size}).

A memory query is a vector of size key-size which we assume to be normalized, i.e., ||q|| = 1. Given a query q, we define the nearest neighbor of q in M as any of the keys that maximize the dot product with q:

NN(q, M) = argmax_i  q · K[i].

Since the keys are normalized, the above notion corresponds to the nearest neighbor with respect to cosine similarity. We will also use the natural extension of it to k nearest neighbors, which we denote NN_k(q, M). In our experiments we always used the set of k = 256 nearest neighbors. Given a query q, the memory computes

(n_1, ..., n_k) = NN_k(q, M)

and returns, as the main result, the value V[n_1]. Additionally, we compute the cosine similarities d_i = q · K[n_i] and return softmax(d_1 · t, ..., d_k · t). The parameter t denotes the inverse of the softmax temperature and we set it to t = 40 in our experiments. In models where the memory output is again embedded into a dense vector, we multiply the embedded output by the corresponding softmax component so as to provide a signal about the confidence of the memory.

The forward computation of the memory module is thus very simple, the only interesting part being how to compute nearest neighbors efficiently, which we discuss below. But we must also answer the question of how the memory is trained.

Memory Loss. Assume now that in addition to a query q we are also given the correct desired (supervised) value v. In the case of classification, this v would be the class label. In a sequence-to-sequence task, v would be the desired output token of the current time step. After computing the k nearest neighbors (n_1, ..., n_k) as above, let n_p be the smallest index such that V[n_p] = v and n_b the smallest index such that V[n_b] ≠ v. We call n_p the positive neighbor and n_b the negative neighbor. When no positive neighbor is among the top-k, we pick any vector from memory with value v instead of K[n_p]. We define the memory loss as:

loss(q, v, M) = [q · K[n_b] − q · K[n_p] + α]_+ .

Recall that both q and the keys in memory are normalized, so the products in the above loss term correspond to cosine similarities between q, the positive key, and the negative key. Since cosine similarity is maximal for equal terms, we want to maximize the similarity to the positive key and minimize the similarity to the negative one. But once they are far enough apart (by the margin α = 0.1 in all our experiments), we do not propagate any loss. This definition and the reasoning behind it are almost identical to the one in Schroff et al. (2015) and similar to many other distance metric learning works (Weinberger & Saul, 2009; Weston et al., 2011).

Memory Update. In addition to computing the loss, we will also update the memory M to account for the fact that the newly presented query q corresponds to v. The update is done in a different way depending on whether the main value returned by the memory module is already the correct value or not. As before, let n_1 = NN(q, M) be the nearest neighbor to q. If the memory already returns the correct value, i.e., if V[n_1] = v, then we only update the key for n_1 by taking the average of the current key and q and normalizing it:

K[n_1] ← (q + K[n_1]) / ||q + K[n_1]|| .

Otherwise, when V[n_1] ≠ v, we find a new place in the memory and write the pair (q, v) there. Which place should we choose? We find memory items with maximum age, and write to one of those (randomly chosen).
More formally, we pick n' = argmax_i (A[i] + r_i), where r_i is a random number (|r_i| < M) that introduces some randomness in the choice so as to avoid race conditions in asynchronous multi-replica training. We then set:

K[n'] ← q,  V[n'] ← v,  A[n'] ← 0.

With every memory update we also increment the age of all non-updated indices by 1. The full operation of the memory module is depicted in Figure 1.

Figure 1: The operation of the memory module on a query q with correct value v; see text for details.

Efficient nearest neighbor computation. The most expensive operation in our memory module is the computation of the k nearest neighbors. This can be done exactly or in an approximate way.

In the exact mode, to calculate the nearest neighbors in K to a mini-batch of queries Q = (q_1, ..., q_b), we perform a single matrix multiplication: Q × K^T. This multiplies the batch-size × key-size matrix Q by the key-size × memory-size matrix K^T, and the result is the batch-size × memory-size matrix of all distances, from which we can choose the top-k. This procedure is linear in memory-size, so it can be expensive for very large memory sizes. But matrix multiplication is very heavily optimized, so in our experiments on GPUs we find that this operation is not a bottleneck for memory sizes up to half a million.

If the exact mode is too slow, the k nearest neighbors can be computed approximately using locality-sensitive hashing (LSH). LSH is a hashing scheme such that near neighbors get similar hashes (Indyk & Motwani, 1998; Andoni & Indyk, 2006). For cosine similarity, the computation of an LSH is very simple. We pick a number of random normalized hash vectors h_1, ..., h_l. The hash of a query q is a sequence of l bits, b_1, ..., b_l, such that b_i = 1 if, and only if, q · h_i > 0. It turns out that near neighbors will, with high probability, have a large number of identical bits in their hash. To compute the nearest neighbors it is therefore sufficient to only look into parts of the memory with similar hashes. This makes the nearest neighbor computation work in approximately constant time: we only need to multiply the query by the hash vectors, and then only use the nearest buckets. A compact sketch of the full module follows.
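To make the mechanics concrete, here is a minimal NumPy sketch of the module as described: a cosine-similarity query, the margin loss, and the two update cases. It is a simplified reconstruction for illustration (no LSH, single queries, and an assumed noise range for the write randomness), not the released TensorFlow implementation.

```python
import numpy as np

class Memory:
    """Minimal sketch of the memory module: normalized keys, integer values, ages."""

    def __init__(self, memory_size, key_size, alpha=0.1, t=40.0):
        self.K = np.random.randn(memory_size, key_size)
        self.K /= np.linalg.norm(self.K, axis=1, keepdims=True)
        self.V = np.zeros(memory_size, dtype=np.int64)   # class / token ids
        self.A = np.zeros(memory_size, dtype=np.int64)   # ages
        self.alpha, self.t = alpha, t

    def query(self, q, k=256):
        """q is a normalized query; returns neighbors, main value, softmax confidences."""
        sims = self.K @ q                        # cosine similarities
        nbrs = np.argsort(-sims)[:k]             # (n_1, ..., n_k)
        conf = np.exp(self.t * sims[nbrs])
        return nbrs, self.V[nbrs[0]], conf / conf.sum()

    def loss_and_update(self, q, v, k=256):
        nbrs, top_value, _ = self.query(q, k)
        pos = [n for n in nbrs if self.V[n] == v]
        neg = [n for n in nbrs if self.V[n] != v]
        # if no positive neighbor is in the top-k, fall back to any key with value v
        # (sketch: assumes v already appears in memory and some neighbor has V != v)
        n_p = pos[0] if pos else np.flatnonzero(self.V == v)[0]
        n_b = neg[0]
        loss = max(0.0, q @ self.K[n_b] - q @ self.K[n_p] + self.alpha)
        self.A += 1                              # age every slot; updated slot is reset below
        if top_value == v:                       # case 1: average q into the matching key
            k1 = self.K[nbrs[0]] + q
            self.K[nbrs[0]] = k1 / np.linalg.norm(k1)
            self.A[nbrs[0]] = 0
        else:                                    # case 2: overwrite one of the oldest slots
            r = np.random.randint(0, 100, self.A.shape)   # assumed noise range
            n_new = np.argmax(self.A + r)
            self.K[n_new], self.V[n_new], self.A[n_new] = q, v, 0
        return loss
```

For a batch of queries, the single product self.K @ q becomes Q @ self.K.T, which is the exact mode described above; in the real end-to-end model, the gradient of the loss with respect to q is what trains the layers producing the query.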
"}, {"section_index": "3", "section_name": "2.1 USING THE MEMORY MODULE", "section_text": "The memory module presented above can be added to any classification network. There are two main choices: which layer to use to generate queries, and how to use the output of the module.

In the simplest case, we use the final layer of a network as the query and the output of the module is directly used for classification. This simplest case is similar to matching networks (Vinyals et al., 2016b) and our memory module yields good results already in this setting (see below).

Instead of using the output of the module directly, it is possible to embed it again into a dense representation and mix it with other predictions made by the network. To study this setting, we add the memory module to sequence-to-sequence recurrent neural networks. As described in detail below, a query to memory is made in every step of the decoder network. Memory output is embedded again into a dense representation and combined with inputs from other layers of the network.

Convolutional Network with Memory. To test our memory module in a simple setting, we first add it to a basic convolutional network for image classification. Our network consists of two convolutional layers with ReLU non-linearity, followed by a max-pooling layer, another two convolutional-ReLU layers, another max-pooling, and two fully connected layers. All convolutions use 3 × 3 filters with 64 channels in the first pair, and 128 in the second. The fully connected layers have dimension 256 and dropout applied between them. The output of the final layer is used as the query to our memory module and the nearest neighbor returned by the memory is used as the final network prediction. Even this basic architecture yields good results in one-shot learning, as discussed below.

Sequence-to-sequence with Memory. For large-scale experiments, we add the memory module into a large sequence-to-sequence model. Such sequence-to-sequence recurrent neural networks (RNNs) with long short-term memory (LSTM) cells (Hochreiter & Schmidhuber, 1997) have proven especially successful at natural language processing (NLP) tasks, including machine translation (Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014). We add the memory module to the Google Neural Machine Translation (GNMT) model (Wu et al., 2016). This model consists of an encoder RNN, which creates a representation of the source language sentence, and a decoder RNN that outputs the target language sentence. We left the encoder RNN unmodified. In the decoder RNN, we use the vector retrieved by the attention mechanism as the query to the memory module. In the GNMT model, the attention vector is used in all LSTM layers beyond the second one, so the computation of the other layers and the memory can happen in parallel. Before the final softmax layer, we combine the embedded memory output with the output of the final LSTM layer using an additional linear layer, as depicted in Figure 2.

Figure 2: The GNMT model with added memory module. On each decoding step t, the result of the attention a_t is used to query the memory. The resulting value is combined with the output of the final LSTM layer to produce the predicted logits y_t. See text for further details.

Extended Neural GPU with Memory. To test the versatility of our memory module, we also add it to the Extended Neural GPU, a convolutional-recurrent model introduced by Kaiser & Bengio (2016). The Extended Neural GPU is a sequence-to-sequence model too, but its decoder is convolutional and the size of its state changes depending on the size of the input. Again, we leave the encoder part of the model intact, and extend the decoder part by a memory query. This time, we use the position one step ahead to query memory, and we put the embedded result on the output tape, as shown in Figure 3. Note that in this model the result of the memory will be processed by two recurrent-convolutional cells before the corresponding output is produced. The fact that this model still does one-shot learning confirms that the output of our memory module can be used deep inside a network, not just near the output layer.

Figure 3: Extended Neural GPU with memory module. Memory query is read from the position one below the current output logit, and the embedded memory value is put at the same position of the output tape p. The network learns to use these values to produce the output in the next step.
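As a rough sketch of the combination step described for the GNMT decoder, the memory value can be re-embedded and mixed with the final LSTM output through a linear layer before the softmax. All shapes and names below are illustrative assumptions (the Memory class is the sketch from Section 2), not the GNMT code.

```python
import numpy as np

def decoder_step_logits(h_top, attention_vec, memory, value_emb, W_mix, W_out):
    """Combine the final-LSTM output with an embedded memory value (illustrative).

    h_top:         (d,) output of the top decoder LSTM layer
    attention_vec: (key_size,) attention result, used as the memory query
    memory:        a Memory instance as sketched above
    value_emb:     (vocab_size, d) embedding table for memory values
    W_mix:         (2d, d) linear layer mixing the two signals
    W_out:         (d, vocab_size) linear layer producing logits
    """
    q = attention_vec / np.linalg.norm(attention_vec)
    nbrs, value, conf = memory.query(q)
    mem_vec = conf[0] * value_emb[value]       # scale by the memory's confidence
    mixed = np.concatenate([h_top, mem_vec]) @ W_mix
    return mixed @ W_out                       # predicted logits y_t
```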
Memory in Neural Networks. Augmenting neural networks with memory has been heavily studied recently. Many of these approaches design a memory component that is intended as a generalization of the memory in standard recurrent neural networks. In recurrent networks, the state passed from one time step to the next can be interpreted as the network's memory representation of the current example. Moving away from this fixed-length vector representation of memory to a larger and more versatile form is at the core of these methods.

Augmenting recurrent neural networks with attention (Bahdanau et al., 2014) can be interpreted as creating a large memory component that allows content-based addressing. More generally, Graves et al. (2014) augmented a recurrent neural network with a computing-inspired memory component that can be addressed via both content- and address-based queries. Sukhbaatar et al. (2015) present a similar augmentation and show the importance of allowing multiple reads and writes to memory between inputs. These approaches excel at tasks where it is necessary to store large parts of a sequential input in a representation that can later be precisely queried. Such tasks include algorithmic sequence manipulation tasks, natural language modelling, and question-answering tasks.

The success of these approaches hinges on making the memory component fully differentiable and backpropagating signal through every access of memory. In this setting, computational requirements necessitate that the memory be small. Some attempts have been made at making hard access queries to memory (Zaremba & Sutskever, 2015; Xu et al., 2015), but it was usually challenging to match the soft version. Recently, more successful training for hard queries was reported (Gulcehre et al., 2016) that makes use of a curriculum strategy that mixes soft and hard queries at training time. Our approach applies hard access as well, but we encourage the model to make good queries via a special memory loss.

Modifications to allow for large-scale memory in neural networks have been proposed. The original implementation of memory networks (Weston et al., 2014) and later work on scaling it (Bordes et al., 2015; Chandar et al., 2016) used memory with size in the millions. The cost of doing so is that the memory must be fixed prior to training. Moreover, since during the beginning of training the model is unlikely to query the memory correctly, strong supervision is used to encourage the model to query memory locations that are useful. These hints are either given as additional supervising information by the task or determined heuristically as in Hill et al. (2015).

All the work discussed so far has either used a memory that is fixed before training or used a memory that is not persistent between different examples. For one-shot and lifelong learning, a memory must necessarily be both volatile during training and persistent between examples. To bridge this gap, Santoro et al. (2016) propose to partition training into distinct episodes consisting of a sequence of labelled examples {(x_i, y_i)}_{i=1}^n. A network augmented with a fully-differentiable memory is trained to predict y_i given the previous sequence (x_1, y_1, ..., x_{i-1}). This way, the model learns to store important examples with their corresponding labels in memory and later re-use this information to correctly classify new examples. This model successfully exhibits one-shot learning on Omniglot.

However, this approach again requires fully-differentiable memory access and thus limits the size of the memory as well as the length of an episode. This restriction has recently been alleviated by Rae et al. (2016). Their model can utilize large memories, but unlike our work does not have an explicit cost to guide the formation of memory keys.

One-shot Learning. While the recent work of Santoro et al. (2016) succeeded in bridging the gap between memory-based models and one-shot learning, the field of one-shot learning has seen a variety of different approaches over time.
Early work utilized Bayesian methods to model data generatively (Fei-Fei et al., 2006; Lake et al., 2011). The paper that introduced the Omniglot dataset (Lake et al., 2011) approached the task with a generative model for strokes. This way, given a single character image, the probability of a different image being of the same character may be approximated via standard techniques. One early neural network approach to one-shot learning was given by Siamese networks (Koch, 2015). When our approach is applied to the Omniglot image classification dataset, the resulting training algorithm is actually similar to that of Siamese networks. The only difference is in the loss function: Siamese networks utilize a cross-entropy loss whereas our method uses a margin triplet loss.

A more sophisticated neural network approach is given by Vinyals et al. (2016). The strengths of this approach are (1) the model architecture utilizes recent advances in attention-augmented neural networks for set-to-set learning (Vinyals et al., 2016a), and (2) the training algorithm is designed to exactly match the testing phase (given k distinct images and an additional image, the model must predict which of the k images is of the same class as the additional image). This approach may also be considered as a generalization of previous work on metric learning.

For classification tasks like Omniglot, it is easy to construct short episodes so that they include a few examples from each of several classes. However, this becomes harder as the output becomes richer. For example, in the difficult sequence-to-sequence tasks which we consider, it is hard to determine a priori which examples would be helpful for correctly predicting others, and so constructing short episodes each containing examples that are similar and act as hints to each other is intractable."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "We perform experiments using all three architectures described above. We experiment both on real-world data and on synthetic tasks that give us some insight into the performance and limitations of the memory module. In all our experiments we use the Adam optimizer (Kingma & Ba, 2014) and the parameters for the memory module remain unchanged (k = 256, α = 0.1). Good performance with a single set of parameters shows the versatility of our memory module. The source code for the memory module, together with our settings for Omniglot, is available on github¹.

¹ https://github.com/tensorflow/models/tree/master/learning_to_remember_rare_events

Omniglot. The Omniglot dataset (Lake et al., 2011) consists of 1623 characters from 50 different alphabets, each hand-drawn by 20 different people. The large number of classes (characters) with relatively few data per class (20) makes this an ideal data set for testing one-shot classification. In the N-way Omniglot task setup we pick N unseen character classes, independent of alphabet. We provide the model with one drawing of each character and measure its accuracy the K-th time it sees the character class. Our setup is identical to Vinyals et al. (2016b), so we also augmented the data set with random rotations by multiples of 90 degrees and use 1200 characters for training and the remaining character classes for evaluation. We present the results from Vinyals et al. (2016b) and ours in Table 1. Even with a simpler network without batch normalization, we get similar results.

Table 1: Results on the Omniglot dataset. Although our model uses only a simple convolutional neural network, the addition of our memory module allows it to approach much more complex models on 1-shot and multi-shot learning tasks.

Model                       5-way 1-shot  5-way 5-shot  20-way 1-shot  20-way 5-shot
Pixels Nearest Neighbor     41.7%         63.2%         26.7%          42.6%
MANN (no convolutions)      82.8%         94.9%         -              -
Convolutional Siamese Net   96.7%         98.4%         88.0%          96.5%
Matching Network            98.1%         98.9%         93.8%          98.5%
ConvNet with Memory Module  98.4%         99.6%         95.0%          98.6%
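The N-way evaluation protocol just described can be sketched as follows. This is a generic reconstruction of the protocol with hypothetical helper names (predict, observe, draw), not the paper's evaluation script.

```python
import numpy as np

def evaluate_n_way(model, classes, draw, n_way=5, max_k=5, episodes=100):
    """Accuracy the K-th time the model sees each of N unseen classes.

    model:   exposes predict(image) -> label and observe(image, label),
             where observe writes the example into the memory module
    classes: list of held-out character classes
    draw:    draw(c, k) returns the k-th drawing of class c
    """
    correct, total = np.zeros(max_k), np.zeros(max_k)
    for _ in range(episodes):
        episode = np.random.choice(len(classes), n_way, replace=False)
        for k in range(max_k):
            for c in episode:
                img = draw(classes[c], k)
                correct[k] += (model.predict(img) == c)
                total[k] += 1
                model.observe(img, c)      # one-shot write to memory
    return correct / total
```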
Synthetic task. To better understand the memory module operation and to test what it can remember, we devise a synthetic task and train the Extended Neural GPU with and without memory (we use a small Extended Neural GPU with 32 channels and a memory of size half a million).

To create training and test data for our synthetic task, we use symbols from the set S = {2, ..., 16000} and first fix a random function f : S -> S. The function f is chosen at random, but fixed and the same for all training and testing examples (we used 40K training examples).

In our synthetic task, the input is a sequence consisting of As and Bs with one continuous substring of 7 digits from the set {0, 1, 2, 3}. The substring is interpreted as a number written in base 4, e.g., 1982 = 132332_4, so the string 132332 would be interpreted as 1982. The corresponding output is created by copying all As and Bs, but mapping the number through the random function f. For instance, assuming f(1982) = 3726, the output corresponding to 132332 would be 322032, as 3726 = 322032_4. Here is an example of an input-output pair built from these values:

A B A A 1 3 2 3 3 2 B B  ->  A B A A 3 2 2 0 3 2 B B

This task clearly requires memory to store the fixed random function. Since there are 16K elements to learn, it is hard to memorize, and each single instance occurs quite rarely. The raw Extended Neural GPU (or any other sequence-to-sequence model) is limited by its size. With long training, the small model can memorize some of the sequences, but it is only a small fraction.

Additionally, there is no direct indication in the data what part of the input should trigger the production of each output symbol. For example, to produce the first 3 in the output above, the memory key needs to encode all base-4 symbols from the input: not just one or two aligned symbols, but a number of them. Moreover, it should not encode more symbols, or it will not generalize to the test set. Similarly, a basic nearest neighbor classifier fails on this task. We use sequences of length up to 40 during training, but there are only 7 relevant symbols. A simple nearest neighbor by Hamming distance will most probably select some sequence with a similar prefix or suffix of As and Bs, and not the one with the corresponding base-4 part. We also trained a large sequence-to-sequence model with attention on this task (a 2-layer LSTM model with 256 units in each layer). This model can memorize the whole training set, but it suffers from a similar problem as the Hamming nearest neighbor: it almost doesn't generalize, and its accuracy on the test set is only about 1%. The same model with a memory module generalizes much better, reaching over 30% accuracy. The Extended Neural GPU with our memory module yields even better results, see Table 2.

Table 2: Results on the synthetic task. We report the percentage of fully correct sequences from the test set, which contains 10000 random examples. See text for details.
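The data generation just described is easy to reproduce; the following sketch illustrates it, with assumed formatting details such as the padding scheme for the As and Bs.

```python
import random

S = list(range(2, 16000))
f = {s: random.choice(S) for s in S}   # fixed random function f : S -> S

def to_base4(n, width=7):
    digits = []
    while n:
        digits.append(str(n % 4))
        n //= 4
    return "".join(reversed(digits)).rjust(width, "0")

def make_example(max_len=40):
    s = random.choice(S)
    num_in, num_out = to_base4(s), to_base4(f[s])
    pad = max_len - len(num_in)
    left = "".join(random.choice("AB") for _ in range(random.randint(0, pad)))
    right = "".join(random.choice("AB") for _ in range(pad - len(left)))
    return left + num_in + right, left + num_out + right

x, y = make_example()   # input/output pair: As and Bs copied, number mapped by f
```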
Translation. To evaluate the memory module in a large-scale setting, we use the GNMT model (Wu et al., 2016) extended with our memory module on the WMT14 English-to-German translation task. We evaluate the model both qualitatively and quantitatively.

On the qualitative side, we note that our memory-augmented model can successfully translate rare words like Dostoevsky, unlike the baseline model, which predicts an identity-mapped Dostoevsky for the German translation of Dostoevsky.

On the quantitative side, we use the WMT test set. We find that in terms of BLEU score, an aggregate measure, the memory-augmented GNMT is on par with the baseline GNMT, see Table 3.

To evaluate our memory-augmented model for one-shot capabilities we split the test set in two. We take the even lines of the test set (index starting at 0) as a context set and the odd lines of the test set as the one-shot evaluation set. While showing the context set to the model, no additional training occurs, only memory updates are allowed. So the weights of the model do not change, but the memory does. Since the sentences in the test set are highly correlated with each other (they come from paragraphs with preserved order), we expect that if we allow a one-shot capable model to use the context set to update its memory and then evaluate it on the other half of the test set, its accuracy will increase. For our GNMT-with-memory model, we passed the context set through the memory update operations 3 times. As seen in Table 3, the context set indeed helps when evaluating on the odd lines, increasing the BLEU score by almost 0.5. As further indication that our memory module works properly, we also evaluate the model after showing the whole test set as a context set. Note that this is essentially an oracle: the memory module gets to see all the correct answers; we do this only to test and debug. As expected, this increases the BLEU score dramatically, by over 8 points.

Table 3: Results on the WMT En-De task. As described in the text, we split the test set in two (odd lines and even lines) to evaluate the model on one-shot learning. Given the even test set, the model can perform better on the odd test set. We also see a dramatic improvement when the model is provided with the whole test set, validating that the memory module is working as intended.

Model                                            Full Test  Odd Test
GNMT                                             23.25      23.17
GNMT with Memory Module                          23.29      23.16
GNMT with Memory Module and Even Test context    -          23.60
GNMT with Memory Module and Whole Test context   31.11*     -
"}, {"section_index": "5", "section_name": "5 DISCUSSION", "section_text": "We presented a long-term memory module that can be used for life-long learning. It is versatile, so it can be added to different deep learning models and at different layers to give the networks a one-shot learning capability. Several parts of the presented memory module could be tuned and studied in more detail. The update rule that averages the query with the correct key could be parametrized. Instead of returning only the single nearest neighbor we could also return a number of them to be processed by other layers of the network. We leave these questions for future research.

The main issue we encountered, though, is that evaluating one-shot learning is difficult, as standard metrics do not focus on this scenario. In this work, we adapted the standard metrics to investigate our approach. For example, in the translation task we used half of the test set as context for the other half, and we still report the standard BLEU score. This allows us to show that our module works, but it is only a temporary solution. Better metrics are needed to accelerate progress of one-shot and life-long learning.
Thus, we consider the present work as just a first step on the way to making deep models learn to remember rare events through their lifetime."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell., 28(4):594-611, April 2006. ISSN 0162-8828. doi: 10.1109/TPAMI.2006.79. URL http://dx.doi.org/10.1109/TPAMI.2006.79.

Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic neural turing machine with soft and hard addressing schemes. CoRR, abs/1607.00036, 2016.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. CoRR, abs/1511.02301, 2015.

Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/abs/1410.5401.

Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdelrahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. In International Conference on Learning Representations (ICLR), 2016a.

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. CoRR, abs/1606.04080, 2016b.

Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in Neural Information Processing Systems (NIPS), 2016.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. One-shot learning with memory-augmented neural networks. CoRR, abs/1605.06065, 2016.

Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, pp. 815-823, 2015.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. Weakly supervised memory networks. CoRR, abs/1503.08895, 2015. URL http://arxiv.org/abs/1503.08895.

Kilian Q Weinberger and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10(Feb):207-244, 2009.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, abs/1505.00521, 2015. URL http://arxiv.org/abs/1505.00521.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le.
Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 2014.

Jason Weston, Samy Bengio, and Nicolas Usunier. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, 2011.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144."}]
r1IRctqxg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Sample importance is the sample's contribution to the parameter change during training. In statistics the concept \"leverage\"' of a point is used (St Laurent & Cook (1992)) to measure the impact of sample on the training of a model. In the context of SVM, the most important samples are the suppor vectors as they define the separating hyperplane. Understanding the importance of the sample. can help us interpret trained models and structure training to speed up convergence and improv prediction accuracy. For example, Curriculum learning (CL) from Bengio et al.[(2009) shows tha training with easy samples first, then gradually transitioning to difficult samples can improve th learning. In CL, the \"easiness\"' of a sample is predefined either manually or using an evaluatio. model. Self-paced learning (SPL) (Kumar et al.[(20i0) shows that it is possible to learn fron samples in order of easiness. In this framework, easiness is related to the prediction error and ca. be estimated from the model. However, easiness of a sample may not be sufficient to decide whe. it should be introduced to a learner. Maintaining diversity among the training samples can have. substantial effect on the training (Jiang et al.(2014))."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "In this work, we explore the sample importance in deep neural networks. Deep learning methods have been successfully applied in many tasks and routinely achieve better generalization error than classical shallow methods (LeCun et al.(2015)). One of the key characteristics of a deep network is its capacity to construct progressively more complex features throughout its layers (Lee et al. (2011)). An intuitive question arises: which samples contribute the most to the training of the differ-. ent layer's parameters? From literature |Saxe et al.(2011), we know that even randomly generated filters can compute features that lead to good performance -- presumably on easy samples. However,. to learn hard samples correctly, the model may need to construct complex features, which require both more training time and refined filters from bottom layers. Hence, we hypothesized that the. hard samples shape the bottom layers - closer to the input -- and easy samples shape the top layers -. closer to the output.\nMotivated by the above hypothesis, we analyzed the sample importance in a 3 layer ReLU networl. on two standard datasets. The results reveal several interesting facts about the sample importance ir easy and hard samples:\n2. Easy and hard samples impact the parameters in different layers. Easy samples impact have larger impact on top layer parameters, while hard samples shape the bottom layer parameters.\nIn this section, we are going to introduce the terminology and provide a quantitative measurement of sample importance for a training procedure.\nn L(yi,f(xi,0)) + R(0) i=1\nn V;L(yi,f(xi,0))+ R(O) i=1\nWe define the weight v, as the sample weight. Similar definitions on v, has been proposed i Self-paced learning (SPL)Kumar et al.(2010)\nIn Stochastic Gradient descend (SGD) methods, parameters 0 are updated with a certain step size. n in each iteration with regard to a set of training samples. If we allow different sample weights in different iterations. 
a single update can be written as:

θ_{t+1} = θ_t − η ∑_{i=1}^n v_i^t g_i^t − η r_t      (3)

where g_i^t = ∂L(y_i, f(x_i, θ))/∂θ, evaluated at θ_t, is the gradient of the loss on sample i, and r_t = ∂R(θ)/∂θ, evaluated at θ_t."}, {"section_index": "2", "section_name": "2.2 SAMPLE IMPORTANCE", "section_text": "If we change the weight of a sample i at iteration t, how would such a change impact the parameter training in that iteration? We can answer this question by calculating the first-order derivative of the parameter change Δθ_t with respect to the sample weight:

φ_i^t = ∂Δθ_t / ∂v_i^t = −η g_i^t .

We call φ_i^t the parameter affectibility of the i-th sample at iteration t. φ_i^t is a vector consisting of the parameter affectibility for all parameters in the network; specifically, φ_{i,j}^t is the parameter affectibility for the j-th parameter in the network. φ_i^t reflects the relationship between the parameter change and the different samples.

Typical deep networks contain millions of parameters. Hence, we focus on groups of parameters of interest. We define the i-th sample's importance for the parameters of the d-th layer as:

I_{i,d}^t = ∑_{j ∈ Ω_d} |φ_{i,j}^t|

where Ω_d is the set consisting of the indexes of all parameters in layer d. Hence, the sample's importance for all the parameters in the model is:

I_i^t = ∑_d I_{i,d}^t .

The sum of a sample's importance across all iterations is defined as the overall importance of the sample.

In general, for each sample i, computing I_{i,d}^t allows us to decompose its influence on the model's training across training stages and different layers.

We note that the sample importance is a high-level measurement of a sample's influence on parameters at each iteration of the update. This quantity is not an accurate measurement of the relationship between a sample and the final trained model. Refinements of this concept are discussed in Section 4.

We are going to explore the samples' importance for different layers at different epochs through a series of empirical experiments on two standard datasets. A small sketch of the computation follows.
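For concreteness, the per-sample, per-layer importance can be computed from per-sample gradients as sketched below. This is a schematic reconstruction; the input layout and the use of absolute values to aggregate the affectibilities are assumptions.

```python
import numpy as np

def sample_importance(per_sample_grads, lr):
    """Per-layer sample importance at one SGD iteration.

    per_sample_grads: dict mapping layer name -> (n_samples, n_params_in_layer)
                      array of per-sample loss gradients g_i^t
    lr:               learning rate eta
    Returns a dict layer -> (n_samples,) importance I_{i,d}^t and the totals I_i^t.
    """
    layer_importance = {}
    for layer, g in per_sample_grads.items():
        phi = -lr * g                                 # parameter affectibility
        layer_importance[layer] = np.abs(phi).sum(axis=1)
    total = np.sum(list(layer_importance.values()), axis=0)
    return layer_importance, total
```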
"}, {"section_index": "3", "section_name": "3.1 EXPERIMENT SETUP", "section_text": "Dataset: All analyses are performed on two standard datasets. MNIST (LeCun et al. (1998)) is a benchmark dataset that contains handwritten digit images; each sample is a 28 × 28 image from 10 classes. We used 50000 samples for training and 10000 samples for testing. CIFAR-10 (Krizhevsky & Hinton (2009)) is a dataset of 32 × 32 tiny color images from 10 classes; each sample has 3072 features. We used 50000 samples for training and 10000 samples for testing.

Architecture: We used a multilayer feed-forward neural network with 3 hidden layers of 512 hidden nodes with rectified linear unit (ReLU) activations, a linear output layer, and a softmax layer on top for classification. The weights in each hidden layer are initialized according to Glorot & Bengio (2010). For hyper-parameters, we used a learning rate of 0.1, a batch size of 100, 50 total epochs, and a weight decay of 1e-5. No momentum or learning-rate decay was used. All code is based on the deep learning package Theano (Bergstra et al. (2010); Bastien et al. (2012)).

Firstly, we want to explore whether the sample importance is stable under different initializations. We used three different random seeds to initialize the network parameters and calculated the sample importance every five epochs. We computed the Spearman rank correlation between the samples' importance to the model, I_i^t, for each pair of initializations. This correlation remains high, above 0.9, in all epochs, as shown in Figure 1. This indicates that the sample importance is relatively stable to the initialization of the network. Hence, all the following analyses are based on the results from initialization seed 1. (Details of the training and test error for the chosen model can be viewed in Appendix Figure 9.)

Figure 1: Does initialization affect a sample's importance? Sample importance is preserved between initializations of the network. For each epoch and each pair of initializations, we computed the Spearman correlation of the samples' importance. Across all epochs, the correlation is greater than 0.9 on both MNIST and CIFAR-10. Early epochs show higher consistency between the ranks of sample importance across different initializations.

To better understand and visualize the sample importance, we first calculate the overall sample importance at each epoch, A^t = Σ_{i=1}^n I_i^t, and its decomposition into layers, B_d^t = Σ_{i=1}^n I_{i,d}^t. We show the overall sample importance and its layer decomposition for the two datasets in Figure 2. Firstly, we found that even with a fixed learning rate, the overall sample importance differs across epochs. The output layer always has the largest average sample importance per parameter, and its contribution reaches its maximum in the early training stage and then drops. Each layer contributes differently to the total sample importance. In both MNIST and CIFAR-10, the 2nd layer contributes more than the 3rd layer. In CIFAR-10, the 1st layer provides the largest contribution to the total sample importance, as it contains many more parameters than the other layers. Interestingly, the classes do not all provide the same amount of sample importance.

Figure 2: Which classes, and at which stage, shape the network layers' parameters? Parameters of different layers are learned at different times. Parameters in the output layer are learned mostly during the early training stage. In the lower layers, parameters are learned predominantly during the middle and late training stages. The classes do not contribute equally to the training of the model.

We found that most samples have small sample importance (Appendix Figure 10). To visualize the contribution of different samples, we split the samples based on their total importance into three groups: the top 10%, the top 10-20%, and the remaining 80% of samples. We show the decomposition of the importance contribution in each layer in Figure 3. In MNIST, the top 10% of samples contribute almost all the sample importance. In CIFAR-10, the most important samples contribute more in the lower layers than in the output layer. This result indicates that the top 20% most important samples contribute the majority of the sample importance.

Figure 3: Are all data samples equally important for all layers? The top 20% most important samples contribute the majority of the parameter learning, especially in the lower layers. "L1" to "L3" stand for Layer 1 to Layer 3; "Out" stands for the output layer.
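As an illustration of these aggregates, the sketch below computes A^t, B_d^t, and the three importance groups from a precomputed importance tensor; here a random tensor stands in for the actual I_{i,d}^t values collected during training, and the array layout is our own choice.

```python
import numpy as np

# Stand-in for I_{i,d}^t with shape (epochs, samples, layers).
imp = np.random.RandomState(1).rand(50, 1000, 4)

# Overall sample importance per epoch, A^t, and its layer decomposition, B_d^t.
A = imp.sum(axis=(1, 2))            # shape (epochs,)
B = imp.sum(axis=1)                 # shape (epochs, layers)

# Split samples into top 10%, top 10-20%, and remaining 80% by total importance.
total = imp.sum(axis=(0, 2))        # overall importance I_i, shape (samples,)
order = np.argsort(total)[::-1]     # most important samples first
n = len(order)
groups = {"top10%": order[: n // 10],
          "top10-20%": order[n // 10 : n // 5],
          "other80%": order[n // 5 :]}

# Each group's contribution per layer, summed over epochs (cf. Figure 3).
for name, idx in groups.items():
    print(name, imp[:, idx, :].sum(axis=(0, 1)))   # per-layer contribution
```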
Negative log-likelihood (NLL) is the loss metric we used for the training objective. It has been used to measure the "easiness" of a sample in Curriculum learning (Bengio et al. (2009)) and Self-paced learning (Kumar et al. (2010)). Intuitively, the samples with large NLL should also have large sample importance (SI). However, in our experiments we found that this is not always the case. In Figure 4 we find that 1) NLL and SI become more correlated as training goes on; however, 2) NLL is not predictive of SI. There are many points with high NLL but small SI, and vice versa.

Figure 4: Is sample importance correlated with the negative log-likelihood of a sample? Sample importance is positively correlated with negative log-likelihood, and their correlation becomes higher as training goes on. However, there remain many samples with high NLL and low SI, and vice versa. Left column: correlation between sample importance and negative log-likelihood for all samples across epochs. Right column: scatter plot of the NLL in the last epoch against the all-epoch sample importance for each sample.

To better visualize the importance of different samples, we provide three representative clusters of samples for each dataset. In MNIST, we clustered all digit "5" samples into 20 clusters based on their epoch-specific, layer-specific sample importance. In CIFAR-10, we clustered all "horse" samples into 30 clusters using the same features as for MNIST. The k-means algorithm is used for clustering.
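A minimal sketch of this clustering step, assuming the importance values have already been collected into a (samples × epochs × layers) tensor for one class; random values stand in for them here, and the cluster count matches the MNIST setting above.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the importance tensor of one class, e.g. all digit "5" samples.
rng = np.random.RandomState(0)
imp_class = rng.rand(500, 50, 4)     # (samples, epochs, layers)

# Feature vector per sample: its epoch- and layer-specific importance profile.
features = imp_class.reshape(len(imp_class), -1)

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(features)

# Cluster sizes; in our data, small clusters tend to collect the hard samples.
print(np.bincount(kmeans.labels_, minlength=20))
```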
MNIST: In Figure 5, we show 3 example clusters of digit "5". In the cluster of easy samples, where the NLL converges very fast, most of the sample importance is concentrated in the first few epochs and in the output-layer parameters. The cluster of medium samples shows slower NLL convergence than the easy cluster; its biggest impact comes in the middle training stage and more towards the bottom-layer parameters. Hard samples hardly converge, even during the late stage of training. As training goes on, their sample importance for the bottom-layer parameters becomes larger.

Figure 5: When and where does an MNIST sample make the biggest impact? For "easy" samples, the biggest impact is on the output layer and during the early training stage. As a sample's difficulty increases (medium and hard), the biggest impact moves to lower layers and to the late training stage. The three clusters shown contain 2950 (easy), 65 (medium), and 14 (hard) samples. Each row is a sample cluster. In each row, from left to right: example images in the cluster; average sample importance and its layer-wise decomposition across epochs; a boxplot of the average training negative log-likelihood across epochs.

CIFAR-10: In Figure 6, we show 3 example clusters of the class "horse". We observe very similar patterns of sample importance over time as in the MNIST examples for the easy, medium, and hard clusters. Compared to MNIST, all three clusters in CIFAR-10 have a very large impact on the parameters of the bottom layer. We note that CIFAR-10 has almost 4 times as many parameters in the first layer (3073 × 512 ≈ 1574k) as MNIST (785 × 512 ≈ 401k).

Figure 6: When and where does a CIFAR-10 sample make the biggest impact? For "easy" samples, the biggest impact is on the first layer during the early training stage. As a sample's difficulty increases (medium and hard), the biggest impact moves to lower layers and to the late training stage. The three clusters shown contain 1331 (easy), 284 (medium), and 19 (hard) samples. Each row is a sample cluster. In each row, from left to right: example images in the cluster; average sample importance and its layer-wise decomposition across epochs; a boxplot of the average training negative log-likelihood across epochs."}, {"section_index": "4", "section_name": "3.6 BATCH ORDER AND SAMPLE IMPORTANCE", "section_text": "From the observations in the empirical analysis of sample importance, we know that the time (iteration) and place (layer) of a sample's impact vary according to its "easiness". We wanted to know whether constructing batches based on sample importance or negative log-likelihood would make a difference in training. Hence, we designed an experiment to explore how different batch constructions influence training. We note that the information used to structure the batches (negative log-likelihood and sample importance) was obtained from a full training run.

We split all 50,000 samples into b = 500 batch subsets {B_1, B_2, ..., B_b}, each of size |B_j| = 100. In our experiment, each training sample must be in exactly one batch; there is no intersection between batches. During training, in each epoch, we update the parameters with each batch in order 1, 2, ..., b iteratively.
We used seven different batch construction methods in this experiment (a code sketch follows the list):

1. Rand: Randomly constructed batches. All 50k samples are randomly split into b batches before training. The batches and their order stay fixed during training.

2. NLO: Negative Log-likelihood Order. We sort all samples by their final NLL from low to high and construct the batches from the sorted sequence: the first 100 samples with the lowest NLL form B_1, samples 101 to 200 form B_2, and so on. Hence, during training, the batches with small NLL are trained first.

3. RNLO: Reverse Negative Log-likelihood Order. We construct the batches as in NLO, but during training we update them in reverse order B_b, B_{b-1}, ..., B_1. Hence, the batches with large NLL are trained first.

4. NLM: Negative Log-likelihood Mixed. We sort all samples by their final NLL from low to high; the sample at position i of the sorted sequence is put into batch j = (i mod b) + 1. This ordering constructs batches out of samples with diverse levels of NLL.

5. SIO: Sample Importance Order. Similar to NLO, except that we sort all samples by their total sample importance over all epochs, from low to high. Hence, batches with small sample importance are trained first.

6. RSIO: Reverse Sample Importance Order. We construct the batches as in SIO, but during training we update them in reverse order B_b, B_{b-1}, ..., B_1. Hence, the batches with large sample importance are trained first.

7. SIM: Sample Importance Mixed. Similar to NLM, but we sort the samples based on overall sample importance. Thus, batches contain samples with diverse sample importance.
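The sketch below illustrates these constructions; the NLL-based and importance-based variants differ only in which per-sample score vector is passed in, and the scores themselves would come from a completed training run. Function and variable names are our own.

```python
import numpy as np

def make_batches(scores, n_batches=500, scheme="NLM", seed=0):
    """Build fixed batches from per-sample scores (final NLL or total SI)."""
    order = np.argsort(scores)                     # sorted from low to high
    if scheme in ("NLO", "SIO"):                   # homogeneous, easy first
        return np.array_split(order, n_batches)
    if scheme in ("RNLO", "RSIO"):                 # same batches, reversed order
        return np.array_split(order, n_batches)[::-1]
    if scheme in ("NLM", "SIM"):                   # sorted position i -> batch i mod b
        return [order[j::n_batches] for j in range(n_batches)]
    rng = np.random.RandomState(seed)              # "Rand"
    return np.array_split(rng.permutation(len(scores)), n_batches)

# Toy check: mixed batches span the whole score range, ordered ones do not.
scores = np.linspace(0.0, 10.0, 50000)
print(scores[make_batches(scores, scheme="NLO")[0]].ptp())   # ~0.02: homogeneous
print(scores[make_batches(scores, scheme="NLM")[0]].ptp())   # ~10.0: diverse
```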
We performed five different runs (with different random initializations) on MNIST and CIFAR-10. The results are shown in Figure 7. From the results, we found that: 1) In both MNIST and CIFAR-10, Rand, SIM, and NLM have the lowest test error of all the methods. This indicates that diverse batches are helpful for training. 2) NLO and SIO have the worst performance on CIFAR-10; their training error even goes up after the early stage. RNLO and RSIO have the same batch constructions as NLO and SIO, but their performance is drastically different. This indicates that the order of the batches during training is important. Furthermore, training on easy samples first and hard samples later seems to be counter-productive.

Figure 7: Does organizing batches by "easiness" affect training? When batches are constructed with homogeneous easiness, training performance becomes worse; batches with mixed easiness have lower test error. The solid color line represents the mean over 5 runs. The error bar indicates the standard error over 5 runs.

To better understand the impact of the different batch constructions, we performed principal component analysis on the learned parameters at each epoch (Figure 8). In MNIST, the impact of the batch construction is not very significant. In CIFAR-10, the batch construction, and even the order of batch training, has a large impact on training.

Figure 8: Do parameters converge differently under different batch constructions? In MNIST, the convergence paths for all batch constructions are very similar. In CIFAR-10, the batch constructions with mixed easiness (Rand, NLM, SIM) follow a very different convergence path from all the other methods. Notably, we found that even with the same batch constructions in reversed order (NLO vs. RNLO, SIO vs. RSIO), the parameters converge to different points. Each dotted line shows the path of the first two principal components of all parameters across epochs. Note that in CIFAR-10 the paths of Rand, NLM, and SIM are very similar and overlap in the plot.
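A sketch of this visualization step, assuming a flattened parameter vector was saved at every epoch for each batch-construction scheme; the random-walk snapshots below are stand-ins for the real training paths.

```python
import numpy as np
from sklearn.decomposition import PCA

# One (epochs, n_params) snapshot matrix per scheme; fake paths for illustration.
rng = np.random.RandomState(0)
schemes = ["Rand", "NLO", "RNLO", "NLM", "SIO", "RSIO", "SIM"]
snapshots = {s: rng.randn(50, 10000).cumsum(axis=0) for s in schemes}

# Fit PCA jointly so all paths share one coordinate system, then project each.
pca = PCA(n_components=2).fit(np.vstack(list(snapshots.values())))
for name, path in snapshots.items():
    pc = pca.transform(path)               # (epochs, 2) path in the PC1/PC2 plane
    print(name, "start:", pc[0].round(2), "end:", pc[-1].round(2))
```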
Our experimental results point to a different conclusion from Curriculum Learning and Self-paced learning, where easy samples are trained on before introducing hard samples. We found that ordering the batches, whether from hard to easy or from easy to hard, seems to hinder learning. Having hard samples mixed in with the easy ones in each batch helps training.

The results also show that we want to learn from the hard samples in early epochs and "see" hard samples more frequently, even if their major impact on the parameters comes during the late stage. As hard examples are few compared to easy samples, and hard examples need a longer time to train, we want to mix the hard samples into each batch, so as to start learning from those samples early and to learn from them longer."}, {"section_index": "5", "section_name": "EXTENSIONS OF SAMPLE IMPORTANCE", "section_text": "We calculated the sample importance at each iteration of stochastic gradient descent. However, this quantity only reflects the impact on the change in the parameters within each iteration. The influence of a sample at a particular iteration can accumulate through the updates and impact the final model. Here, we derive the exact calculation of a sample's influence on the model. We rewrite the objective (2) from Section 2 here:

$$\min_{\theta} \sum_{i=1}^{n} v_i L(y_i, f(x_i, \theta)) + R(\theta)$$

Here, we deem the sample weight v_i fixed across all iterations. The update rule for stochastic gradient descent in each iteration is:

$$\theta^{t+1} = \theta^{t} - \eta \sum_{i=1}^{n} v_i g_i^{t} - \eta r$$

The derivative of θ^{t+1} with respect to the sample weight v_i is:

$$\frac{\partial \theta^{t+1}}{\partial v_i} = \frac{\partial \theta^{t}}{\partial v_i} - \eta g_i^{t} - \eta H(\theta^{t}) \frac{\partial \theta^{t}}{\partial v_i}$$

where H(θ^t) is the Hessian matrix of the objective in (2) with regard to all parameters at iteration t. If we iterate the updates until convergence, then we can assume that θ^T is a fixed point, θ* = θ^{T+1} = θ^T, and we obtain:

$$\frac{\partial \theta^{*}}{\partial v_i} = \frac{\partial \theta^{*}}{\partial v_i} - \eta g_i - \eta H(\theta^{*}) \frac{\partial \theta^{*}}{\partial v_i}$$

Hence, the derivative of the parameters of the final model with regard to a sample weight is:

$$\frac{\partial \theta^{*}}{\partial v_i} = -H(\theta^{*})^{-1} g_i \quad (4)$$

Equation (4) indicates that we can calculate the sample-specific impact on the final trained model by using the parameters learned at the convergence point. In deep learning methods, due to early stopping, a fixed point might not be achieved, and Equation (4) might not be accurate.

For any target quantity T(θ*) that depends on the final trained parameters θ*, we can calculate the impact of a particular sample on that target as:

$$\frac{\partial T}{\partial v_i} = \frac{\partial T}{\partial \theta^{*}} \cdot \frac{\partial \theta^{*}}{\partial v_i}$$

and, for a set of samples S_c, as the sum of the individual impacts:

$$\frac{\partial T}{\partial v_{S_c}} = \sum_{i \in S_c} \frac{\partial T}{\partial \theta^{*}} \cdot \frac{\partial \theta^{*}}{\partial v_i}$$

We note that evaluating the exact impact of a sample, as shown above, is computationally cumbersome for all but the simplest models.
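For intuition, the following sketch evaluates Equation (4) exactly for a small L2-regularized logistic regression, where the Hessian can be formed explicitly. The data, the plain gradient-descent solver, and the 1/n scaling of the loss are illustrative choices; for larger models one would approximate the inverse-Hessian-vector product instead, e.g. with conjugate gradient.

```python
import numpy as np

def fit_logreg(X, y, lam=1e-3, iters=2000, eta=0.5):
    """Minimize (1/n) sum_i L_i(theta) + (lam/2) ||theta||^2 by gradient descent.

    All sample weights v_i are 1 here, so this is the unweighted objective (2).
    """
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        theta -= eta * (X.T @ (p - y) / len(y) + lam * theta)
    return theta

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(float)
theta = fit_logreg(X, y)

# Equation (4): d theta* / d v_i = -H(theta*)^{-1} g_i at the converged solution.
p = 1.0 / (1.0 + np.exp(-X @ theta))
H = (X.T * (p * (1.0 - p))) @ X / len(y) + 1e-3 * np.eye(X.shape[1])
i = 0                                     # influence of the first sample
g_i = (p[i] - y[i]) * X[i] / len(y)       # gradient of sample i's loss term
print(-np.linalg.solve(H, g_i))
```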
"}, {"section_index": "6", "section_name": "5 DISCUSSION", "section_text": "Samples' impact on a deep network's parameters varies across the stages of training and across the network's layers. In our work, we found that easy samples predominantly shape the parameters of the top layers at the early training stage, while hard samples predominantly shape the parameters of the bottom layers at the late training stage. Our experiments show that it is important to mix hard samples into different batches, rather than keeping them together in the same batch and away from other examples.

There are many future extensions of the current work. Firstly, we want to expand our sample importance analysis to different deep learning architectures, such as convolutional neural networks and recurrent neural networks. Secondly, we want to use the sample importance as guidance to extract a minimal subset of samples that is sufficient to achieve performance comparable to a network trained on the full dataset."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral presentation.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.

M Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pp. 1189-1197, 2010.

Andrew Saxe, Pang W Koh, Zhenghao Chen, Maneesh Bhand, Bipin Suresh, and Andrew Y Ng. On random weights and unsupervised feature learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1089-1096, 2011.

Roy T St Laurent and R Dennis Cook. Leverage and superleverage in nonlinear regression. Journal of the American Statistical Association, 87(420):985-990, 1992."}, {"section_index": "8", "section_name": "APPENDIX", "section_text": "Figure 9: The training and test error on MNIST (first row) and CIFAR-10 (second row). The left column shows the average class-specific negative log-likelihood.

Figure 10: Histogram of total sample importance."}]