r1w7Jdqxl
[{"section_index": "0", "section_name": "COLLABORATIVE DEEP EMBEDDING\nVIA DUAL NETWORKS", "section_text": "Yilei Xiong & Dahua Lin\nDepartment of Information Engineering\nThe Chinese University of Hong Kong\n{niu.haoying, cheng. jiefeng, 1i.zhenguo} @huawei.com\nDespite the long history of research on recommender systems, current approaches\nstill face a number of challenges in practice, e.g. the difficulties in handling new\nitems, the high diversity of user interests, and the noisiness and sparsity of ob-\nservations. Many of such difficulties stem from the lack of expressive power to\ncapture the complex relations between items and users. This paper presents a\nnew method to tackle this problem, called Collaborative Deep Embedding. In\nthis method, a pair of dual networks, one for encoding items and the other for\nusers, are jointly trained in a collaborative fashion. Particularly, both networks\nproduce embeddings at multiple aligned levels, which, when combined together,\ncan accurately predict the matching between items and users. Compared to existing\nmethods, the proposed one not only provides greater expressive power to capture\ncomplex matching relations, but also generalizes better to unseen items or users.\nOn multiple real-world datasets, this method outperforms the state of the art."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "What do consumers really want? \u2014 this is a question to which everyone wishes to have an answer\nOver the past decade, the unprecedented growth of web services and online commercial platforms\nsuch as Amazon, Netflix, and Spotify, gives rise to a vast amount of business data, which contain\nvaluable information about the customers. However, \u201cdata don\u2019t speak for themselves\u201d. To accurately\npredict what the customers want, one needs not only the data, but also an effective means to extract\nuseful messages therefrom.\nThere has been extensive study on recommender systems. Existing methods roughly fall into two\ncategories, namely content-based filtering (Pazzani & Billsus||2007) and collaborative filtering (Mnih\n\n& Salakhutdinov| 2008} Hu et al 2008 (2009). The former focuses on extracting relevant\neatures fr\n\n\u2018om the content, while the latter attempts to exploit the common interest among groups of\n\nusers. In recent efforts, hybrid methods (Agarwal & Chen||2009||Van den Oord et al.) /2013) that\n\ncombine both aspects have also been developed\nWhereas remarkable progress has been made on this topic, the state of the art remains far fror\nsatisfactory. The key challenges lie in several aspects. First, there is a large semantic gap between th\ntrue cause of a matching and what we observe from the data. For example, what usually attracts a boo\nconsumer is the implied emotion that one has to feel between the lines instead of the occurrence\nof certain words. It is difficult for classical techniques to extract such deep meanings from th\nobservations. Second, the cold-start issue, namely making predictions for unseen items or user\nhas not been well addressed. Many collaborative filtering methods rely on the factorization of th\nmatching matrix. Such methods implicitly assume that all the users and items are known in advance\nand thus are difficult to be applied in real-world applications, especially online services."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The success of deep learning brings new inspiration to this task. 
In a number of areas, including image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), and natural language understanding (Socher et al., 2011), deep learning techniques have substantially pushed forward the state of the art. The power of deep networks in capturing complex variations and bridging semantic gaps has been repeatedly shown in previous studies. However, deep models have primarily been used for classification or regression, e.g. translating images to sentences. How deep networks can be used to model cross-domain relations remains an open question.

In this work, we aim to explore deep neural networks for learning the matching relations across two domains, with our focus placed on the matching between items and users. Specifically, we propose a new framework called Collaborative Deep Embedding, which comprises a pair of dual networks, one for encoding items and the other for users. Each network contains multiple embedding layers that are aligned with their dual counterparts of the other network. Predictions can then be made by coupling these embeddings. Note that unlike a conventional network, the dual networks are trained on two streams of data. In this paper, we devise an algorithm that can jointly train both networks using dual mini-batches. Compared to previous methods, this method not only narrows the semantic gap through a deep modeling architecture, but also provides a natural way to generalize: new items and new users can be encoded by the trained networks, just like those present in the training stage."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Existing methods for recommendation roughly fall into two categories: content-based methods (Pazzani & Billsus, 2007) and collaborative filtering (CF) (Mnih & Salakhutdinov, 2008; Hu et al., 2008). Specifically, content-based methods rely primarily on feature representations of the content, in which recommendations are often made based on feature similarity (Slaney et al., 2008). Following this, there are also attempts to incorporate additional information, such as meta-data of users, to further improve the performance (McFee et al., 2012). Instead, collaborative filtering exploits the interaction between users and items. A common approach to CF is to derive latent factors of both users and items through matrix factorization, and measure the degree of matching by their inner products. Previous studies showed that CF methods tend to have higher recommendation accuracy than content-based methods, as they directly target the recommendation task. However, practical use of CF is often limited by the cold-start problem: it is difficult to recommend items without a sufficient amount of use history. Issues like this motivated hybrid methods (Agarwal & Chen, 2009; Van den Oord et al., 2013) that combine both aspects of information, which showed encouraging improvement. Our exploration is also along this line.

Despite the progress of both families of methods, the practical performance of the state of the art still leaves a lot to be desired. This, to a large extent, is due to the lack of capability of capturing complex variations in interaction patterns. Recently, deep learning (Bengio, 2009) has emerged as an important technique in machine learning. In a number of successful stories (Krizhevsky et al., 2012; Hinton et al., 2012), deep models have demonstrated remarkable representation power in capturing complex patterns.
This power has been exploited by some recent work for recommendation. Van den Oord et al. (2013) apply deep learning to music recommendation. They use the latent item vectors learned by CF as ground truth to train a deep network for extracting content features, obtaining a considerable performance gain. However, the latent vectors for known users and items are not improved. Wang & Wang (2014) proposed an extension to this method, which concatenates both the CF features and the deep features, resulting in slight improvement.

Wang & Blei (2011) showed that CF and topic modeling, when combined, can benefit each other. Inspired by this, Wang et al. (2015) proposed Collaborative Deep Learning (CDL), which incorporates CF and deep feature learning with a combined objective function. This work represents the latest advances in recommendation methods. Yet, its performance is still limited by several issues, e.g. the difficulties in balancing diversified objectives and the lack of effective methods for user encoding. An important aspect that distinguishes our work from CDL and other previous methods is that it encodes both items and users through a pair of deep networks that are jointly trained, which substantially enhances the representation power on both sides. Moreover, the objective function of our learning framework directly targets the recommendation accuracy, which also leads to better performance.

On a number of real-world tasks, the proposed method yields significant improvement over the current state of the art. It is worth stressing that whereas our focus is on the matching between items and users, Collaborative Deep Embedding is a generic methodology, which can be readily extended to model other kinds of cross-domain relations.

At the heart of a recommender system is a matching model, namely, a model that can predict whether a given item matches the interest of a given user. Generally, this can be formalized as below. Suppose there are m users and n items, respectively indexed by i and j. Items are usually associated with inherent features, e.g. the descriptions or contents. Here, we use x_j to denote the observed features of the j-th item. However, inherent information for users is generally very limited and often irrelevant. Hence, in most cases, users are primarily characterized by their history, i.e. the items they have purchased or rated. Specifically, the user history can be partly captured by a matching matrix R in {0,1}^{m x n}, where R(i, j) = 1 indicates that the i-th user purchased the j-th item and gave a positive rating. Note that R is often an incomplete reflection of the user interest: it is not uncommon that a user does not purchase or rate an item that he/she likes."}, {"section_index": "4", "section_name": "3.1 DUAL EMBEDDING", "section_text": "To motivate our approach, we begin with a brief revisit of collaborative filtering (CF), which is widely adopted in practical recommender systems. The basic idea of CF is to derive vector representations for both users and items by factorizing the matching matrix R. A representative formulation in this family is the Weighted Matrix Factorization (WMF) (Hu et al., 2008), which adopts an objective function as below:

$$\sum_{i}\sum_{j} c_{ij}\left(R_{ij} - \mathbf{u}_i^T \mathbf{v}_j\right)^2 + \lambda_u \sum_i \|\mathbf{u}_i\|^2 + \lambda_v \sum_j \|\mathbf{v}_j\|^2.$$

Here, u_i and v_j denote the vector representations of the i-th user and the j-th item, c_ij the confidence coefficient of an observed entry, and lambda_u, lambda_v the regularization coefficients.

Underlying such methods lies a common assumption, namely, that all users and items are known a priori. As a result, they face fundamental difficulties when handling new items and new users.

Encoding Networks. In this work, we aim to move beyond this limitation by exploring an alternative approach. Instead of pursuing the embeddings of a given set of items and users, our approach jointly learns a pair of encoding networks, respectively for items and users. Compared to CF, the key advantage of this approach is that it is generalizable by nature. When new items or new users come, their vector embeddings can be readily derived using the learned encoders.

Generally, the items can be encoded based on their own inherent features, using, for example, an auto-encoder. The key question here, however, is how to encode users, which, as mentioned, have no inherent features. Again, we revisit conventional CF methods such as WMF and find that in these methods, the user representations can be expressed as:

$$\mathbf{u}_i = \arg\min_{\mathbf{u}} \sum_{j} c_{ij}\left(R_{ij} - \mathbf{u}^T\mathbf{v}_j\right)^2 + \lambda_u \|\mathbf{u}\|^2 = \left(V C_i V^T + \lambda_u I\right)^{-1} V C_i \mathbf{r}_i.$$

Here, V = [v_1, ..., v_n] is a matrix comprised of all item embeddings, each column for one item; r_i is the i-th row of R treated as a column vector, which represents the history of the i-th user; and C_i = diag(c_{i1}, ..., c_{in}) captures the confidence weights.
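To make the closed-form update above concrete, here is a minimal NumPy sketch (the function name and interface are ours, not the paper's); solving the linear system rather than forming an explicit inverse is the standard numerically stable choice:

```python
import numpy as np

def wmf_user_embedding(V, r_i, c_i, lam_u):
    """Closed-form WMF user update: u_i = (V C_i V^T + lam_u I)^{-1} V C_i r_i.

    V     : (d, n) item embeddings, one column per item
    r_i   : (n,)   i-th row of the matching matrix R (the user's history)
    c_i   : (n,)   confidence weights c_{i1}, ..., c_{in}
    lam_u : scalar regularization coefficient
    """
    d = V.shape[0]
    VC = V * c_i                       # V C_i, using that C_i is diagonal
    A = VC @ V.T + lam_u * np.eye(d)   # V C_i V^T + lam_u I
    b = VC @ r_i                       # V C_i r_i
    return np.linalg.solve(A, b)       # linear solve, no explicit inverse
```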
The analysis above reveals that u_i is a linear transform of r_i, i.e. u_i = W_u r_i, where the transform matrix W_u depends on the item embeddings V. This motivates our idea of user encoding, that is, to use a deep neural network instead of the linear transform above, as

$$\mathbf{u}_i = g(\mathbf{r}_i; \mathbf{W}_u),$$

where g denotes a nonlinear transform based on a deep network with parameters W_u. As we will show in our experiments, by drawing on the expressive power of deep neural networks, the proposed way of user encoding can substantially improve the prediction accuracy.

Figure 1: Three different designs of the dual networks; a circled dot indicates dot product and a circled plus indicates summation. (a) The basic design adopts the MLP structure for each network. (b) The multi-level design integrates the dot products of embeddings at different levels to produce the prediction. (c) In the branching design, the embeddings (except those of the top level) used in the dot products are produced by transform branches. In this way, the main abstraction paths won't be directly twisted.

Overall Formulation. By coupling an item-network, denoted by f(x_j; W_v), and a user-network g, as introduced above, we can predict the matching of any given pair of user and item based on the inner product of their embeddings, as <f(x; W_v), g(r; W_u)>. The inputs to these networks include x, the inherent feature of the given item, and r, the history of the given user on a set of reference items. With both encoding networks, we formulate the learning objective as follows:

$$\min_{\mathbf{W}_v, \mathbf{W}_u} \sum_{i}\sum_{j} c_{ij}\left(R_{ij} - \left\langle f(\mathbf{x}_j; \mathbf{W}_v),\; g(\mathbf{r}_i; \mathbf{W}_u)\right\rangle\right)^2.$$

Here, X = [x_1, ..., x_n] denotes the input features of all reference items. This formulation differs from previous ones in two key aspects: (1) Both users and items are encoded using deep neural networks. The learning objective above encourages the cooperation of both networks, such that the coupling of both sides yields the highest accuracy. Hence, the user-network parameters W_u depend on the item embeddings, and likewise for the item-network. (2) The learning task is to estimate the parameters of the encoding networks.
Once the encoding networks are learned, they encode users and items in a uniform way, no matter whether they were seen during training. In other words, new users and new items are no longer second-class citizens: they are encoded in exactly the same way as those in the training set.

Comparison with CDL. The Collaborative Deep Learning (CDL) recently proposed by Wang et al. (2015) was another attempt to tackle the cold-start issue. This method leverages the item features by aligning the item encoder with the embeddings resulting from matrix factorization. In particular, the objective function is given as follows:

$$\sum_{i,j} c_{ij}\left(R_{ij} - \mathbf{u}_i^T\mathbf{v}_j\right)^2 + \lambda_v \sum_j \left\|\mathbf{v}_j - f_e(\tilde{\mathbf{x}}_j, \theta)\right\|^2 + \lambda_n \sum_j \left\|\mathbf{x}_j - f_r(\tilde{\mathbf{x}}_j, \theta)\right\|^2 + \lambda_u \sum_i \|\mathbf{u}_i\|^2 + \lambda_w \|\theta\|^2.$$

Here, a Stacked Denoising Autoencoder (SDAE) (Vincent et al., 2010) with parameters theta is used to encode the items, based on the noisy versions of their features. Compared to our formulation, CDL has several limitations: (1) The objective is to balance the SDAE reconstruction error and the matching accuracy, which does not necessarily lead to improved recommendation; tuning this balance also turns out to be tricky. (2) Only items are encoded, while the representations of the users are still obtained by matrix factorization. As a result, its expressive power in capturing user interest remains limited. (3) There are inconsistencies between known items and new ones: the embedding of known items results from a tradeoff between the matching accuracy and the fidelity to SDAE features, while the embedding of new items is purely based on SDAE encoding.

Our model consists of two networks, namely the item-network f and the user-network g. We went through a progressive procedure in designing their architectures, obtaining three different designs, from the basic design, through the multi-level design, to the multi-level branching design. Each new design was motivated by the observation of certain limitations in the previous version.

The basic design, as shown in Figure 1(a), adopts the multilayer perceptron as the basic architecture, using tanh as the nonlinear activation function between layers (the choice of tanh is based on empirical comparison). The top layer of the item-network produces a vector f(x_j; W_v) for each item, while that of the user-network produces a dual vector g(r_i; W_u) for each user. During training, the loss layer takes their inner products and compares them with the ground truth R(i, j).

Each layer in these networks generates a vector representation. We observe that representations from different layers are complementary. Representations from lower layers tend to be closer to the inputs and preserve more information, while those from higher layers focus on deeper semantics. The representations from these levels have their respective values, as different users tend to focus on different aspects of an item. Following this intuition, we reach a multi-level design, as shown in Figure 1(b). In this design, dot products between dual embeddings at corresponding levels are aggregated to produce the final prediction.

There is an issue with the multi-level design: the output of each intermediate layer actually plays two roles. On one hand, it is the input to the next layer for further abstraction; on the other hand, it also serves as a facet to be matched with the other side. These two roles require different properties of the representations. In particular, for the former role, the representation needs to preserve more information for higher-level abstraction, while for the latter, those parts related to the current level of matching need to be emphasized. To address this issue, we design a multi-level branching architecture, as shown in Figure 1(c). In this design, a matching branch is introduced to transform the representation at each level into a form that is more suitable for matching. This can also be considered as learning an alternative metric to measure the matchness between the embeddings. As we will show in our experiments, this design can considerably improve the prediction accuracy.
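The multi-level branching design can be summarized in code. The following is a hedged PyTorch sketch assuming the paper's three-level dimensions (200, 200, 50); the class, attribute names, and exact layer shapes are ours, not the paper's implementation:

```python
import torch
import torch.nn as nn

class DualNet(nn.Module):
    """Sketch of the multi-level branching design: an item-network f and a
    user-network g with aligned levels; the prediction aggregates the dot
    products of (branch-transformed) embeddings at every level."""
    def __init__(self, dim_item, dim_user, dims=(200, 200, 50)):
        super().__init__()
        def mlp_levels(d_in):
            sizes = [d_in] + list(dims)
            return nn.ModuleList(
                nn.Sequential(nn.Linear(a, b), nn.Tanh())
                for a, b in zip(sizes[:-1], sizes[1:]))
        self.f_levels = mlp_levels(dim_item)   # item-network trunk
        self.g_levels = mlp_levels(dim_user)   # user-network trunk
        # matching branches transform intermediate embeddings before the dot
        # product, so the main abstraction path is not directly twisted
        self.f_branch = nn.ModuleList(nn.Linear(d, d) for d in dims[:-1])
        self.g_branch = nn.ModuleList(nn.Linear(d, d) for d in dims[:-1])

    def forward(self, x, r):
        score = 0.0
        for lvl, (f, g) in enumerate(zip(self.f_levels, self.g_levels)):
            x, r = f(x), g(r)
            if lvl < len(self.f_branch):    # branched matching levels
                fx, gr = self.f_branch[lvl](x), self.g_branch[lvl](r)
            else:                           # top level: embeddings used directly
                fx, gr = x, r
            score = score + (fx * gr).sum(dim=1)
        return score                        # predicted matching for each pair
```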
"}, {"section_index": "5", "section_name": "4 TRAINING WITH DUAL MINI-BATCHES", "section_text": "A distinctive aspect of our training algorithm is the use of dual mini-batches. Specifically, in each iteration, B_v items and B_u users are selected. In addition to the item features and user histories, the corresponding part of the matching matrix R is also loaded and fed to the network. Here, the two batch sizes B_v and B_u can be different, and they should be chosen according to the sparsity of the matching matrix R, such that each dual mini-batch can cover both positive and zero ratings.

During the backward pass, the loss layer that compares the predictions with the ground-truth matchings produces two sets of gradients, respectively for items and users. These gradients are then back-propagated along the respective networks. Note that when the multi-level designs (both with and without branching) are used, each intermediate layer receives gradients from two sources: those from the upper layers and those from the dual network (via the dot-product layer). Hence, the training of one network impacts that of the other.

The entire training procedure consists of two stages: pre-training and optimization. In the pre-training stage, we initialize the item-network with unsupervised training (Vincent et al., 2010) and the user-network randomly. The unsupervised training of the item-network allows it to capture the feature statistics. Then both networks are jointly refined in a layer-by-layer fashion. In particular, we first tune the one-level networks, taking the dot products of their outputs as the predictions. Subsequently, we stack the second layers on top and refine them in a similar way. Empirically, we found that this layer-wise refinement scheme provides better initialization. In the optimization stage, we adopt the SGD algorithm with momentum and use the dual mini-batch scheme presented above. In this stage, the training is conducted in epochs. Each epoch, through multiple iterations, traverses the whole matching matrix R without repetition. The order of choosing mini-batches is arbitrary and is shuffled at the beginning of each epoch. Additional tricks such as dropout and batch normalization are employed to further improve the performance.
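One optimization step under this scheme might look as follows. This is a simplified single-level sketch, assuming a model exposing `embed_items` / `embed_users` heads; the helper name, confidence values, and interface are our assumptions, not the paper's code:

```python
import torch

def dual_minibatch_step(model, optimizer, X_items, R, item_idx, user_idx,
                        c_pos=10.0, c_zero=1.0):
    """One SGD step on a dual mini-batch of B_v items and B_u users, together
    with the corresponding B_u x B_v block of the matching matrix R."""
    x = X_items[item_idx]                 # (B_v, item feature dim)
    r = R[user_idx]                       # (B_u, n) user histories
    target = R[user_idx][:, item_idx]     # (B_u, B_v) ground-truth block

    fx = model.embed_items(x)             # (B_v, d) item embeddings
    gr = model.embed_users(r)             # (B_u, d) user embeddings
    pred = gr @ fx.T                      # (B_u, B_v) pairwise dot products

    # weighted Euclidean loss with higher confidence on positive entries
    weights = c_zero + (c_pos - c_zero) * (target > 0).float()
    loss = (weights * (target - pred).pow(2)).sum()

    optimizer.zero_grad()
    loss.backward()                       # gradients reach BOTH networks
    optimizer.step()
    return loss.item()
```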
"}, {"section_index": "6", "section_name": "5.1 EVALUATION", "section_text": "The performance of a recommender system can be assessed from different perspectives. In this paper, we follow Wang & Blei (2011) and perform the evaluation from the retrieval perspective. Specifically, a fraction of the rating entries are omitted in the training phase, and the algorithms being tested are used to predict those entries. As pointed out by Wang & Blei (2011), because the ratings are implicit feedback (Hu et al., 2008), in which some positive matchings are not reflected, recall is more suitable than precision for measuring the performance. In particular, we use Recall@M averaged over all users as the performance metric. Here, for a certain user, Recall@M is defined as:

$$\text{Recall@}M = \frac{\text{number of items the user likes in the top } M \text{ recommendations}}{\text{total number of items the user likes}}.$$
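As a concrete reading of this metric, a per-user computation can be sketched in Python as follows (our illustration, not the paper's evaluation code):

```python
import numpy as np

def recall_at_m(scores, liked, m):
    """Recall@M for a single user.

    scores : (n_items,) predicted matching scores for this user
    liked  : set of indices of items the user actually likes (held-out)
    m      : size of the recommendation list
    """
    top_m = np.argsort(-scores)[:m]            # indices of the top-M items
    hits = sum(1 for j in top_m if int(j) in liked)
    return hits / len(liked)
```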
Following Wang & Blei (2011), we consider two tasks, in-matrix prediction and out-matrix prediction. Specifically, we divide all items into two disjoint parts, known and unknown, by the ratio of 9 to 1. The in-matrix prediction task only considers known items. For this task, all rating entries are split into three disjoint sets, training, validation, and testing, by the ratio 3 : 1 : 1. It is ensured that all items in the validation and testing sets have appeared in the training stage (just that part of their ratings were omitted). The out-matrix prediction task is to make predictions for the items that are completely unseen in the training phase. This task tests the performance of generalization and the capability of handling the cold-start issue.

1. CiteULike, constructed by Wang & Blei (2011), provides a list of researchers and the papers that they are interested in. Each paper comes with a text document that comprises both the title and the abstract. In total, it contains 5,551 researchers (as users) and 16,980 papers (as items) with 0.22% density. The task is to predict the papers that a researcher would like.

2. MovieLens+Posters is constructed based on the MovieLens 20M Dataset (Harper & Konstan, 2016), which provides about 20M user ratings on movies. For each movie, we collect a movie poster from TMDb and extract a visual feature therefrom, using a convolutional neural network (Szegedy et al., 2016), as the item feature. Removing all those movies without posters and the users with fewer than 10 ratings, we obtain a dataset that contains 76,531 users and about 14,100 items with 0.24% density. In this dataset, all 5-star ratings are considered as positive matchings.

3. Ciao is organized by Tang et al. (2012) from a product review site, where each product comes with a series of reviews. The reviews for each product are concatenated to serve as the item content. We removed those items with fewer than 5 rated users and the users with fewer than 10 ratings. This results in a dataset with 4,663 users and 12,083 items with 0.25% density. All ratings of 4.0 or above (the rating scale ranges from 1.0 to 5.0) are regarded as positive matchings."}, {"section_index": "7", "section_name": "5.2 COMPARISON WITH OTHER METHODS", "section_text": "We compared our method, which we refer to as DualNet, with two representative methods from previous work: (1) Weighted Matrix Factorization (WMF) (Hu et al., 2008), a representative method for collaborative filtering (CF), and (2) Collaborative Deep Learning (CDL) (Wang et al., 2015), a hybrid method that combines deep encoding of the items with CF and represents the latest advances in recommendation techniques.

On each dataset, we chose the design parameters for each method via grid search; the parameter combinations that attain the best performance on the validation set are used. For our DualNet method, we adopt a three-level branching configuration, where the embedding dimensions of each network, from bottom to top, are set to 200, 200, and 50. For WMF, the latent dimension is set to 300 on CiteULike and 450 on the other datasets. For CDL, the best performance is attained when the structure of the SDAE is configured as (2000, 1000, 300), with dropout ratio 0.1; the other design parameters of CDL are set as a = 1.0, b = 0.01, lambda_u = 1, lambda_v = 10, lambda_n = 1000, lambda_w = 0.0005.

Note that on CiteULike, there are two ways to split the data. One is the scheme in Wang et al. (2015), and the other is the scheme in Wang & Blei (2011), which is the one presented in the previous section. Note that in the former scheme, a fixed number of ratings from each user are selected for training. This may result in some testing items being missed in the training set. To provide a complete comparison with prior work, we use both schemes in our experiments, which are respectively denoted as CiteULike1 and CiteULike2.

Table 1: Comparison of performance on three datasets, measured with Recall@M for M = 50, 100, and 200.

           | CiteULike1               | CiteULike2
           | @50     @100    @200     | @50     @100    @200
  WMF      | 22.14%  32.58%  43.65%   | 40.45%  50.28%  59.95%
  CDL      | 25.02%  36.57%  48.32%   | 39.49%  52.02%  64.41%
  DualNet  | 30.41%  41.71%  52.24%   | 41.26%  53.80%  65.21%

           | MovieLens                | Ciao
           | @50     @100    @200     | @50     @100    @200
  WMF      | 37.14%  48.81%  60.25%   | 14.46%  19.66%  26.22%
  CDL      | 38.11%  49.73%  61.00%   | 17.90%  24.55%  32.53%
  DualNet  | 44.95%  59.15%  72.56%   | 17.94%  24.58%  32.52%

Table 2: Comparison for out-matrix predictions on CiteULike.

           | Recall@50 | Recall@100 | Recall@200
  CDL      | 32.18%    | 43.90%     | 56.36%
  DualNet  | 47.51%    | 56.59%     | 66.36%

Table 1 compares the performance of WMF, CDL, and DualNet on all three datasets (four data splitting settings). From the results, we observe: (1) Our proposed DualNet method outperforms both WMF and CDL on all datasets. On certain datasets, the performance gains are substantial. For example, on MovieLens, we obtained average recalls of 44.95%, 59.15%, and 72.56%, respectively, when M = 50, 100, 200. Compared to what CDL achieves (38.11%, 49.73%, and 61.00%), the relative gains are around 18%. On other datasets, the gains are also considerable. (2) The performance gains vary significantly across different datasets, as they are closely related to the relevance of the item features. In particular, when the item features are pertinent to the user interest, we may see remarkable improvement when those features are incorporated; otherwise, the performance gains would be relatively smaller."}, {"section_index": "8", "section_name": "5.3 DETAILED STUDY", "section_text": "We conducted additional experiments on CiteULike to further study the proposed algorithm. In this study, we investigate the performance of out-matrix prediction, the impact of various modeling choices, e.g. multi-level branching, as well as the influence of training tactics.

Out-matrix prediction. As mentioned, the out-matrix prediction task examines an algorithm's capability of handling new items, i.e. those unseen in the training stage. For this task, we compared CDL and DualNet on the CiteULike dataset. WMF is not included here, as it is not able to handle new items. Table 2 shows the results. It can be clearly seen that DualNet outperforms CDL by a notable margin. For example, Recall@50 increases from 32.18% to 47.51%; the relative gain is 47.6%, a very remarkable improvement. The strong generalization performance demonstrated here is, to a large extent, ascribed to our basic formulation, where the encoding networks uniformly encode both known and new items.

Multi-level branching. We compared the three different designs presented in Section 3: the basic design, the multi-level design, and the multi-level branching design. From the results shown in Table 3, we can observe limited improvement of the multi-level design over the basic one.
More significant performance gains are observed when the branching design is introduced. This shows that the branches contribute a lot to the overall performance.

Table 3: Comparison of different network architecture designs on CiteULike.

                          | Recall@10 | Recall@50 | Recall@100
  basic                   | 15.86%    | 38.86%    | 51.03%
  multi-level             | 16.89%    | 39.92%    | 51.26%
  multi-level branching   | 17.43%    | 40.31%    | 51.78%

Noise injection. Sometimes we noticed overfitting during training, i.e. the validation performance gets worse while the training loss is decreasing. To tackle this issue, we inject noise into the inputs, i.e. setting a fraction of input entries to zeros. Generally, we observed that noise injection has little effect on Recall@M for in-matrix predictions when M < 30. However, it can considerably increase the recall for large M values or out-matrix predictions. In particular, on CiteULike, it increases in-matrix Recall@300 from 67.3% to 71.2%, and out-matrix Recall@50 from 38.6% to 47.5%.

Unsuccessful tactics. Finally, we report some tactics that we tried and found not to work. (1) Replacing the weighted Euclidean loss with a logistic loss leads to substantial degradation of the performance (sometimes by up to 20%). Also, when using the logistic loss, we observed severe overfitting. Rendle et al. (2009) proposed Bayesian Personalized Ranking (BPR), which directly targets ranking. We tested this on CiteULike with parameters tuned to obtain the optimal performance. Our experimental results showed that its performance is similar to that of WMF. In particular, the Recall@50, 100, 200 for BPR are respectively 39.11%, 49.16%, 59.96%, while those for WMF are 40.45%, 50.25%, 59.95%.

(2) Motivated by the observation that positive ratings are sparse, we tried a scheme that ignores a fraction of the dual mini-batches that correspond to all-zero ratings, with an aim to speed up the training. Whereas this can reduce the time needed to run an epoch, it takes significantly more epochs to reach the same level of performance. As a result, the overall runtime is even longer.

This paper presented a new method for predicting the interactions between users and items, called Collaborative Deep Embedding. This method uses dual networks to encode users and items respectively. The user-network and item-network are trained jointly, in a collaborative manner, on two streams of data. We obtained considerable performance gains over the state of the art consistently on three large datasets. The proposed method also demonstrated superior generalization performance (on out-matrix predictions). This improvement, from our perspective, is ascribed to three important reasons: (1) the expressive power of deep models for capturing the rich variations in user interests, (2) the collaborative training process that encourages closely coupled embeddings, and (3) an objective function that directly targets the prediction accuracy.

We consider this work a significant step that brings the power of deep models to relational modeling. However, the space of deep relational modeling remains wide open; lots of questions remain yet to be answered.
In future, we plan to investigate more sophisticated network architectures, and extend the proposed methodology to applications that involve more than two domains."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.

Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012.

Andriy Mnih and Ruslan R Salakhutdinov. Probabilistic matrix factorization. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis (eds.), Advances in Neural Information Processing Systems 20, pp. 1257-1264. Curran Associates, Inc., 2008. URL http://papers.nips.cc/paper/3208-probabilistic-matrix-factorization.pdf.

Michael J Pazzani and Daniel Billsus. Content-based recommendation systems. In The Adaptive Web, pp. 325-341. Springer, 2007.

Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pp. 452-461. AUAI Press, 2009.

Francesco Ricci, Lior Rokach, and Bracha Shapira. Introduction to Recommender Systems Handbook. Springer, 2011.

Malcolm Slaney, Kilian Weinberger, and William White. Learning a metric for music similarity. In International Symposium on Music Information Retrieval (ISMIR), 2008.

Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 129-136, 2011.

Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.

J. Tang, H. Gao, and H. Liu. mTrust: Discerning multi-faceted trust in a connected world. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, pp. 93-102. ACM, 2012.

Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643-2651, 2013.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371-3408, 2010.

Hao Wang, Naiyan Wang, and Dit-Yan Yeung. Collaborative deep learning for recommender systems. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1235-1244. ACM, 2015."}]
Hk8rlUqge
[{"section_index": "0", "section_name": "ABSTRACT", "section_text": "We investigate deep generative models that can exchange multiple modalities bi:\ndirectionally, e.g., generating images from corresponding texts and vice versa. Re:\ncently, some studies handle multiple modalities on deep generative models, suct\nas variational autoencoders (VAEs). However, these models typically assume tha\u2019\nmodalities are forced to have a conditioned relation, i.e., we can only generate\nmodalities in one direction. To achieve our objective, we should extract a join\nrepresentation that captures high-level concepts among all modalities and throug!\nwhich we can exchange them bi-directionally. As described herein, we propose\njoint multimodal variational autoencoder (JMVAE), in which all modalities are in.\ndependently conditioned on joint representation. In other words, it models a join\ndistribution of modalities. Furthermore, to be able to generate missing modal.\nities from the remaining modalities properly, we develop an additional method\nJMVAE-KI, that is trained by reducing the divergence between JMVAE\u2019s encode:\nand prepared networks of respective modalities. Our experiments show that out\nproposed method can obtain appropriate joint representation from multiple modal.\nities and that it can generate and reconstruct them more properly than conventiona\nVAEs. We further demonstrate that JMVAE can generate multiple modalities bi\ndirectionally."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In our world, information is represented through various modalities. While images are represente\nby pixel information, these can also be described with text or tag information. People often exchang\nsuch information bi-directionally. For instance, we can not only imagine what \u201ca young female wit\na smile who does not wear glasses\u201d looks like, but also add this caption to a corresponding photo\ngraph. To do so, it is important to extract a joint representation that captures high-level concept\namong all modalities. Then we can bi-directionally generate modalities through the joint repre\nsentations. However, each modality typically has a different kind of dimension and structure, e.g\nimages (real-valued and dense) and texts (discrete and sparse). Therefore, the relations between eac!\nmodality and the joint representations might become high nonlinearity. To discover such relations\ndeep neural network architectures have been used widely for multimodal learning Ngiam et al\n>i TE Srivastava & Salakhutdinov, |2012). The common approach with these models to learn join\nrepresentations is to share the top of hidden layers in modality specific networks. Among ther\n\nenerative approaches using deep Boltzmann machines (DBMs) (Srivastava & Salakhutdinow 2017\nSohn et al.,[2014) offer the important advantage that these can generate modalities bi-directionally\nRecently, variational autoencoders (VAEs) (Kingma & Welling, 2013} [Rezende et al (2014) hav.\nbeen proposed to estimate flexible deep generative models by variational inference methods. Thes:\nmodels use back-propagation during training, so that it can be trained on large-scale and high\ndimensional dataset compared with DBMs with MCMC training. Some studies have addressed t\nhandle such large-scale and high-dimensional modalities on VAEs, but they are forced to model con\nditional distribution (2016). Therefore\nit can only generate modalities in one direction. 
For example, we cannot obtain generated images from texts if we train the likelihood of texts given images. To generate modalities bi-directionally, all modalities should be treated equally under the learned joint representations, which is the same as previous multimodal learning models before VAEs."}, {"section_index": "2", "section_name": "JOINT MULTIMODAL LEARNING WITH DEEP GENERATIVE MODELS", "section_text": "Figure 1: Various images and attributes generated from an input image. We used the CelebA dataset (Liu et al., 2015) to train and test models in this example. Each yellow box corresponds to a different process. All processes are estimated from a single generative model: the joint multimodal variational autoencoder (JMVAE), which is our proposed model.

As described in this paper, we develop a novel multimodal learning model with VAEs, which we call a joint multimodal variational autoencoder (JMVAE). The most significant feature of our model is that all modalities, x and w (e.g., images and texts), are conditioned independently on a latent variable z corresponding to the joint representation, i.e., the JMVAE models a joint distribution of all modalities, p(x, w). Therefore, we can extract a high-level representation that contains all information of the modalities. Moreover, since it models a joint distribution, we can draw samples from both p(x|w) and p(w|x). Because, at this time, the modalities that we want to generate are usually missing, the inferred latent variable becomes incomplete, and generated samples might be collapsed at testing time when missing modalities are high-dimensional and complicated. To prevent this issue, we propose a method of preparing new encoders for each modality, q(z|x) and q(z|w), and reducing the divergence between them and the multimodal encoder q(z|x, w), which we call JMVAE-kl. This contributes to more effective bi-directional generation of modalities, e.g., from face images to texts (attributes) and vice versa (see Figure 1).

The main contributions of this paper are as follows:

- We introduce a joint multimodal variational autoencoder (JMVAE), which is the first study to train a joint distribution of modalities with VAEs.

- We propose an additional method (JMVAE-kl), which prevents generated samples from being collapsed when some modalities are missing.
We experimentally confirm that this method solves this issue.

- We show qualitatively and quantitatively that the JMVAE can extract appropriate joint distributions and that it can generate and reconstruct modalities similarly to or more properly than conventional VAEs.

- We demonstrate that the JMVAE can generate multiple modalities bi-directionally, even if these modalities have completely different kinds of dimensions and structures, e.g., high-dimensional color face images and low-dimensional binary attributes."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "The common approach of multimodal learning with deep neural networks is to share the top of hidden layers in modality-specific networks. Ngiam et al. (2011) proposed this approach with deep autoencoders (AEs) and found that it can extract better representations than single-modality settings. Srivastava & Salakhutdinov (2012) also took this idea but used deep Boltzmann machines (DBMs) (Salakhutdinov & Hinton, 2009). DBMs are generative models with undirected connections based on maximum joint likelihood learning of all modalities. Therefore, this model can generate modalities bi-directionally. Sohn et al. (2014) improved this model to exchange multiple modalities effectively, based on minimizing the variation of information; JMVAE-kl in ours can be regarded as minimizing it with variational learning on parameterized distributions (see Section 3.3).

Recently, VAEs (Kingma & Welling, 2013; Rezende et al., 2014) have been used to train such high-dimensional modalities. Kingma et al. (2014) and Sohn et al. (2015) propose conditional VAEs (CVAEs), which maximize a conditional log-likelihood by variational methods. Many studies are based on CVAEs to train various multiple modalities, such as handwritten digits and labels (Kingma et al., 2014; Sohn et al., 2015), images and degrees of rotation (Kulkarni et al., 2015), face images and attributes (Larsen et al., 2015), and natural images and captions (Mansimov et al., 2015). The main features of CVAEs are that the relation between modalities is one-way and that the latent variable does not contain the information of the conditioned modality, which are unsuitable for our objective.

Pandey & Dukkipati (2016) proposed a conditional multimodal autoencoder (CMMA), which also maximizes the conditional log-likelihood. The difference from CVAEs is that the latent variable is connected directly from the conditional variable, i.e., these variables are not independent. Moreover, this model forces the latent representation from an input to be close to the joint representation from multiple inputs, which is similar to JMVAE-kl. However, the CMMA still considers that modalities are generated in a fixed direction. This is the most different part from ours."}, {"section_index": "4", "section_name": "3. METHODS", "section_text": "This section first introduces the algorithm of VAEs briefly and then proposes a novel multimodal learning model with VAEs, which we call the joint multimodal variational autoencoder (JMVAE).

Given observation variables x and corresponding latent variables z, their generating processes are definable as z ~ p(z) = N(0, I) and x ~ p_theta(x|z), where theta is the model parameter of p. The objective of VAEs is maximization of the marginal distribution p(x), the integral of p_theta(x|z)p(z) over z. Because this distribution is intractable, we instead train the model to maximize the following lower bound on the marginal log-likelihood, L_VAE(x):

$$\log p(\mathbf{x}) \geq -D_{KL}\left(q_\phi(\mathbf{z}|\mathbf{x})\,\|\,p(\mathbf{z})\right) + E_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log p_\theta(\mathbf{x}|\mathbf{z})\right] = \mathcal{L}_{VAE}(\mathbf{x}), \quad (1)$$

where q_phi(z|x) is an approximate distribution of the posterior p(z|x) and phi is the model parameter of q. We designate q_phi(z|x) as the encoder and p_theta(x|z) as the decoder. In Equation 1, the first term represents a regularization and the second a negative reconstruction error.

To optimize the lower bound with respect to the parameters theta, phi, we estimate gradients of Equation 1 using stochastic gradient variational Bayes (SGVB). If we consider q_phi(z|x) as a Gaussian distribution N(z; mu, diag(sigma^2)), where phi = {mu, sigma^2}, then we can reparameterize z ~ q_phi(z|x) as z = mu + sigma * eps, where eps ~ N(0, I). Therefore, we can estimate the gradients of the negative reconstruction term in Equation 1 with respect to theta and phi as

$$\nabla_{\theta,\phi} E_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log p_\theta(\mathbf{x}|\mathbf{z})\right] = E_{\mathcal{N}(\epsilon;0,I)}\left[\nabla_{\theta,\phi} \log p_\theta(\mathbf{x}|\mu + \sigma \odot \epsilon)\right].$$

Because the gradients of the regularization term are solvable analytically, we can optimize Equation 1 with standard stochastic optimization methods.
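As a concrete sketch of Equation 1 and the reparameterization described above, consider the following PyTorch fragment (function names are ours; a Bernoulli decoder is assumed):

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, eps ~ N(0, I); keeps gradients w.r.t. mu, sigma."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def vae_lower_bound(x, x_logits, mu, logvar):
    """Single-sample estimate of L_VAE(x) in Equation (1) with Bernoulli p(x|z):
    E_q[log p(x|z)] - D_KL(q(z|x) || N(0, I)), summed over the batch."""
    log_px = -F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return log_px - kl
```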
Next, we consider an i.i.d. dataset (X, W) = {(x_1, w_1), ..., (x_N, w_N)}, where the two modalities x and w have different kinds of dimensions and structures. Our objective is to generate the two modalities bi-directionally. For that reason, we assume that they are conditioned independently on the same latent concept z, the joint representation. Therefore, we assume their generating processes as z ~ p(z) and x, w ~ p(x, w|z) = p_{theta_x}(x|z) p_{theta_w}(w|z), where theta_x and theta_w represent the model parameters of each independent p. Figure 2(a) shows a graphical model that represents these generative processes. One can see that this models a joint distribution of all modalities, p(x, w). Therefore, we designate this model as a joint multimodal variational autoencoder (JMVAE).

Figure 2: (a) Graphical model of the JMVAE. Gray circles represent observed variables; the white one denotes a latent variable. (b) Two approaches to estimate encoders with a single input, q(z|x) and q(z|w), on the JMVAE: left, make the modalities other than the input modality missing (JMVAE-zero); right, prepare encoders that have a single input and make them close to the JMVAE encoder (JMVAE-kl).

Considering an approximate posterior distribution q_phi(z|x, w), we can estimate a lower bound on the log-likelihood log p(x, w) as follows:

$$\mathcal{L}_{JM}(\mathbf{x},\mathbf{w}) = E_{q_\phi(\mathbf{z}|\mathbf{x},\mathbf{w})}\left[\log \frac{p(\mathbf{x},\mathbf{w},\mathbf{z})}{q_\phi(\mathbf{z}|\mathbf{x},\mathbf{w})}\right] \quad (2)$$

$$= -D_{KL}\left(q_\phi(\mathbf{z}|\mathbf{x},\mathbf{w})\,\|\,p(\mathbf{z})\right) + E_{q_\phi(\mathbf{z}|\mathbf{x},\mathbf{w})}\left[\log p_{\theta_x}(\mathbf{x}|\mathbf{z})\right] + E_{q_\phi(\mathbf{z}|\mathbf{x},\mathbf{w})}\left[\log p_{\theta_w}(\mathbf{w}|\mathbf{z})\right]. \quad (3)$$

Equation 3 has two negative reconstruction terms, which correspond to each modality. As with VAEs, we designate q_phi(z|x, w) as the encoder and both p_{theta_x}(x|z) and p_{theta_w}(w|z) as decoders.

We can apply the SGVB to Equation 3 just as to Equation 1, so that we can parameterize the encoder and decoders as deterministic deep neural networks and optimize them with respect to their parameters theta_x, theta_w, and phi. Because each modality has a different feature representation, we should set different networks for each decoder, p_{theta_x}(x|z) and p_{theta_w}(w|z). The type of distribution and corresponding network architecture depend on the representation of each modality, e.g., Gaussian when the representation of the modality is continuous, and Bernoulli when it is a binary value.

Unlike original VAEs and CVAEs, the JMVAE models a joint distribution of all modalities. In this model, modalities are conditioned independently on a joint latent variable. Therefore, we can extract better representations that include all information of the modalities. Moreover, we can estimate both marginal and conditional distributions bi-directionally, so that we can not only obtain images reconstructed from themselves but also draw texts from corresponding images and vice versa. Additionally, we can extend JMVAEs to handle more than two modalities, such as p(x, w_1, w_2, ...), in the same learning framework.
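A minimal PyTorch sketch of the JMVAE objective in Equation 3 is given below, assuming a Bernoulli decoder for x and a categorical decoder for one-hot w; the layer sizes and names are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JMVAE(nn.Module):
    """Sketch of Equation (3): a joint encoder q(z|x, w) and two independent
    decoders p(x|z) (Bernoulli) and p(w|z) (categorical, one-hot w)."""
    def __init__(self, dim_x, dim_w, dim_z=64, dim_h=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_x + dim_w, dim_h), nn.ReLU())
        self.mu = nn.Linear(dim_h, dim_z)
        self.logvar = nn.Linear(dim_h, dim_z)
        self.dec_x = nn.Sequential(nn.Linear(dim_z, dim_h), nn.ReLU(),
                                   nn.Linear(dim_h, dim_x))  # Bernoulli logits
        self.dec_w = nn.Sequential(nn.Linear(dim_z, dim_h), nn.ReLU(),
                                   nn.Linear(dim_h, dim_w))  # class logits

    def lower_bound(self, x, w):
        h = self.enc(torch.cat([x, w], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        log_px = -F.binary_cross_entropy_with_logits(
            self.dec_x(z), x, reduction="sum")           # E_q[log p(x|z)]
        log_pw = (w * F.log_softmax(self.dec_w(z), dim=1)).sum()
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return log_px + log_pw - kl                      # L_JM(x, w)
```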
"}, {"section_index": "5", "section_name": "3.3 INFERENCE OF MISSING MODALITIES", "section_text": "In the JMVAE, we can extract joint latent features by sampling from the encoder q_phi(z|x, w) at testing time. Our objective is to exchange modalities bi-directionally, e.g., images to texts and vice versa. In this setting, the modalities that we want to sample are missing, so the inputs of such modalities are set to zero (the left panel of Figure 2(b)). The same is true of reconstructing a modality only from itself. This is a natural way in discriminative multimodal settings to estimate samples from unimodal information (Ngiam et al., 2011). However, if missing modalities are high-dimensional and complicated, such as natural images, then the inferred latent variable becomes incomplete and generated samples might collapse.

We propose a method to solve this issue, which we designate as JMVAE-kl. Moreover, we describe the former way as JMVAE-zero to distinguish it. Suppose that we have encoders with a single input, q_{phi_x}(z|x) and q_{phi_w}(z|w), where phi_x and phi_w are parameters. We would like to train them by bringing these encoders close to the encoder q_phi(z|x, w) (the right panel of Figure 2(b)). Therefore, the objective function of JMVAE-kl becomes

$$\mathcal{L}_{JM(\alpha)}(\mathbf{x},\mathbf{w}) = \mathcal{L}_{JM}(\mathbf{x},\mathbf{w}) - \alpha \left[D_{KL}\left(q_\phi(\mathbf{z}|\mathbf{x},\mathbf{w})\,\|\,q_{\phi_x}(\mathbf{z}|\mathbf{x})\right) + D_{KL}\left(q_\phi(\mathbf{z}|\mathbf{x},\mathbf{w})\,\|\,q_{\phi_w}(\mathbf{z}|\mathbf{w})\right)\right], \quad (4)$$

where alpha is a factor that regulates the KL divergence terms.

From another viewpoint, maximizing Equation 4 can be regarded as minimizing the variation of information with variational learning on parameterized distributions (proven and derived in Appendix A). The variation of information, a measure of the distance between two variables, is written as $-E_{p_D(\mathbf{x},\mathbf{w})}[\log p(\mathbf{x}|\mathbf{w}) + \log p(\mathbf{w}|\mathbf{x})]$, where p_D is the data distribution. It is apparent that the variation of information is the sum of two negative conditional log-likelihoods. Therefore, minimizing the variation of information contributes to appropriate bi-directional exchange of modalities. Sohn et al. (2014) also train their model to minimize the variation of information for the same objective as ours; however, they use DBMs with MCMC training.
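Since all encoders are diagonal Gaussians, the extra KL terms of Equation 4 are available in closed form. A hedged sketch follows (function names and the default alpha are ours; alpha is the paper's trade-off factor):

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """D_KL(N(mu_q, var_q) || N(mu_p, var_p)) for diagonal Gaussians."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p).pow(2)) / logvar_p.exp() - 1)

def jmvae_kl_objective(L_jm, joint, only_x, only_w, alpha=0.1):
    """Equation (4): L_JM minus alpha times the KLs pulling the single-input
    encoders q(z|x), q(z|w) toward the joint encoder q(z|x, w).
    `joint`, `only_x`, `only_w` are (mu, logvar) pairs from the encoders."""
    mu, logvar = joint
    kl_x = gaussian_kl(mu, logvar, *only_x)
    kl_w = gaussian_kl(mu, logvar, *only_w)
    return L_jm - alpha * (kl_x + kl_w)
```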
This section presents evaluation of the qualitative and quantitative performance and confirms the JMVAE functionality in practice."}, {"section_index": "6", "section_name": "4.1 DATASETS", "section_text": "MNIST is not a dataset for a multimodal setting. In this work, we used this dataset as a toy problem of multimodal learning. We consider handwriting images and corresponding digit labels as two different modalities. We used 50,000 examples as the training set and the remaining 10,000 as the test set.

CelebA consists of 202,599 color face images and corresponding 40 binary attributes, such as male, eyeglasses, and mustache. In this work, we regard them as two modalities. This dataset is challenging because the two modalities have completely different kinds of dimensions and structures. Beforehand, we cropped the images to squares, resized them to 64 x 64, and normalized them. From the dataset, we chose the 191,899 images in which a face is identifiable by OpenCV and used them for our experiment. We used 90% of the dataset as the training set and the remaining 10% as the test set.

For MNIST, we considered images as x in R^{28x28} and corresponding labels as w in {0,1}^{10}. For the encoder, we prepared two networks, each with two dense layers of 512 hidden units using leaky rectifiers, shared the top of these layers, and mapped them into 64 hidden units. Moreover, we prepared two decoder networks, each with three dense layers of 512 units, and set p(x|z) as a Bernoulli distribution and p(w|z) as a categorical distribution whose output layer is a softmax.

We used warm-up (Bowman et al., 2015; Sønderby et al., 2016), which first forces training only of the negative reconstruction term and then gradually increases the effect of the regularization term, to prevent local minima during early training. We increased this term linearly during the first N_t epochs, as in Sønderby et al. (2016). We set N_t = 200 and trained for 500 epochs on MNIST. Moreover, as in Burda et al. (2015) and Sønderby et al. (2016), we resampled the binarized training values randomly from MNIST for each epoch to prevent over-fitting.
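The warm-up schedule amounts to a scalar weight on the regularization term. A minimal sketch under the linear schedule described above (the function name is ours):

```python
def warmup_weight(epoch, n_t=200):
    """Linear warm-up of the regularization term: 0 -> 1 over the first N_t
    epochs, then held at 1. Training maximizes
    reconstruction_terms - warmup_weight(epoch) * kl_term."""
    return min(1.0, epoch / float(n_t))
```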
For CelebA, we considered face images as x in R^{64x64x3} and corresponding attributes as w in {-1,1}^{40}. For the encoder, we prepared two networks (four convolutional layers and a flattened layer for x, and two dense layers for w) with ReLU, shared the top of these layers, and mapped them into 12 units. For the decoder, we prepared two networks, with a dense layer and four deconvolutional layers for x and three dense layers for w, and set a Gaussian distribution for the decoder of both modalities, where the variance of the Gaussian was fixed to 1 for the decoder of w. In the CelebA setting, we combined the JMVAE with generative adversarial networks (GANs) (Goodfellow et al., 2014) to generate clearer images. We considered the network of p(x|z) as the generator in the GAN, and optimized the GAN loss together with the lower bound of the JMVAE, in the same way as the VAE-GAN model (Larsen et al., 2015). As presented herein, we describe this model as JMVAE-GAN. We set N_t = 20 and trained for 100 epochs on CelebA.

Table 1: Evaluation of test log-likelihood. All models are trained and tested on MNIST. alpha is the coefficient of the regularization term in JMVAE-kl (Equation 4): left, marginal log-likelihood; right, conditional log-likelihood.

  <= log p(x)            | multiple | single
  JMVAE-zero             | -86.89   | -86.89
  JMVAE-kl, alpha = 0.01 | -86.89   | -86.55
  JMVAE-kl, alpha = 0.1  | -86.86   | -86.73
  JMVAE-kl, alpha = 1    | -89.20   | -89.20

Table 2: Evaluation of log-likelihood. Models are trained and tested on CelebA. We trained JMVAE-kl and set alpha = 0.1: left, marginal log-likelihood; right, conditional log-likelihood (with the multiple lower bound).

  <= log p(x)  | multiple | single        <= log p(x|w)
  VAE-GAN      | -        | -4439         CVAE-GAN   | -4152
  JMVAE-GAN    | -4141    | -4144         CMMA-GAN   | -4147
                                          JMVAE-GAN  | -4130

"}, {"section_index": "7", "section_name": "4.3.1 EVALUATION METHOD", "section_text": "For this experiment, we estimated the test log-likelihood to evaluate the performance of the model. This estimate roughly corresponds to the negative reconstruction error; therefore, higher is better. From this performance, we can find not only whether the JMVAE can generate samples properly but also whether it can obtain joint representation properly. If the log-likelihood of a modality is low, the representation for this modality might be hurt by the other modalities. By contrast, if it is the same as or higher than that of a model trained on a single modality, then the other modalities contribute to obtaining an appropriate representation.

We compare the test marginal log-likelihood against VAEs (Kingma & Welling, 2013; Rezende et al., 2014) and the test conditional log-likelihood against CVAEs (Kingma et al., 2014; Sohn et al., 2015) and CMMAs (Pandey & Dukkipati, 2016). On CelebA, we combine all competitive models with GAN and describe them as VAE-GAN, CVAE-GAN, and CMMA-GAN. For fairness, the architectures and parameters of these competitive models were set to be as close as possible to those of the JMVAE.

We calculate the importance weighted estimator (Burda et al., 2015) from lower bounds at testing time, because we would like to estimate the true test log-likelihood from lower bounds. To estimate the test marginal log-likelihood p(x) of the JMVAE, we can use two possible lower bounds, sampling from q_phi(z|x, w) or q_{phi_x}(z|x). We describe the former as the multiple lower bound and the latter as the single lower bound. When we estimate the test conditional log-likelihood log p(x|w), we also use two lower bounds, each of which is estimated by sampling from q_phi(z|x, w) (multiple) or q_{phi_w}(z|w) (single) (see Appendix B for details). To estimate the single lower bound, we should approximate the single encoder (q_{phi_x}(z|x) or q_{phi_w}(z|w)) by JMVAE-zero or JMVAE-kl. When the value of the log-likelihood with the single lower bound is the same as or larger than that with the multiple lower bound, the approximation of the single encoder is good. Note that original VAEs use a single lower bound and that CVAEs and CMMAs use a multiple lower bound.
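The importance weighted estimator can be computed from K lower-bound samples per data point; a sketch follows (the helper name is ours), where each log weight is log p(x, z_k) - log q(z_k|.) under the chosen (multiple or single) encoder:

```python
import math
import torch

def iw_log_likelihood(log_w):
    """Importance weighted estimate: log p(.) ~ log (1/K) sum_k exp(log w_k).

    log_w : (batch, K) tensor of log importance weights, one column per
            sample z_k drawn from the chosen encoder
    """
    k = log_w.size(1)
    return torch.logsumexp(log_w, dim=1) - math.log(k)  # numerically stable
```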
This shows that JMVAE-kl solves the issue of missing modalities (we can also find this result in the generated images; see Appendix E). Moreover, we find that this log-likelihood becomes better as α grows large, which is opposite to the other results. Therefore, there is a trade-off between whether each modality can be reconstructed properly and whether multiple modalities can be exchanged properly, and it can be regulated by α.

In this section, we used the CelebA dataset to evaluate the JMVAE. Table 2 presents the evaluations of marginal and conditional log-likelihood. From this table, it is apparent that the values of both marginal and conditional log-likelihood with JMVAEs are larger than those with the other competitive methods. Moreover, comparison with Table 1 shows that the improvement on CelebA is greater than that on MNIST, which suggests that joint representation with multiple modalities contributes to improving the quality of reconstruction and generation when an input modality is high-dimensional and complicated.

"}, {"section_index": "9", "section_name": "4.4.1 JOINT REPRESENTATION ON MNIST", "section_text": "In this section, we first evaluated whether the JMVAE can obtain a joint representation that includes the information of the modalities. Figure 3 shows the visualization of the latent representations of the VAE, CVAE, and JMVAE on MNIST. It is apparent that the JMVAE obtains a more discriminable latent representation by adding digit label information. Figure 3(b) shows that, despite using multimodal information as the JMVAE does, points in the CVAE are distributed irrespective of labels, because CVAEs force the latent representation to be independent of label information, i.e., obtaining a joint representation is not an objective of CVAEs.

Next, we confirm that JMVAE-GAN on CelebA can generate images from attributes. Figure 4(a) portrays generated faces conditioned on various attributes. We find that we can generate an average face for each attribute as well as various random faces conditioned on certain attributes. Figure 4(b) shows that samples are gathered by attribute and that the locations of each variation are the same irrespective of attributes. From these results, we find that manifold learning of the joint representation of images and attributes works well.

[Figure 3 panels: (a) VAE, (b) CVAE, (c) JMVAE; see the caption above.]

Figure 4: (a) Generation of average faces and corresponding random faces. We first set all values of the attributes {-1, 1} randomly and designate them as Base. Then, we choose an attribute that we want to set (e.g., Male, Bald, Smiling) and change this value in Base to 2 (or -2 if we want to set "Not"). Each column corresponds to the same attribute according to the legend. Average faces are generated from p(x|z_mean), where z_mean is the mean of q(z|w). Moreover, we can obtain various images conditioned on the same values of attributes as x ~ p(x|z), where z = z_mean + σ ⊙ ε, ε ~ N(0, ζ), and ζ is the parameter which determines the range of variance. In this figure, we set ζ = 0.6. Each row of random faces has the same ε. (b) PCA visualizations of the latent representation. Colors indicate which attribute each sample is conditioned on.
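The sampling scheme in the caption above is easy to state in code; here is a minimal sketch, where encode_w and decode_x are placeholders standing in for the moments of q(z|w) and the mean of p(x|z):

    import numpy as np

    def faces_from_attributes(encode_w, decode_x, w, n_random=8, zeta=0.6):
        # Average face from the mean of q(z|w), plus random variations
        # drawn around it with spread controlled by zeta.
        z_mean, z_sigma = encode_w(w)
        average_face = decode_x(z_mean)
        random_faces = [
            decode_x(z_mean + z_sigma * np.random.normal(0.0, zeta, z_mean.shape))
            for _ in range(n_random)
        ]
        return average_face, random_faces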
[Figure 5 layout: columns of input attributes, average face, reconstruction, and reconstructions with varied attributes (Not Male, Eyeglasses, Not Young, Smiling, Mouth slightly open); the generated attributes for the two inputs are Male 0.95, Eyeglasses -0.99, Young 0.30, Smiling -0.97 (upper) and Male 0.22, Eyeglasses -0.99, Young 0.87, Smiling -1.00 (lower).]

Figure 5: Portraits of the Mona Lisa (upper) and Mozart (lower), their generated attributes, and reconstructed images conditioned on varied attributes, according to the legend. We cropped and resized them in the same way as CelebA. The procedure is as follows: generate the corresponding attributes w from an unlabeled image x; generate an average face x_mean from the attributes w; select the attributes which we want to vary and change the values of these attributes; generate the changed average face x'_mean from the changed attributes; and obtain a changed reconstruction image x' by x + x'_mean - x_mean.

Finally, we demonstrate that JMVAE-GAN can generate bi-directionally between faces and attributes. Figure 5 shows that JMVAE-GAN can generate both attributes and changed images conditioned on various attributes from images which had no attribute information. This way of generating an image by varying attributes is similar to that of the CMMA (Pandey & Dukkipati, 2016). However, the CMMA cannot generate attributes from an image, because it only generates images from attributes, in one direction.

"}, {"section_index": "10", "section_name": "5 CONCLUSION AND FUTURE WORK", "section_text": "In this paper, we introduced a novel multimodal learning model with VAEs, the joint multimodal variational autoencoder (JMVAE). In this model, modalities are conditioned independently on a joint representation, i.e., it models a joint distribution of all modalities. We further proposed a method (JMVAE-kl) of reducing the divergence between the JMVAE's encoder and a prepared encoder for each modality, to prevent generated samples from collapsing when modalities are missing. We confirmed that the JMVAE can obtain appropriate joint representations and high log-likelihoods on the MNIST and CelebA datasets.

In future work, we would like to evaluate the multimodal learning performance of JMVAEs using various multimodal datasets, such as ones containing three or more modalities.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.

Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015.

Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov.
Generating images from captions with attention. arXiv preprint arXiv:1511.02793, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.

Ruslan Salakhutdinov and Geoffrey E Hinton. Deep Boltzmann machines. In AISTATS, volume 1, pp. 3, 2009.

Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, Eric Battenberg, Jack Kelly, Jeffrey De Fauw, Michael Heilman, Diogo Moitinho de Almeida, Brian McFee, Hendrik Weideman, Gábor Takács, Peter de Rivaz, Jon Crall, Gregory Sanders, Kashif Rasul, Cong Liu, Geoffrey French, and Jonas Degrave. Lasagne: First release. August 2015. URL http://dx.doi.org/10.5281/zenodo.27878.

Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 689-696, 2011.

Kihyuk Sohn, Wenling Shang, and Honglak Lee. Improved multimodal deep learning with variation of information. In Advances in Neural Information Processing Systems, pp. 2141-2149, 2014.

The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.

Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. arXiv preprint arXiv:1512.00570, 2015.

The variation of information can be expressed as -E_{p_D(x,w)}[log p(x|w) + log p(w|x)], where p_D is the data distribution. In this equation, we specifically examine the sum of the two negative log-likelihoods and do not consider the expectation in this derivation. We can calculate the lower bounds of these log-likelihoods as follows:

log p(x|w) + log p(w|x)
  ≥ E_{q(z|x,w)}[log (p(x|z) p(z|w)) / q(z|x,w)] + E_{q(z|x,w)}[log (p(w|z) p(z|x)) / q(z|x,w)]
  = E_{q(z|x,w)}[log p(x|z)] + E_{q(z|x,w)}[log p(w|z)] - D_KL(q(z|x,w) || p(z|x)) - D_KL(q(z|x,w) || p(z|w))
  = L_JM(x,w) - [D_KL(q(z|x,w) || p(z|x)) + D_KL(q(z|x,w) || p(z|w))] + D_KL(q(z|x,w) || p(z)).

Approximating the intractable posteriors p(z|x) and p(z|w) by the encoders q(z|x) and q(z|w) gives

L_JM(x,w) - [D_KL(q(z|x,w) || q(z|x)) + D_KL(q(z|x,w) || q(z|w))] + D_KL(q(z|x,w) || p(z))
  = L_JMkl(1)(x,w) + D_KL(q(z|x,w) || p(z)) ≥ L_JMkl(1)(x,w),

where L_JMkl(1) is Equation 4 with α = 1. Therefore, maximizing Equation 4 can be regarded as minimizing the variation of information with variational learning on parameterized distributions, i.e., maximizing a lower bound of the negative variation of information.

The two lower bounds used to estimate the test marginal log-likelihood p(x) of the JMVAE are as follows:

L_single(x) = E_{q_{φx}(z|x)}[log (p_{θx}(x|z) p(z)) / q_{φx}(z|x)],
L_multiple(x) = E_{q_φ(z|x,w)}[log (p_{θx}(x|z) p(z)) / q_φ(z|x,w)].
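To make the distinction between the two bounds concrete, the following sketch computes one Monte Carlo estimate of each (the samplers and log-density functions are placeholders; this is our illustration, not the authors' code):

    import numpy as np

    def single_bound(x, sample_qx, log_qx, log_px_z, log_prior, k=100):
        # Single bound: z is sampled from the unimodal encoder q_{phi_x}(z|x).
        vals = [None] * k
        for i in range(k):
            z = sample_qx(x)
            vals[i] = log_px_z(x, z) + log_prior(z) - log_qx(z, x)
        return np.mean(vals)

    def multiple_bound(x, w, sample_q, log_q, log_px_z, log_prior, k=100):
        # Multiple bound: z is sampled from the joint encoder q_phi(z|x, w).
        vals = [None] * k
        for i in range(k):
            z = sample_q(x, w)
            vals[i] = log_px_z(x, z) + log_prior(z) - log_q(z, x, w)
        return np.mean(vals)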
[Figure 6 rows: input, reconstruction (multiple), reconstruction (single).]
Figure 6: Comparison of the original images and the images reconstructed by the JMVAE (α = 0.1) on (a) MNIST and (b) CelebA.

We can also estimate the test conditional log-likelihood p(x|w) from these two lower bounds as

L_single(x|w) = E_{q_{φw}(z|w)}[log p(x, z|w) / q_{φw}(z|w)] = E_{q_{φw}(z|w)}[log (p_{θx}(x|z) p_{θw}(w|z) p(z)) / q_{φw}(z|w)] - log p(w),
L_multiple(x|w) = E_{q_φ(z|x,w)}[log (p_{θx}(x|z) p_{θw}(w|z) p(z)) / q_φ(z|x,w)] - log p(w).

We can obtain a tighter bound on the log-likelihood by k-fold importance weighted sampling. For example, we obtain an importance weighted bound on log p(x) from Equation 11 as follows:

log p(x) ≥ E_{z_1,...,z_k ~ q_{φx}(z|x)}[log (1/k) Σ_{i=1}^{k} p_{θx}(x|z_i) p(z_i) / q_{φx}(z_i|x)] = L^k_single(x).

Strictly speaking, these two lower bounds are not equal. However, if the number of importance samples is extremely large, the difference between these two lower bounds converges to 0.

Proof. Denote the multiple and single k-fold importance weighted lower bounds by L^k_multiple and L^k_single. From the theorem of the importance weighted bound, both L^k_single and L^k_multiple converge to log p(x) as k → ∞. Therefore,

lim_{k→∞} |L^k_multiple - L^k_single| ≤ |lim_{k→∞} L^k_multiple - lim_{k→∞} L^k_single| = 0.

Figure 6 presents a comparison of the original images and the images reconstructed by the JMVAE on both the MNIST and CelebA datasets. It is apparent that the JMVAE can reconstruct the original image properly with either the multiple or the single encoder.

Table 3: Evaluation of test log-likelihood. All models are trained on the MNIST dataset: left, marginal log-likelihood; right, conditional log-likelihood. (The table values are not legible in the source.)

"}, {"section_index": "12", "section_name": "E IMAGE GENERATION FROM CONDITIONAL DISTRIBUTION ON MNIST", "section_text": "Figure 7 presents generated samples of x conditioned on a single input w. It is apparent that the JMVAE with JMVAE-kl generates conditioned digit images properly, whereas that with JMVAE-zero cannot generate them. From these results, we also confirmed qualitatively that JMVAE-kl can model q_{φw}(z|w) properly compared to JMVAE-zero.

[Figure 7 rows: digit labels 0-9; samples from JMVAE-zero; samples from JMVAE-kl, α = 0.1.]
Figure 7: Image generation from the conditional distribution p(x|w). We used a single encoder q(z|w) for both generations.

Table 3 shows the joint log-likelihood of the JMVAE on the MNIST dataset for both JMVAE-zero and JMVAE-kl. It is apparent that the test log-likelihoods of the two approaches are almost identical (strictly, JMVAE-zero is slightly lower). The test log-likelihood of JMVAE-kl becomes much lower if α is large."}]
B1IzH7cxl
[{"section_index": "0", "section_name": "A NEURAL STOCHASTIC VOLATILITY MODEL", "section_text": "Rui Luo, Xiaojun Xu, Weinan Zhang, Jun Wang

In this paper, we show that the recent integration of statistical models with recurrent neural networks provides a new way of formulating volatility models that have been popular in time series analysis and prediction. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observable ones. Our focus in this paper is on the formulation of the temporal dynamics of volatility under a stochastic recurrent neural network framework. Our derivations show that some popular volatility models are special cases of our proposed neural stochastic volatility model. Experiments demonstrate that the proposed model generates a smoother volatility estimation, and outperforms the standard econometric models GARCH, EGARCH, GJR-GARCH and some other GARCH variants, as well as the MCMC-based model stochvol and a recent Gaussian-processes-based volatility model GPVOL, on several metrics of the fitness of the volatility modelling and the accuracy of the prediction."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The volatility of price movements reflects the ubiquitous uncertainty within financial markets. It is critical that the level of risk, indicated by volatility, is taken into consideration before investment decisions are made and portfolios are optimised (Hull, 2006); volatility is substantially a key variable in the pricing of derivative securities. Hence, estimating and forecasting volatility is of great importance in branches of financial studies, including investment, risk management, security valuation and monetary policy making (Poon & Granger, 2003).

Volatility is typically measured by the standard deviation of the price change in a fixed time interval, such as a day, a month or a year. The higher the volatility, the riskier the asset. One of the primary challenges in designing volatility models is to identify the existence of latent (stochastic) variables or processes and to characterise the underlying dependences or interactions between variables within a certain time span. A classic approach has been to handcraft the characteristic features of volatility models by imposing assumptions and constraints, given prior knowledge and observations. Notable examples include the autoregressive conditional heteroskedasticity (ARCH) model (Engle, 1982) and its generalisation GARCH (Bollerslev, 1986), which makes use of autoregression to capture the properties of time-variant volatility within many time series. Heston (1993) assumed that the volatility follows a Cox-Ingersoll-Ross (CIR) process (Cox et al., 1985) and derived a closed-form solution for option pricing. While theoretically sound, those approaches require strong assumptions which might involve complex probability distributions and non-linear dynamics that drive the process; in practice, one may have to impose less prior knowledge and rectify a solution under the worst-case volatility scenario (Avellaneda & Paras, 1996).

In this paper, we take a fully data-driven approach and determine the configurations with as few exogenous inputs as possible, or even purely from the historical data.
We propose a neural network re-formulation of stochastic volatility by leveraging stochastic models and recurrent neural networks (RNNs). We are inspired by the recent development of variational approaches to stochastic (deep) neural networks (Kingma & Welling, 2013; Rezende et al., 2014) and their recurrent extensions (Chung et al., 2015; Fabius & van Amersfoort, 2014; Bayer & Osendorfer, 2014), and our formulation shows that existing volatility models such as GARCH (Bollerslev, 1986) and the Heston model (Heston, 1993) are special cases of our neural stochastic volatility formulation. With the hidden latent variables in the neural networks, we naturally uncover the underlying stochastic process formulated by the models.

"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Experiments with synthetic data and real-world financial data are performed, showing that the proposed model outperforms the widely-used GARCH model on several metrics of the fitness and the accuracy of time series modelling and prediction: this verifies our model's high flexibility and rich expressive power.

A notable volatility method is the autoregressive conditional heteroskedasticity (ARCH) model (Engle, 1982): it can accurately capture the properties of time-variant volatility within many types of time series. Inspired by the ARCH model, a large body of diverse work based on stochastic processes for volatility modelling has emerged. Bollerslev (1986) generalised the ARCH model to the generalised autoregressive conditional heteroskedasticity (GARCH) model in a manner analogous to the extension from the autoregressive (AR) model to the autoregressive moving average (ARMA) model, by introducing the past conditional variances into the current conditional variance estimation. Engle & Kroner (1995) presented theoretical results on the formulation and estimation of multivariate GARCH models within simultaneous equations systems. The extension to multivariate models allows the covariances to be present and to depend on the historical information, which is particularly useful in multivariate financial models. Heston (1993) derived a closed-form solution for option pricing with stochastic volatility, where the volatility process is a CIR process driven by a latent Wiener process, such that the current volatility is no longer a deterministic function even if the historical information is provided. Notably, empirical evidence has confirmed that volatility models provide accurate forecasts (Andersen & Bollerslev, 1998), and models such as ARCH and its descendants/variants have become indispensable tools in asset pricing and risk evaluation.

On the other hand, deep learning (LeCun et al., 2015; Schmidhuber, 2015), which utilises non-linear structures known as deep neural networks, powers various applications. It has triumphed over pattern recognition challenges such as image recognition (Krizhevsky et al., 2012; He et al., 2015; van den Oord et al., 2016), speech recognition (Hinton et al., 2012; Graves et al., 2013; Chorowski et al., 2015) and machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014; Luong et al., 2015), to name a few.

Time-dependent neural network models include RNNs with advanced neuron structures such as the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), the gated recurrent unit (GRU) (Cho et al., 2014), and the bidirectional RNN (BRNN) (Schuster & Paliwal, 1997).
Recent results show that RNNs excel at sequence modelling and generation in various applications (Graves, 2013; Gregor et al., 2015). However, despite its capability as a non-linear universal approximator, one of the drawbacks of neural networks is their deterministic nature. Adding latent variables and their processes into neural networks would easily make the posterior computationally intractable. Recent work shows that efficient inference can be performed by variational inference when hidden continuous variables are embedded into the neural network structure (Kingma & Welling, 2013; Rezende et al., 2014). Some early work has started to explore the use of variational inference to make RNNs stochastic (Chung et al., 2015; Bayer & Osendorfer, 2014; Fabius & van Amersfoort, 2014). Bayer & Osendorfer (2014) and Fabius & van Amersfoort (2014) considered the hidden variables to be independent between times, whereas Fraccaro et al. (2016) utilised a backward-propagating inference network in accordance with its Markovian properties. Our work in this paper extends the work of Chung et al. (2015) with a focus on volatility modelling for time series. We assume that the hidden stochastic variables follow a Gaussian autoregressive process, which is then used to model both the variance and the mean. We show that the neural network formulation is a general one, which covers two major financial stochastic volatility models as special cases by defining the specific hidden variables and non-linear transforms.

Stochastic processes are often defined by stochastic differential equations (SDEs), e.g. a (univariate) generalised Wiener process is dx_t = μ dt + σ dw_t, where μ and σ denote the time-invariant rates of drift and standard deviation (square root of variance), while dw_t ~ N(0, dt) is the increment of the standard Wiener process at time t. In a small time interval between t and t + Δt, the change in the variable is Δx_t = μ Δt + σ Δw_t. Letting Δt = 1, we obtain the discrete-time version of the basic volatility model:

x_t = x_{t-1} + μ + σ ε_t,   (1)

where ε_t ~ N(0, 1)."}, {"section_index": "3", "section_name": "3.1 DETERMINISTIC VOLATILITY", "section_text": "The time-invariant variance σ² can be extended to a function Σ_t = Σ(x_<t) relying on the history of the (observable) underlying stochastic process {x_<t}. The current variance Σ_t is therefore determined given the history {x_<t} up to time t. An example of such extensions is the univariate GARCH(1,1) model (Bollerslev, 1986):

σ_t² = α₀ + α₁ (x_{t-1} - μ_{t-1})² + β₁ σ_{t-1}²,   (2)

where x_{t-1} is the observation from N(μ_{t-1}, σ_{t-1}²) at time t - 1. Note that the determinism holds only in a conditional sense, i.e. under the condition that the complete history {x_<t} is presented, such as in the case of 1-step-ahead forecasts; otherwise the current volatility is still stochastic, as it is built on the stochastic process {x_t}. However, for multi-step-ahead forecasts, we usually exploit the relation E_{t-1}[(x_t - μ)²] = σ_t² to substitute the corresponding terms and calculate the forecasts over longer horizons in a recursive fashion, for example, σ_{t+1}² = α₀ + α₁ E_{t-1}[(x_t - μ)²] + β₁ σ_t² = α₀ + (α₁ + β₁) σ_t². For an n-step-ahead forecast, there will be n iterations, and the procedure is hence also deterministic.

Another extension takes Σ_t from being conditionally deterministic (i.e. deterministic given the complete history {x_<t}) to fully stochastic: Σ_t = Σ(z_<t) is driven by another latent stochastic process {z_t} instead of the observable process {x_t}.
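The deterministic multi-step recursion just described is compact enough to state directly; a small sketch (the parameter values in the usage line are made up for illustration):

    def garch_variance_forecast(sigma2_t, alpha0, alpha1, beta1, horizon):
        # n-step-ahead GARCH(1,1) variance forecast via
        # sigma^2_{t+n} = alpha0 + (alpha1 + beta1) * sigma^2_{t+n-1}
        sigma2 = sigma2_t
        for _ in range(horizon):
            sigma2 = alpha0 + (alpha1 + beta1) * sigma2
        return sigma2

    # e.g. garch_variance_forecast(0.04, alpha0=0.01, alpha1=0.05, beta1=0.90, horizon=5)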
The Heston (1993) model instantiates a continuous-time stochastic volatility model for univariate processes:

dx_t = (μ - 0.5 σ_t²) dt + σ_t dw_t^x,   (3)
dσ_t = a σ_t dt + b dw_t^σ,   (4)

where the correlation between dw_t^x and dw_t^σ applies: E[dw_t^x dw_t^σ] = ρ dt. We apply Euler's scheme of quantisation (Stoer & Bulirsch, 2013) to obtain the discrete analogue to the continuous-time Heston model (Eqs. (3) and (4)):

x_t = x_{t-1} + (μ - 0.5 σ_{t-1}²) + σ_{t-1} ε_t^x,
σ_t = (1 + a) σ_{t-1} + b ε_t^σ,   (5)

where ε_t^x, ε_t^σ ~ N(0, 1).

As discussed above, the observable variable x_t follows a Gaussian distribution whose mean and variance depend on the history of the observable process {x_t} and the latent process {z_t}. We presume in addition that the latent process {z_t} is an autoregressive model such that z_t is (conditionally) Gaussian distributed. Therefore, we formulate the volatility model in general as:

z_t ~ N(μ^z(z_<t), Σ^z(z_<t)),   (6)
x_t ~ N(μ^x(x_<t, z_<t), Σ^x(x_<t, z_<t)),   (7)

where μ^z(z_<t) and Σ^z(z_<t) denote the autoregressive time-varying mean and variance of the latent variable z_t, while μ^x(x_<t, z_<t) and Σ^x(x_<t, z_<t) represent the mean and variance of the observable variable x_t, which depend not only on the history of the observable process {x_<t} but also on that of the latent process {z_<t}.

These two formulas (Eqs. (6) and (7)) abstract the generalised formulation of volatility models. Together, they represent a broad family of volatility models with latent variables, where the Heston model for stochastic volatility is merely a special case of the family. Furthermore, it degenerates to deterministic volatility models such as the well-studied GARCH model if we disable the latent process.
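To make Eq. (5) concrete, here is a minimal simulation sketch of the discretised Heston dynamics with correlated Gaussian increments (the parameter defaults and the positivity floor are our own illustrative choices, not from the paper):

    import numpy as np

    def simulate_heston(T, mu, a, b, rho, x0=0.0, sigma0=0.2, seed=0):
        # Euler-discretised Heston model of Eq. (5);
        # corr(eps_x, eps_sigma) = rho as in Eqs. (3)-(4).
        rng = np.random.default_rng(seed)
        x, sigma = np.empty(T), np.empty(T)
        x_prev, s_prev = x0, sigma0
        for t in range(T):
            eps_x = rng.standard_normal()
            eps_s = rho * eps_x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
            x[t] = x_prev + (mu - 0.5 * s_prev ** 2) + s_prev * eps_x
            sigma[t] = max((1.0 + a) * s_prev + b * eps_s, 1e-8)  # keep volatility positive
            x_prev, s_prev = x[t], sigma[t]
        return x, sigma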
"}, {"section_index": "4", "section_name": "4.1 GENERATING OBSERVABLE SEQUENCE", "section_text": "In this section, we establish the neural stochastic volatility model (NSVM) for stochastic volatility estimation and forecast.

Recall that the latent variable z_t (Eq. (6)) and the observable x_t (Eq. (7)) are described by autoregressive models (x_t has the exogenous input {z_≤t}). For the distributions of {z_t} and {x_t}, the following factorisations apply:

p_Φ(Z) = Π_t p_Φ(z_t|z_<t) = Π_t N(z_t; μ_Φ^z(z_<t), Σ_Φ^z(z_<t)),   (8)
p_Φ(X|Z) = Π_t p_Φ(x_t|x_<t, z_≤t) = Π_t N(x_t; μ_Φ^x(x_<t, z_≤t), Σ_Φ^x(x_<t, z_≤t)),   (9)

where X = {x_t} and Z = {z_t} are the sequences of observable and latent variables, respectively, while Φ represents the parameter set of the model. The full generative model is defined as the joint distribution:

p_Φ(X, Z) = Π_t p_Φ(x_t|x_<t, z_≤t) p_Φ(z_t|z_<t)
          = Π_t N(z_t; μ_Φ^z(z_<t), Σ_Φ^z(z_<t)) N(x_t; μ_Φ^x(x_<t, z_≤t), Σ_Φ^x(x_<t, z_≤t)).   (10)

It is observed that the means and variances are conditionally deterministic: given the historical information {z_<t}, the current mean μ_t^z = μ_Φ^z(z_<t) and variance Σ_t^z = Σ_Φ^z(z_<t) of z_t are obtained, and hence the distribution N(z_t; μ_t^z, Σ_t^z) of z_t is specified; after sampling z_t from the specified distribution, we incorporate {x_<t}, calculate the current mean μ_t^x = μ_Φ^x(x_<t, z_≤t) and variance Σ_t^x = Σ_Φ^x(x_<t, z_≤t) of x_t, and determine its distribution N(x_t; μ_t^x, Σ_t^x). It is natural and convenient to present such a procedure in a recurrent fashion because of its autoregressive nature. As RNNs can essentially approximate arbitrary functions of recurrent form (Hammer, 2000), the means and variances, which may be driven by complex non-linear dynamics, can be efficiently computed using RNNs.

It is always good practice to reparameterise the random variables before moving to the RNN architecture. As the covariance matrix Σ is symmetric and positive definite, it can be factorised as Σ = U Λ Uᵀ, where Λ is a full-rank diagonal matrix with positive diagonal elements. Let A = U Λ^{1/2}; we have Σ = A Aᵀ. Hence we can reparameterise the latent variable z_t (Eq. (6)) and the observable x_t (Eq. (7)):

z_t = μ_t^z + A_t^z ε_t^z,   (11)
x_t = μ_t^x + A_t^x ε_t^x,   (12)

where A_t^z (A_t^z)ᵀ = Σ_t^z, A_t^x (A_t^x)ᵀ = Σ_t^x, and ε_t^z ~ N(0, I_z), ε_t^x ~ N(0, I_x) are auxiliary variables. Note that the randomness within the variables of interest (e.g. z_t) is extracted by the auxiliary variables (e.g. ε_t^z), which follow standard distributions. Hence, the reparameterisation guarantees that gradient-based methods can be applied in the learning phase (Kingma & Welling, 2013).

In this paper, the joint generative model is comprised of two pairs of RNN and multilayer perceptron (MLP): RNN_Φ^z/MLP_Φ^z for the latent variable, and RNN_Φ^x/MLP_Φ^x for the observables. We stack the two RNN/MLP pairs together according to the causal dependency between the variables. The joint generative model is implemented as the generative network:

{μ_t^z, A_t^z} = MLP_Φ^z(h_t^z; Φ),   (13)
h_t^z = RNN_Φ^z(h_{t-1}^z, z_{t-1}; Φ),   (14)
z_t = μ_t^z + A_t^z ε_t^z,   (15)
{μ_t^x, A_t^x} = MLP_Φ^x(h_t^x; Φ),   (16)
h_t^x = RNN_Φ^x(h_{t-1}^x, x_{t-1}, z_t; Φ),   (17)
x_t = μ_t^x + A_t^x ε_t^x,   (18)

where h_t^z and h_t^x denote the hidden states of the corresponding RNNs. The MLPs map the hidden states of the RNNs into the means and deviations of the variables of interest. The parameter set Φ is comprised of the weights of the RNNs and MLPs.

One should notice that once the latent variable z is obtained, e.g. by inference (details in the next subsection), the conditional distribution p(X|Z) (Eq. (9)) is involved in generating the observable x instead of the joint distribution p_Φ(X, Z) (Eq. (10)). This is essentially the scenario of predicting future values of the observable variable given its history. We will use the term "generative model" without discriminating between the joint generative model and the conditional one, as it can be inferred from context.
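As an illustration of how such a stacked pair could look in code, here is a one-step sketch with a tanh recurrence standing in for the LSTM cells and linear/exponential heads standing in for the MLPs (the sizes and random weights are illustrative, not the authors' configuration):

    import numpy as np

    rng = np.random.default_rng(0)
    H = 4  # hidden size (illustrative)
    Wz, Uz = rng.normal(0, 0.1, (H, H)), rng.normal(0, 0.1, (H, 1))
    Wx, Ux = rng.normal(0, 0.1, (H, H)), rng.normal(0, 0.1, (H, 2))
    Mz, Sz, Mx, Sx = (rng.normal(0, 0.1, (1, H)) for _ in range(4))

    def generate_step(h_z, h_x, z_prev, x_prev):
        # One step of the generative network (cf. Eqs. (13)-(18)):
        # draw z_t from its own recurrence, then x_t given (x_{t-1}, z_t).
        h_z = np.tanh(Wz @ h_z + Uz @ z_prev)
        mu_z, sd_z = Mz @ h_z, np.exp(Sz @ h_z)   # exponential head keeps sd positive
        z = mu_z + sd_z * rng.standard_normal(1)  # reparameterised sample
        h_x = np.tanh(Wx @ h_x + Ux @ np.concatenate([x_prev, z]))
        mu_x, sd_x = Mx @ h_x, np.exp(Sx @ h_x)
        x = mu_x + sd_x * rng.standard_normal(1)
        return h_z, h_x, z, x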
"}, {"section_index": "5", "section_name": "4.2 INFERENCING THE LATENT PROCESS", "section_text": "As the generative model involves the latent variable z_t, whose true values are inaccessible even when x_<t has been observed, the marginal likelihood p(X) becomes the key that bridges the model and the data. The calculation of the marginal likelihood involves the posterior distribution p_Φ(Z|X), which is often intractable, as complex integrals are involved; we are then unable to learn the parameters or to infer the latent variables. Therefore, we consider instead a restricted family of tractable distributions q_Ψ(Z|X), referred to as the approximate posterior family, as approximations to the true posterior p_Φ(Z|X), such that the family is sufficiently rich and flexible to provide good approximations (Bishop, 2006; Kingma & Welling, 2013; Rezende et al., 2014).

We define the inference model in accordance with the approximate posterior family we have presumed, in a similar fashion as Chung et al. (2015), where the factorised distribution is formulated as follows:

q_Ψ(Z|X) = Π_t q_Ψ(z_t|z_<t, x_<t) = Π_t N(z_t; μ̃_Ψ^z(z_<t, x_<t), Σ̃_Ψ^z(z_<t, x_<t)),   (19)

where μ̃_Ψ^z(z_<t, x_<t) and Σ̃_Ψ^z(z_<t, x_<t) are functions of the historical information {z_<t}, {x_<t}, representing the approximated mean and variance of the latent variable z_t, respectively. Note that Ψ represents the parameter set of the inference model.

The inference model essentially describes an autoregressive model on z_t with exogenous input x_t. Hence, in a similar fashion as the generative model, we implement the inference model as the inference network using an RNN/MLP pair:

{μ̃_t^z, Ã_t^z} = MLP_Ψ(h̃_t; Ψ),   (20)
h̃_t = RNN_Ψ(h̃_{t-1}, z_{t-1}, x_{t-1}; Ψ),   (21)
z_t = μ̃_t^z + Ã_t^z ε_t^z.   (22)

"}, {"section_index": "6", "section_name": "4.3 FORECASTING OBSERVATIONS IN FUTURE", "section_text": "In the realm of time series analysis, we usually pay more attention to forecasting than to generating (Box et al., 2015): we are essentially more interested in the generation procedure conditioned on the historical information rather than generation purely based on a priori belief, since the past observations x_<t influence our belief about the latent variable z_t. Therefore, we apply the approximate posterior distribution of the latent variable z_t (Eq. (19)), as discussed in the previous subsection, in place of the prior distribution (Eq. (8)) to build our predictive model. Given the historical observations x_<t, the predictive model infers the current value of the latent variable z_t using the inference network, and then generates the prediction of the current observation x_t using the generative network. The procedure of forecasting is shown in Fig. 1.

[Figure 1: schematic of the stacked networks; left panel "Generate", right panel "Inference".]
Figure 1: Forecasting the future using the Neural Stochastic Volatility Model.

NSVM is learned using Stochastic Gradient Variational Bayes following (Kingma & Welling, 2013; Rezende et al., 2014). For readability, we provide the detailed derivation in Appendix A.

Although we refer to both GARCH and Heston as volatility models, their purposes are quite different: GARCH is a predictive model used for volatility forecasting, whereas Heston is more of a generative model of the underlying dynamics, which facilitates closed-form solutions to SDEs in option pricing. The proposed NSVM has close relations to GARCH(1,1) and the Heston model: both can be regarded as special cases of the neural network formulation. Recall Eq. (2): GARCH(1,1) is formulated as σ_t² = α₀ + α₁(x_{t-1} - μ_{t-1})² + β₁σ_{t-1}², where μ_{t-1} is the trend estimate of {x_t} at time step t-1, calculated by some mean model. A common practice is to assume that μ_t follows the ARMA family (Box et al., 2015) or, even simpler, is a constant, μ_t = μ. We adopt the constant trend for simplicity, as our focus is on volatility estimation.
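A sketch of the one-step forecasting loop just described, with the trained networks abstracted as callables (the names infer_z and predict_x are placeholders, not the released API):

    def forecast_one_step(x_hist, z_hist, infer_z, predict_x, n_samples=30):
        # Sample z_t from the inference network given the history, then average
        # the predicted moments of p(x_t | x_{<t}, z_{<=t}) over the samples.
        mu_sum, sigma_sum = 0.0, 0.0
        for _ in range(n_samples):
            z_t = infer_z(x_hist, z_hist)          # draw from q(z_t | z_{<t}, x_{<t})
            mu, sigma = predict_x(x_hist, z_hist + [z_t])
            mu_sum, sigma_sum = mu_sum + mu, sigma_sum + sigma
        return mu_sum / n_samples, sigma_sum / n_samples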
We define the hidden state as h_t^x = [μ, σ_t]ᵀ and disable the latent variable, z_t = 0, as the volatility modelled by GARCH(1,1) is conditionally deterministic. Hence, we instantiate the generative network (Eqs. (16), (17) and (18)) as follows:

{μ_t, σ_t} = MLP_Φ^x(h_t^x; Φ) = {[1, 0] h_t^x, [0, 1] h_t^x},   (23)
h_t^x = RNN_Φ^x(h_{t-1}^x, x_{t-1}; Φ) = [μ, (α₀ + α₁ (x_{t-1} - [1, 0] h_{t-1}^x)² + β₁ ([0, 1] h_{t-1}^x)²)^{1/2}]ᵀ,   (24)
x_t = μ_t + σ_t ε_t, where ε_t ~ N(0, 1).   (25)

The set of generative parameters is Φ = {μ, α₀, α₁, β₁}.

Next, we show the link between NSVM and the (discrete-time) Heston model (Eq. (5)). Let h_t^x = [x_{t-1}, μ, σ_t]ᵀ be the hidden state and let z_t be i.i.d. standard Gaussian instead of an autoregressive variable; we represent the Heston model in the framework of NSVM as:

{μ_t, σ_t} = MLP_Φ^x(h_t^x; Φ) = {[1, 1, 0] h_t^x - [0, 0, 0.5] (h_t^x)², [0, 0, 1] h_t^x},   (26)
h_t^x = RNN_Φ^x(h_{t-1}^x, x_{t-1}, z_t; Φ) = [[0, 0, 0], [0, 1, 0], [0, 0, 1+a]] h_{t-1}^x + [1, 0, 0]ᵀ x_{t-1} + [0, 0, b]ᵀ z_t,   (27), (28)
x_t = μ_t + σ_t ε_t,   (29)

where the square in Eq. (26) is elementwise. The set of generative parameters is Φ = {μ, a, b}.

One should notice that, in practice, the formulation may change in accordance with the specific architecture of the neural networks involved in building the model, and hence a closed-form representation may be absent.

"}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "In this section, we present our experiments on both synthetic and real-world datasets to validate the effectiveness of NSVM. (Repeatable experiment code: https://github.com/xxj96/nsvm)

To evaluate the performance of volatility modelling, we adopt the standard econometric model GARCH(1,1) (Bollerslev, 1986) as well as its variants EGARCH(1,1) (Nelson, 1991), GJR-GARCH(1,1,1) (Glosten et al., 1993), ARCH(5), TARCH(1,1,1), APARCH(1,1,1), AGARCH(1,1,1), NAGARCH(1,1,1), IGARCH(1,1), IAVGARCH(1,1) and FIGARCH(1,d,1) as baselines, which incorporate the corresponding mean model AR(20). We also compare our NSVM against an MCMC-based model, stochvol, and the recent Gaussian-processes-based model GPVOL (Wu et al., 2014), which is a non-parametric model jointly learning the dynamics and hidden states via an online inference algorithm. In addition, we set up a naive forecasting model as an alternative baseline, referred to as NAIVE, which maintains a sliding window of size 20 over the most recent historical observations and forecasts the current values of mean and volatility by the average mean and variance of the window.

For the synthetic data experiments, we take four metrics into consideration for performance evaluation: 1) the negative log-likelihood (NLL) of observing the test sequence with respect to the generative model parameters; 2) the mean-squared error (MSE) between the predicted mean and the ground truth (μ-MSE); 3) the MSE of the predicted variance against the true variance (σ-MSE); 4) smoothness of fit, which is the standard deviation of the differences of successive variance estimates. As for the real-world scenarios, the trend and volatility are implicit, so no ground truth is accessible for comparison; we consider only NLL and smoothness as the evaluation metrics in the real-world data experiment.

"}, {"section_index": "8", "section_name": "5.2 MODEL IMPLEMENTATION", "section_text": "The implementation of NSVM in the experiments is in accordance with the architecture illustrated in Fig. 1: it consists of two neural networks, namely the inference network and the generative network. Each network comprises a set of RNN/MLP as discussed above: the RNN is instantiated by stacked LSTM layers, whereas the MLP is essentially a 1-layer fully-connected feedforward network which splits into two equal-sized sublayers with different activation functions: one sublayer applies the exponential function to impose non-negativity and prevent overshooting of variance estimates, while the other uses a linear function to calculate mean estimates.
During the experiment, the model is structured by cascading the inference network and the generative network as depicted in Fig. 1. The input layer is of size 20, which is the same as the embedding dimension D_E; the layer at the interface of the inference network and the generative network (we call it the latent variable layer) represents the latent variable z, whose dimension is 2. The output layer has the same structure as the input one; therefore the latent variable layer acts as a bottleneck of the entire architecture, which helps to extract the key factors. The stacked layers between the input layer, the latent variable layer and the output layer are the hidden layers of either the inference network or the generative network; they consist of 1 or 2 LSTM layers of size 10, which contain recurrent connections for temporal dependency modelling.

State-of-the-art learning techniques have been applied: we introduce Dropout (Zaremba et al., 2014) into each LSTM recurrent layer and impose an L2-norm penalty on the weights of each fully-connected feedforward layer as regularisation; the NADAM optimiser (Dozat, 2015) is exploited for fast convergence, which is a variant of the ADAM optimiser (Kingma & Ba, 2014) incorporating Nesterov momentum; stepwise exponential learning rate decay is adopted to anneal the variations of convergence over time.

For the econometric models, we utilise several widely-used packages for time series analysis: statsmodels (http://statsmodels.sourceforge.net/), arch (https://pypi.python.org/pypi/arch/3.2), Oxford-MFE-toolbox (https://www.kevinsheppard.com/MFE_Toolbox), stochvol (https://cran.r-project.org/web/packages/stochvol) and fGarch (https://cran.r-project.org/web/packages/fGarch). The implementation of GPVOL is retrieved from http://jmhl.org, and we adopt the same hyperparameter setting as in Wu et al. (2014).

"}, {"section_index": "9", "section_name": "5.3 SYNTHETIC DATA EXPERIMENT", "section_text": "We build up the synthetic dataset by generating 256 heteroskedastic univariate time series, each with 2000 data points, i.e. 2000 time steps. At each time step, the observation is drawn from a Gaussian distribution with a pre-determined mean and variance, where the tendencies of the mean and variance are synthesised as linear combinations of sine functions. Specifically, for the trend and the variance, we synthesise each using 3 sine functions with randomly chosen amplitudes and frequencies; the value of the synthesised signal at each time step is then drawn from a Gaussian distribution with the corresponding values of trend and variance at that time step. A sampled sequence is shown in Fig. 2a. We expect this limited dataset to simulate real-world scenarios well: one usually has very limited chances to observe and collect a large amount of data from time-invariant distributions. In addition, it seems that every observable or latent quantity within a time series varies from time to time and seldom repeats the old patterns. Hence, we presume that the tendency shows long-term patterns and that the period of the tendency is longer than the observation. In the experiment, we take the former 1500 time steps as the training set and the latter 500 as the test set.

For the synthetic data experiment, we simplify the recurrent layers in both the inference net and the generative net to a single LSTM layer of size 10. The actual input {x̂_t} fed to NSVM is the D_E-dimensional time-delay embedding (Kennel et al., 1992) of the raw univariate observation {x_t}, such that x̂_t = [x_{t+1-D_E}, ..., x_t].
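The time-delay embedding is worth spelling out; a small sketch (our own helper, not from the released code):

    import numpy as np

    def time_delay_embed(x, d):
        # Rows are [x_{t+1-d}, ..., x_t], starting at the first full window.
        x = np.asarray(x)
        return np.stack([x[t + 1 - d: t + 1] for t in range(d - 1, len(x))])

    # e.g. time_delay_embed(np.arange(6), 3) -> [[0,1,2],[1,2,3],[2,3,4],[3,4,5]]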
A 2-dimensional latent variable z_t is adopted to capture the latent process, and an orthogonal representation of the process is enforced by using a diagonal covariance matrix. At each time step, 30 samples of the latent variable z_t are generated via reparameterisation (Eq. (22)).

"}, {"section_index": "10", "section_name": "5.4 REAL-WORLD DATA EXPERIMENT", "section_text": "We select 162 out of more than 1500 stocks from the Chinese stock market and collect the time series of their daily closing prices from 3 institutions in China. We favour those with earlier listing dates of trading (from 2006 or earlier) and fewer suspension days (at most 50 suspension days in total during the period of observation), so as to reduce the noise introduced by insufficient observations or missing values, which has significant influence on the performance but is essentially irrelevant to the purpose of volatility forecasting. More specifically, the dataset obtained contains 162 time series, each with 2552 data points (7 years). A sampled sequence is shown in Fig. 2b. We divide the whole dataset into two subsets: the training subset consists of the first 2000 data points, while the test subset contains the remaining 552 data points.

A similar model configuration is applied to the real-world data experiment: time-delay embedding of dimension D_E on the raw univariate time series; a 2-dimensional latent variable with diagonal covariance matrix; 30 samples for the latent variable at each time step. Instead of single LSTM layers, here we adopt stacked LSTM layers composed of 2 × 10 LSTM cells.

(a) Synthetic time series prediction: (up) the data and the predicted μ^x and bounds μ^x ± σ^x; (down) the ground-truth data variance and the corresponding predictions from GARCH(1,1) and NSVM.
(b) Real-world stock price prediction: (up) the data and the predicted μ^x and bounds μ^x ± σ^x; (down) the variance predictions from GARCH(1,1) and NSVM. The prediction of NSVM is smoother and more stable than that of GARCH(1,1), also yielding a smaller NLL.
Figure 2: A case study of time series prediction.

"}, {"section_index": "11", "section_name": "5.5 RESULT AND DISCUSSION", "section_text": "The overall performance of NSVM and the baselines is listed in detail in Table 1, and case studies on synthetic data and real-world financial data are illustrated in Fig. 2.
Table 1: Overall performance. Left four columns: synthetic data (NLL, μ-MSE, σ-MSE, smoothness); right two columns: real-world data (NLL, smoothness).

                     NLL        μ-MSE      σ-MSE     smoothness |  NLL     smoothness
  NSVM               3.932e-2   -2.393e-3  6.178e-4  4.322e-3   |  -2.184  3.505e-3
  GARCH(1,1)         6.905e-2   7.594e-3*  8.408e-4  4.616e-3   |  -1.961  6.659e-3
  GJR-GARCH(1,1,1)   6.491e-2   7.594e-3*  7.172e-4  4.426e-3   |  -2.016  4.967e-3
  EGARCH(1,1)        5.913e-2   7.594e-3*  8.332e-4  4.546e-3   |  -2.001  5.451e-3
  ARCH(5)            7.577e-2   7.594e-3*  1.610e-3  5.880e-3   |  -1.955  7.917e-3
  TARCH(1,1,1)       6.365e-2   7.594e-3*  7.284e-4  4.727e-3   |  -2.012  3.399e-3
  APARCH(1,1,1)      6.187e-2   7.594e-3*  9.115e-4  4.531e-3   |  -2.014  4.214e-3
  AGARCH(1,1)        6.311e-2   7.594e-3*  9.543e-4  4.999e-3   |  -2.008  5.847e-3
  NAGARCH(1,1,1)     1.134e-1   7.594e-3*  9.516e-4  4.904e-3   |  -2.020  5.224e-3
  IGARCH(1,1)        6.751e-2   7.594e-3*  9.322e-4  4.019e-3   |  -1.999  4.284e-3
  IAVGARCH(1,1)      6.901e-2   7.594e-3*  7.174e-4  4.282e-3   |  -1.984  4.062e-3
  FIGARCH(1,d,1)     6.666e-2   7.594e-3*  1.055e-3  5.045e-3   |  -2.002  5.604e-3
  MCMC-stochvol      0.368      7.594e-3*  3.956e-2  6.421e-4   |  -0.909  1.511e-3
  GPVOL              1.273      7.594e-3*  6.457e-1  4.142e-2   |  -2.052  5.739e-3
  NAIVE              2.037e-1   8.423e-3   3.515e-3  2.708e-2   |  -0.918  7.459e-3
*the same results obtained from the AR(20) mean models

The results show that NSVM achieves higher accuracy for modelling heteroskedastic time series on various metrics: NLL shows the fitness of the model under the likelihood measure; the smoothness indicates that NSVM obtains a more robust representation of the latent volatility; μ-MSE and σ-MSE in the synthetic data experiment imply the ability to recognise the underlying patterns of both trend and volatility, which in fact verifies our claim of NSVM's high flexibility and rich expressive power for volatility (as well as trend) modelling and forecasting compared with the baselines. Although the improvement comes at the cost of a longer training time before convergence, this can be mitigated by applying parallel computing techniques as well as more advanced network architectures or training procedures.

The newly proposed NSVM outperforms the standard econometric models GARCH(1,1), EGARCH(1,1), GJR-GARCH(1,1,1) and some other variants, as well as the MCMC-based model stochvol and the recent GP-based model GPVOL. Apart from the higher accuracy NSVM obtained, it provides us with the ability to simply generalise univariate time series analysis to multivariate cases by extending network dimensions and manipulating the covariance matrices. Furthermore, it allows us to implement and deploy a similar framework in other applications, for example signal processing and denoising. The shortcoming of NSVM compared to GPVOL is that the training procedure is offline: for short-term prediction, the experiments have shown the accuracy, but for long-term forecasting the parameters need retraining, which is rather time-consuming. An online algorithm for inference is left for future work.

Specifically, our NSVM outperforms GARCH(1,1) on 142 out of 162 stocks on the metric of NLL. In particular, NSVM obtains -2.111, -2.044, -2.609 and -1.939 on the stocks corresponding to Fig. 2(b) and Fig. 4(a), (b) and (c) respectively, each of which is better than that of GARCH (0.3433, 0.589, 0.109 and 0.207 lower on NLL).

"}, {"section_index": "12", "section_name": "6 CONCLUSION", "section_text": "In this paper, a novel volatility model, NSVM, has been proposed for stochastic volatility estimation and forecast. We integrated statistical models with RNNs, leveraged the characteristics of each model, organised the dependences between random variables in the form of graphical models, implemented the mappings among variables and parameters through RNNs, and finally established a powerful stochastic recurrent model with universal approximation capability. The proposed architecture comprises a pair of complementary stochastic neural networks: the generative network and the inference network. The former models the joint distribution of the stochastic volatility process with both observable and latent variables of interest; the latter provides the approximate posterior, i.e. an analytical approximation to the (intractable) conditional distribution of the latent variables given the observable ones. The parameters (and consequently the underlying distributions) are learned (and inferred) via variational inference, which maximises the lower bound on the marginal log-likelihood of the observable variables.
Our NSVM has presented higher accuracy compared to GARCH(1,1), EGARCH(1,1) and GJR-GARCH(1,1,1), as well as GPVOL, for volatility modelling and forecasting on synthetic data and real-world financial data. Future work on NSVM would be to incorporate well-established models such as ARMA/ARIMA and to investigate the modelling of seasonal time series and correlated sequences.

As we have known, for models that evolve explicitly in terms of the squares of the residuals (e_t² = (x_t - μ_t)²), e.g. GARCH, the multi-step-ahead forecasts have closed-form solutions, which means the forecasts can be computed analytically.

On the other hand, for models that are not linear or do not explicitly evolve in terms of e_t², e.g. EGARCH (linear but not evolving in terms of e_t²) and our NSVM (nonlinear and not evolving in terms of e_t²), the closed-form solutions are absent, and thus the analytical forecast is not available. We instead use a simulation-based forecast, which uses a random number generator to simulate draws from the predicted distribution and builds up a pre-specified number of paths of the variances at 1 step ahead. The draws are then averaged to produce the forecast of the next step. An n-step-ahead forecast requires n iterations of the 1-step-ahead forecast.

NSVM is designed as an end-to-end model for volatility estimation and forecast. It takes the prices of stocks as input and outputs the distribution of the price at the next step. It learns the dynamics using RNNs, leading to an implicit, highly nonlinear formulation where only simulation-based forecasts are available. In order to obtain reasonably accurate forecasts, the number of draws should be relatively large, which is computationally very expensive. Moreover, the number of draws increases exponentially as the forecast horizon grows, so it would be infeasible to forecast several time steps ahead. We plan to investigate the characteristics of NSVM's long-horizon forecasts and to design a model-specific sampling method for efficient evaluation in the future.

"}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Torben G Andersen and Tim Bollerslev. Answering the skeptics: Yes, standard volatility models do provide accurate forecasts. International Economic Review, pp. 885-905, 1998.

Christopher M Bishop. Pattern recognition. Machine Learning, 128, 2006.

Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity.
Journal of Econometrics, 31(3):307-327, 1986.

George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. Time Series Analysis: Forecasting and Control. John Wiley & Sons, 2015.

Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Timothy Dozat. Incorporating Nesterov momentum into Adam. 2015.

Robert F Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica: Journal of the Econometric Society, pp. 987-1007, 1982.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Justin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610, 2014.

Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pp. 577-585, 2015.

John C Cox, Jonathan E Ingersoll Jr, and Stephen A Ross. A theory of the term structure of interest rates. Econometrica: Journal of the Econometric Society, pp. 385-407, 1985.

Robert F Engle and Kenneth F Kroner. Multivariate simultaneous generalized ARCH. Econometric Theory, 11(01):122-150, 1995.

Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. arXiv preprint arXiv:1605.07571, 2016.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Steven L Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. Review of Financial Studies, 6(2):327-343, 1993.

Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.

John C Hull. Options, Futures, and Other Derivatives. Pearson Education India, 2006.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.

Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015.

Josef Stoer and Roland Bulirsch. Introduction to Numerical Analysis, volume 12. Springer Science & Business Media, 2013.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.

Yue Wu, José Miguel Hernández-Lobato, and Zoubin Ghahramani. Gaussian process volatility model.
In Advances in Neural Information Processing Systems, pp. 1044-1052, 2014.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681, 1997.

"}, {"section_index": "14", "section_name": "A COMPLEMENTARY DISCUSSIONS OF NSVM", "section_text": "In this appendix section we present detailed derivations of NSVM, specifically the parameter learning and calibration, and the covariance reparameterisation.

"}, {"section_index": "15", "section_name": "A.1 LEARNING PARAMETERS / CALIBRATION", "section_text": "Given the observations X, the objective of learning is to maximise the marginal log-likelihood of X given Φ, in which the posterior is involved. However, as discussed in the previous subsection, the true posterior is usually intractable, which means exact inference is difficult. Hence, approximate inference is applied instead of exact inference, following (Kingma & Welling, 2013; Rezende et al., 2014). We represent the marginal log-likelihood of X in the following form:

ln p_Φ(X) = E_{q_Ψ(Z|X)}[ln p_Φ(X, Z) - ln q_Ψ(Z|X)] + KL[q_Ψ(Z|X) || p_Φ(Z|X)]
          ≥ E_{q_Ψ(Z|X)}[ln p_Φ(X, Z) - ln q_Ψ(Z|X)]   (as KL ≥ 0),   (30)

where the expectation term E_{q_Ψ(Z|X)}[ln p_Φ(X, Z) - ln q_Ψ(Z|X)] is referred to as the variational lower bound L[q; X, Φ, Ψ] of the approximate posterior q_Ψ(Z|X). The lower bound is essentially a functional with respect to the distribution q, parameterised by the observations X and the parameter sets Φ, Ψ of both the generative and the inference model. In theory, the marginal log-likelihood is maximised by optimising the lower bound L[q; X, Φ, Ψ] with respect to Φ and Ψ.

We apply the factorisations in Eqs. (10) and (19) to the integrand within the expectation of Eq. (30):

ln p_Φ(X, Z) - ln q_Ψ(Z|X)
  = Σ_t [ln N(x_t; μ_Φ^x(x_<t, z_≤t), Σ_Φ^x(x_<t, z_≤t)) + ln N(z_t; μ_Φ^z(z_<t), Σ_Φ^z(z_<t)) - ln N(z_t; μ̃_Ψ^z(z_<t, x_<t), Σ̃_Ψ^z(z_<t, x_<t))]
  = -(1/2) Σ_t [ln det Σ_t^z + (μ̃_t^z + Ã_t^z ε_t^z - μ_t^z)ᵀ (Σ_t^z)^{-1} (μ̃_t^z + Ã_t^z ε_t^z - μ_t^z)
        + ln det Σ_t^x + (x_t - μ_t^x)ᵀ (Σ_t^x)^{-1} (x_t - μ_t^x) - ln det Σ̃_t^z] + const.

As there is usually no closed-form solution for the expectation (Eq. (30)), we have to estimate the expectation by applying sampling methods to the latent variable z_t through time in accordance with the causal dependences. We utilise the reparameterisation of z_t as shown in Eq. (22), such that we sample the corresponding auxiliary standard variable ε_t rather than z_t itself and compute the value of z_t on the fly. This ensures that gradient-based optimisation techniques remain applicable, as the reparameterisation isolates the model parameters of interest from the sampling procedure. By sampling N sample paths, the estimator of the lower bound is defined as the average over paths:

L̂[q; X, Φ, Ψ] = (1/N) Σ_{n=1}^{N} [ln p_Φ(X, Z^{(n)}) - ln q_Ψ(Z^{(n)}|X)].
(22) such thi\ne sample the corresponding auxiliary standard variable \u20ac, rather than z; itself and compute th\nilue of z, on the fly. This ensures that the gradient-based optimisation techniques are applicabl\n; the reparameterisation isolates the model parameters of interest from the sampling procedure. B\nimpling NV sample paths, the estimator of the lower bound is defined as the average of paths:\nwhere Aj(Aj)7 = Dg and \u20ac7 ~ N(0, I.) is parameter-independent and considered as constant\nwhen calculating derivatives.\nThe corresponding covariance matrix and its determinant is obtained using Woodbury identity and\n\nmoatriy determinant lemmra:\ny=D'!-pD'viit+v'pD'v) vip\nTo calculate the deviation A for the factorisation of covariance matrix \u00a9 = AA', we first con-\nsider the rank-1 perturbation where K = 1. It follows that V = v is a column vector, and\nI+V'D\"V =1+4v!D>~'visareal number. A particular solution of A is obtain:\nD>? \u2014[y7(1\u2014 Vm)|D7 vv! D>?\nObserve that VV\" = kel DEV, the perturbation of rank JX is essentially the superposition of\nK perturbations of rank 1. Therefore, we can calculate the deviation A iteratively, an algorithm is\nprovided to demonstrate the procedure of calculation. The computational complexity for rank-/\nperturbation remains to be O(M) given K < M.\nAlgorithm | gives the detailed calculation scheme.\nAlgorithm 1 Calculation of rank-/ perturbation of precision matrices\n\nInput: The original diagonal matrix D; The ra KK perturbation V = {v,...,v\u00ab}\nOutput: A such that the factorisation AA\u2019 = =(D+VV\")-t holds\n\n1: A@) = D>?\n\n2,74=0\n\n3: while i < K do\n\nWW = MADAL)W\n\nmi) = (1+ %%)7*\n\nAgs = Aw ya A YIMAWALHDHAW\nA=Ax)\n3: while i < K do\n4 YW) = AWAY\n5S. Mi) = CHO\n\nVIM AWA\n\n\u00a5) Aw)"}, {"section_index": "17", "section_name": "B MORE CASE STUDIES", "section_text": "NSVM obtains \u20142.044, \u20142.609 and \u20141.939 on the stocks corresponding to Fig 4(a), (b) and (c\nrespectively, each of which is better than the that of GARCH (0.589, 0.109 and 0.207 lower or\nNLL).\nThe reason of the drops in Fig 4(b) and (c) seems to be that NSVM has captured the jumps and\ndrops of the stock price using its nonlinear dynamics and modelled the sudden changes as part of\nthe trend: the estimated trend \u201cmu\u201d goes very close to the real observed price even around the jumps\nand drops (see the upper figure of Fig 4(b) and (c) around step 1300 and 1600). The residual (i.e.\ndifference between the real value of observation and the trend of prediction) therefore becomes quite\nsmall, which lead to a lower volatility estimation.\nOn the other hand, for the baselines, we adopt AR as the trend model, which is a relatively simple\nlinear model compared with the nonlinear NSVM. AR would not capture the sudden changes anc\nleave those spikes in the residual; GARCH then took the residuals as input for volatility modelling\nresulting in the spikes in volatility estimation.\nvariance\n\n0.3\n\n0.2\n\n01\n\n0.0\n\n500\n\n1000\ntimestep\n\n1500 201\n\n\u2014 _ garch's prediction\n\u2014 _nsvm's prediction\n\n\u2014 ground truth variance\n\n500\n\n1000\n\n1500 201\n\n}00\n(a) Synthetic time series prediction II. (up) The data and the predicted j.* and bounds ,\neroundtruth data variance and the corresponding prediction from GARCH(1,1) and NSVM.\n\n. 
(down) The\nvariance\n\n1000\ntimestep\n\n2000\n\n0.4\n\n0.3\n\n0.2\n\n01\n\n0.0\n\n\u2014 ground truth variance\n\u2014 _ garch's prediction\n\u2014 _nsvm's prediction\n\n500\n\n1000\ntimestep\n\n2000\n(b) Synthetic time series prediction IV. (up) The data and the predicted :* and bounds pv\neroundtruth data variance and the corresponding prediction from GARCH(1,1) and NSVM.\n\n*. (down) The\nFigure 3: A case study of synthetic time series prediction.\nvariance\n\n500\n\n1000\n\n1500 2000 2500\ntimestep\n\n1500 2000 2500\nvariance\n\n500\n\n1000\n\ntimestep\n\n1500\n\n2000\n\n2500\n\n\u2014 garch's prediction\n\u2014 _nsvm's prediction\n\n500\n(b) Real-world stock price prediction III. (up) The data and the predicted jz* and bounds jz\u201d + o*. (down) The\nvariance prediction from GARCH(1,1) and NSVM. The prediction of NSVM is more smooth and stable than\nthat of GARCH(1,1), also yielding smaller NLL.\n(c) Real-world stock price prediction IV. (up) The data and the predicted jz* and bounds .* + o*. (down) The\nvariance prediction from GARCH(1,1) and NSVM. The prediction of NSVM is more smooth and stable than\nthat of GARCH(1,1), also yielding smaller NLL.\nFigure 4: A case study of real-world stock time series prediction.\n\u2018a) Real-world stock price prediction II. (up) The data and the predicted jz* and bounds jz* + 0\u201d. (down) The\nvariance prediction from GARCH(1,1) and NSVM. The prediction of NSVM is more smooth and stable than\nhat of GARCH{(1,1), also yielding smaller NLL.\nvariance\n\n0.16\n\ntimestep\n\n0.14\n0.12\n0.10\n0.08\n0.06\n0.04\n0.02\n\ngarch\u2019s prediction\nnsvm's prediction\n\n0.004\n\n500\n\n1000\n\n1500\n\n2000\n\n2500"}]
BycCx8qex
[{"section_index": "0", "section_name": "DRAGNN: A TRANSITION-BASED FRAMEWORK FOR\nDYNAMICALLY CONNECTED NEURAL NETWORKS", "section_text": "Lingpeng Kong\nCarnegie Mellon University\nPittsburgh, PA\nIn this work, we present a compact, modular framework for constructing new\nrecurrent neural architectures. Our basic module is a new generic unit, the Transi-\ntion Based Recurrent Unit (TBRU). In addition to hidden layer activations, TBRUs\nhave discrete state dynamics that allow network connections to be built dynami-\ncally as a function of intermediate activations. By connecting multiple TBRUs,\nwe can extend and combine commonly used architectures such as sequence-to-\nsequence, attention mechanisms, and recursive tree-structured models. A TBRU\ncan also serve as both an encoder for downstream tasks and as a decoder for\nits own task simultaneously, resulting in more accurate multi-task learning. We\ncall our approach Dynamic Recurrent Acyclic Graphical Neural Networks, or\nDRAGNN. We show that DRAGNN is significantly more accurate and efficient\nthan seq2seq with attention for syntactic dependency parsing and yields more ac-\ncurate multi-task learning for extractive summarization tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "To apply deep learning models to structured prediction, machine learning practitioners must address\ntwo primary issues: (1) how to represent the input, and (2) how to represent the output. The seg2seq\nencoder/decoder framework (Kalchbrenner & Blunsom| 2013} {Cho et al. 2014} Sutskever et al.|\nproposes solving these generically. In its simplest form, the encoder network produces a\nfixed-length vector representation of an input, while the decoder network produces a linearization\nof the target output structure as a sequence of output symbols. Encoder/decoder is state of the art\nfor several kev tasks in natural lancuage processing. such as machine translation (Wu et al.|/2016).\nHowever, fixed-size encodings become less competitive when the input structure can be explicitly\nmapped to the output. In the simple case of predicting tags for individual tokens in a sentence, state-\nof-the-art taggers learn vector representations for each input token and predict output tags from\n\nthose (Ling et al. 2015} Huang et al.| 2015} Andor et al.| 2016). When the input or output is a\n\nsyntactic parse tree, networks that explicitly operate over the compositional structure of the network\n\ntypically outperform generic representations (Dyer et al 2015 [2016).\ngnificantly im\n\nImplictly learned mappings via attention mechanisms can si prove the performance\n\nof sequence-to-sequence (Bahdanau et al} 2015} |Vinyals et al. 2015). but require runtime that\u2019s\n\nquadratic in the input size.\nIn this work, we propose a modular neural architecture that generalizes the encoder/decoder concept\nto include explicit structure. Our framework can represent sequence-to-sequence learning as well as\nmodels with explicit structure like bi-directional tagging models and compositional, tree-structured\nmodels. Our core idea is to define any given architecture as a series of modular units, where con-\nnections between modules are unfolded dynamically as a function of the intermediate activations\nproduced by the network. 
These dynamic connections represent the explicit input and output struc-\nture produced by the network for a given task.\n{chrisalberti, andor,bogatyy, djweiss}@google .com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We build on the idea of transition systems from the parsing literature 2006), which linearize\nstructured outputs as a sequence of (state, decision) pairs. Transition-based neural networks have\n\nrecently been applied to a wide variety of NLP problems; [Dyer et al.|(2015);/Lample et al.] (2016);\nTransition Based Recurrent Unit (TBRU)\n\nBro IM Tagging \\o TDM)\n\nRLACK= LONI \\c TRAY)\n\nnetwork activations\n\nDiscrete Recurrence fen Network |\n~ + {0\nstate Input embeddings Cell H\n\nEncoder/Decoder (2 TBRU) Y1 Y2 Y3 Y4 Y5\n\nY1 Y2 Y3 Y4 Y5\n\nY1\n\nY2 Y3 Y4 Y5\nFigure 1: High level schematic of a Transition-Based Recurrent Unit (TBRU), and common network\narchitectures that can be implemented with multiple TBRUs. The discrete state is used to compute\nrecurrences and fixed input embeddings, which are then fed through a network cell. The network\npredicts an action which is used to update the discrete state (dashed output) and provides activations\nthat can be consumed through recurrences (solid output). Note that we present a slightly simplifiec\n\nversion of Stack-LSTM (Dyer et al.|[2015) for clarity.\nKiperwasser & Goldberg (2016); Zhang et al. (2016); Andor et al.|(2016), among others. We gen\n\neralize these approaches with a new basic module, the Transition-Based Recurrent Unit (TBRU\nwhich produces a vector representation for every transition state in the output linearization (Figur\n{ip. These representations also serve as the encoding of the explicit structure defined by the state:\nFor example, a TBRU that attaches two sub-trees while building a syntactic parse tree will also pre\nduce the hidden layer activations to serve as an encoding for the newly constructed phrase. Multip!\nTBRUs can be connected and learned jointly to add explicit structure to multi-task learning setup\nand share representations between tasks with different input or output spaces (Figure]2).\nThis inference procedure will construct an acyclic compute graph representing the network archi\ntecture, where recurrent connections are dynamically added as the network unfolds. We therefor\ncall our approach Dynamic Recurrent Acyclic Graphical Neural Networks, or DRAGNN.\nDRAGNN has several distinct modeling advantages over traditional fixed neural architectures. Un:\nlike generic seq2seq, DRAGNN supports variable sized input representations that may contain ex:\nplicit structure. Unlike purely sequential RNNs, the dynamic connections ina DRAGNN can spar\narbitrary distances in the input space. Crucially, inference remains linear in the size of the input\nin contrast to quadratic-time attention mechanisms. Dynamic connections thus establish a compro:\nmise between pure seq2seq and pure attention architectures by providing a finite set of long-rang\u00a2\ninputs that \u2018attend\u2019 to relevant portions of the input space. Unlike recursive neural networks (Soche\nfet al.|[2010} 2011) DRAGNN can both predict intermediate structures (such as parse trees) and uti\nlize those structures in a single deep model, backpropagating downstream task errors through the\nintermediate structures. 
Compared to models such as Stack-LSTM and SPINN\n\n[Bowman et al.|(2016), TBRUs are a more general formulation that allows incorporating dynamicall;\nstructured multi-task learning (Zhang & Weiss\\|2016) and more varied network architectures.\nIn sum, DRAGNN is not a particular neural architecture, but rather a formulation for describing\nneural architectures compactly. The key to this compact description is a new recurrent unit\u2014the\nTBRU\u2014which allows connections between nodes in an unrolled compute graph to be specified\ndynamically in a generic fashion. We utilize transition systems to provide succinct, discrete repre-\nsentations via linearizations of both the input and the output for structured prediction. We provide <\nstraightforward way of re-using representations across NLP tasks that operate on different structures.\nWe demonstrate the effectiveness of DRAGNN on two NLP tasks that benefit from explicit struc\nture: dependency parsing and extractive sentence summarization (Filippova & Altun| |2013). Firs\nwe show how to use TBRUs to incrementally add structure to the input and output of a \u201cvanilla\nseq2seq dependency parsing model, dramatically boosting accuracy over seq2seq with no additiona\ncomputational cost. Second, we demonstrate how the same TBRUs can be used to provide structure!\nintermediate syntactic representations for extractive sentence summarization. This yields better ac\ncuracy than is possible with the generic multi-task seq2seq (Dong et al] 2075} [Luong et al] OTE\napproach. Finally, we show how multiple TBRUs for the same dependency parsing task can b\n\nstacked together to produce a single state-of-the-art dependency parsing model.\nTransition Based Recurrent Unit (TBRU) Bi-LSTM Tagging (3 TBRU)\n\nY1\nnetwork activations\nDiscrete Recurrence fen Network a\u201c\n~~ state Input embeddings Cell\nEncoder/Decoder (2 TBRU) Y1 Y2 Y3 Y4 Y5\n\nStack-LSTM (2 TBRU)\nY2 Y3 Y4 Y5 Y1 Y2 Y3 Y4 Y5\nSummarization\n\nRight-to-left LSTM TBRU\n\nDRAGNN w/ Intermediate representations\n\nSummarization\n\nDynamic\nunrolled links\n\nIntermediate representation\n\nExtractive summarization TBRU\n\nUniformed man Taughed] |\nprop |*|_keer |*|_keer | |\n\nUniformed\nFigure 2: Using TBRUs to share fine-grained, structured representations. Top left: A high level viev\nof multi-task learning with DRAGNN in the style of multi-task seq2seq (Luong et al.|[2016). Botton\nleft: Extending the \u201cstack-propagation\u201d [Zhang & Weiss] (2016) idea to included dependency pars\u00ab\ntrees as intermediate representations. Right: Unrolled TBRUs for each setup for a input fragmen\n\u201cUniformed man laughed\u201d, utilizing the transition systems described in Section|4]\nWe use transition systems to map inputs x into a sequence of output symbols, d; ...d,,. For the pur-\nposes of implementing DRAGNN, transition systems make explicit two desirable properties. First,\nwe stipulate that the output symbols represent modifications of a persistent, discrete state, which\nmakes book-keeping to construct the dynamic recurrent connections easier to express. Second, tran-\nsition systems make it easy to enforce arbitrary constraints on the output, e.g. the output should\nproduce a valid tree.\nFormally, we use the same setup as (2016), and define a transition system T = {S, A, t}\nA set of states S(x).\nA special start state st \u20ac S().\nA set of allowed decisions A(s, x) for all s \u20ac S.\n\nA transition function t(s. d. 
x) returning a new state s\u2019 for ar\nFor brevity, we will drop the dependence of x in the functions given above. Throughout this worl\nwe will use transition systems in which all complete structures for the same input x have the sam\nnumber of decisions n(z) (or n for brevity), although this is not necessary.\nWe now formally define how to combine transition systems with recurrent networks into what we\ncall a transition based recurrent unit (TBRU). A TBRU consists of the following:\n@ ASC OF States OL).\ne A special start state st \u20ac S(x).\ne A set of allowed decisions A(s, x) for all s \u20ac S.\n\ne A transition function \u00a2(s, d, x) returning a new state s\u2019 for any decision d \u20ac A(s,:\n\nsyaiity we will dran the deneandenre af \u00bb in the functiane aiven ahnuve Thranchant thic wrt\nA complete structure is then a sequence of decision/state pairs (51, d1) .. . (Sn, dn) such that s; = st,\nd; \u20ac A(s;) fori =1...n, and 5:41 = t(s;, d;). We will now define recurrent network architectures\nthat operate over these linearizations of input and output structure.\ne A transition system 7,\ne An input function m(s) that maps states to fixed-size vector representations, for exam\nan embedding lookup operation for features from the discrete state:\nm(s): SH R*\nr Dependency Parse: d=Right arc incorrect) + ater\n\nT+ Tz}\n\nH i 7\" =I 1 on Monday\n{ { f\u00b0 flower H\n\nisms] | fester] i fester] 5... estar = ZARA 0 bm |\n\nCel H Cel H Cel Cell Bob gave Aloe a\u2019 are co\u201d ake Pay |\n\n: : oh d= Shit conect) '\n1 1 jave) (flower) | Monday\nae ae (Sie\n: F\n\nC2) &\nDependency Parse:\n\nd = Right arc (incorrect)\n\nTy + Te 4 hi | Batter\nism] | [estmsmue ist MP ismim | 22 5 \u2014 i\nCel H Cel Cel Cell ook Que Alco a\u201d prey fers Monday H\nH \u2018Transition state: : H\n1 \u2018Stack 1 Buffer d = Shift (correct) '\n| 9 (Gas) | on Moncey | naay\nwe ale ope | oe\u2019 hile \u00ab\n\n7\nFigure 3: Left: TBRU schematic. Right: Dependency parsing example. For the given gold depen-\ndency parse tree and a arc-standard transition state with two sub-trees on the stack is shown. From\n\nthis state, two possible actions are also shown (Shift and Right arc). To reproduce the tree, the Shift\naction should be taken.\nT sequentially tags each input token, where s; = {1,...,d;\u20141}, and A is the set of po\ntags. We call this the tagger transition system.\n\nm(s;) = x;, the word embedding for the next token to be tagged.\n\nr(s;) = {i \u2014 1} to connect the network to the previous state.\n\nRNN is a single instance of the LSTM cell.\nExample 2. Parsey McParseface. The open-source syntactic parsing model o: (2016\ncan be defined in our framework as follows:\nInference with TBRUs. Given the above, inference in the TBRU proceeds as follows:\n1. Initialize s; = st.\n2. Fori=1,..., 72:\n(a) Update the hidden state: h; + RNN(m(s;), {hy | 7 \u20ac r(si)}).\n\n(b) Update the transition state: d; \u2014 argmax,e 4.) whi, siz < t(j, di)\nA schematic overview of a single TBRU is presented in Figure [3] By adjusting RNN, r, and T.\nTBRUs can represent a wide variety of neural architectures.\na, a a ae\n\nr(s): Sy P{l,...,i-1},\nwhere P is the power set. Note that in general |r(s)| is not necessarily fixed and can vary\nwith s. We use r to specify state-dependent recurrent links in the unrolled computation\ngraph.\ne ARNN cell that computes a new hidden representation from the fixed and recurrent inputs:\nh 2. RNN (m/e) Sh. 
lj crfle)l)\nr(s):S# P{l,\nh, + RNN(m(s), {h; | i \u20ac r(s)}).\nExample 1. Sequential tagging RNN. Let the input x = {x),...,x,} be a sequence of word\nembeddings, and the output be a sequence of tags d;,...,d,. Then we can model a simple LSTM\ntagger as follows:\nT is the arc-standard transition system (Figure [3), so the state contains all words an\npartially built trees on the stack as well as unseen words on the buffer.\n\nm(s;) is the concatenation of 52 feature embeddings extracted from tokens based on thei\npositions in the stack and the buffer.\n\nr(s;) = {} is empty, as this is a feed-forward network.\n\nRNN is a feed-forward multi-layer perceptron (MLP).\nWhile TBRUs are a useful abstraction for describing recurrent models, the primary motivation for\nthis framework is to allow new architectures by combining representations across tasks and compo-\nsitional structures. We do this by connecting multiple TBRUs with different transition systems vi:\nthe recurrence function r(s). We formally augment the above definition as follows:\nExample 3. \u201cInput\u201d transducer TBRUs via no-op decisions. We find it useful to define TBRUs\neven when the transition system decisions don\u2019t correspond to any output. These TBRUs, which we\ncall no-op TBRUs, transduce the input according to some linearization. The simplest is the shift-\nonly transition system, in which the state is just an input pointer s; = {7}, and there is only one\ntransition which advances it: \u00a2(s;,-) = {i + 1}. Executing this transition system will produce <\nhidden representation h; for every input token.\nExample 4. Encoder/decoder networks with TBRUs. We can reproduce the encoder/decoder\nframework for sequence tagging by using two TBRUs: one using the shift-only transition system to\nencode the input, and the other using the tagger transition system. For input x = {x1,...,Xn}, we\nconnect them as follows:\nWe observe that the tagger TBRU starts at step n after the shift-only TBRU finishes, that y; is ;\nfixed embedding vector for the output tag j, and that the tagger TBRU has access to both the fina\nencoding vector h,, as well as its own previous time step h,,.;_1.\ne Left to right: T = shift-only, m(s;) = x;, r(s;) = {i \u2014 1}.\ne Right to left: T = shift-only, m(8n4i) = Xn\u2014i, T(Sn4i) = {n +i-1}.\ne Tagger: T = tagger, m(san+i) = {}, r(Sansi) = {i, 2n \u2014 i}.\ne Left to right: T = shift-only, m(s;) = x;, r(s;) = {i \u2014 1}.\ne Right to left: T = shift-only, m(8n4i) = Xn\u2014i, T(Sn4i) = {n +i-1}.\ne Tagger: T = tagger, m(S2n4i) = {}, r(S2n4i) = {i,2n \u2014 i}.\nWe observe that the network cell in the tagger TBRU takes recurrences only from the bi-directional\nrepresentations, and so is not recurrent in the traditional sense. See Figure|1|for an unrolled example.\nExample 5. Multi-task bi-directional tagging. Here we observe that it\u2019s possible to add addi-\ntional annotation tasks to the bi-directional TBRU stack from Example 4 simply by adding more\ninstances of the tagger TBRUs that produce outputs from different tag sets, e.g. parts-of-speech vs.\nmorphological tags. Most important, however, is that any additional TBRUs have access to all three\n\nearlier TBRUs. 
This means that we can support the \u201cstack-propagation\u201d (Zhang & Weiss} |2016)\n\nstyle of multi-task learning simply by changing r for the last TBRU:\ne Traditional wae task: (ean) = {i,2n \u2014 i}\ne Stack-prop: r(s3n+i) ={ 4 , 2n-i, 2n+i }\n\nLeft-to- Tent Richt-to-left Tagger TBRU\nTraditional mn task: * RUssn4s) = = i 2n \u2014 i}\nStack-prop: r(s \u00bb 2n-1, Anti\nprop: r(s3n+4i) ={ 4 , }\n\nLeft-to-1 faeright Right-to-left_ Tagger TBRU\nRemark: the raison d\u2019\u00e9tre of DRAGNN. This example highlights the primary advantage of ou\nformulation: a TBRU can serve as both an encoder for downstream tasks and as a decoder for it.\nown task simultaneously. This idea will prove particularly powerful when we consider syntacti\nparsing, which involves compositional structure over the input. For example, consider a no-o}\nTBRU that traverses an input sequence x),...,X,, in the order determined by a binary parse tree\nthis transducer can implement a recursive tree-structured network in the style of\nwhich computes representations for sub-phrases in the tree. In contrast, with DRAGNN, we cat\n1. We execute a list of T TBRU components, one at a time, so that each TBRU advances a\nglobal step counter. Note that for simplicity, we assume an earlier TBRU finishes all of its\nsteps before the next one starts execution.\n\n2. Each transition state from the 7\u2019th component s7 has access to the terminal states from\nevery prior transition system, and the recurrence function r(s\u201d) for any given component\ncan pull hidden activations from every prior one as well.\nExample 4. Bi-directional LSTM tagger. With three TBRUs, we can implement a simple bi-\ndirectional tagger. The first two run the shift-only transition system, but in opposite directions. The\nfinal TBRU runs the tagger transition system and concatenates the two representations:\nUnrolled graph (incomplete): Recurrent inputs:\n\nSUBTREE(s, Sp) SUBTREE(s,S,) | INPUT(s)\ngave flower {on Monday\nBob Alice a __ pretty i\nStack | Buffer\n\nftt of ft ff t f\n\nTBRU1 Bob gave Alice a pretty flower on Monday\nFigure 4: Detailed schematic for the compositional dependency parser used in our experiments.\nThe first TBRU consumes each input word right-to-left; the second uses the arc-standard transition\nsystem. Note that each \u201cShift\u201d action causes the TBRU1\u2014>TBRU2 link to advance. The dynamic\nrecurrent inputs to the given state are highlighted; the stack representations are obtained from the\nlast \u201cReduce\u201d action to modify each sub-tree.\nuse the arc-standard parser directly to produce the parse tree as well as encode sub-phrases int\nrepresentations.\nFor a given parser state s;, we compute two types of recurrence:\nExample 7. Extractive summarization pipeline with parse representations. To model extrac-\ntive summarization, we follow (2016) and use a tagger transition system with two tags:\n\u201cKeep\u201d and \u201cDrop.\u201d However, whereas (2016) use discrete features of the parse tree,\nwe can utilize the SUBTREE recurrence function to pull compositional, phrase-based representa-\ntions of tokens as constructed by the dependency parser. This model is outlined in Figure|2] A full\nspecification is given in the Appendix."}, {"section_index": "3", "section_name": "3.2 HOW TO TRAIN A DRAGNN", "section_text": "Given a list of TBRUs, we propose the following learning procedure. 
We assume training data\nconsists of examples x along with gold decision sequences for one of the TBRUs in the DRAGNN.\n'This composition function is similar to that in the constituent parsing SPINN model\nbut with several key differences. Since we use TBRUs, we compose new representations for \u201cShift\u201d actions a:\nwell as reductions, we take inputs from other recurrent models, and we can utilize subtree representations in\ndownstream tasks.\nSUBTREE(s, Sp) SUBTREE(s,S,) | INPUT(s)\n, ave flower {on Monday\n\u201c (VN Nb\nBob Alice a __ pretty i\nOO wax | Buffer\n\nrpp OFF\n\nBRU 1 Bob gave Alice a pretty flower on Monday\nExample 6. Compositional representations from arc-standard dependency parsing. We use\nthe arc-standard transition system ( 6) to model dependency trees. The system maintains\ntwo data structures as part of the state s: an input pointer and a stack (Figure Bp. Trees are built\nbottom up via three possible attachment decisions. Assume that the stack consists of S = {A, B},\nwith the next token being C. We use So and Sj to refer to the top two tokens on the stack. Then the\ndecisions are defined as:\ne Shift: Push the next token on to the stack: S = {A, B, C}, and advance the input pointer\ne Left arc + label: Add an arc A <jqp\u00a2; B, and remove A from the stack: S = {B}.\ne Right arc + label: Add an arc A \u2014),5-; B, and remove B from the stack: S = {A}.\n\u00a9 ripur(si) = {INPUT(s;)}, where INPUT returns the index of the next input token.\n\u00a9 Ysrack(Si) = {SUBTREE(s;, So), SUBTREE(s,$1)}, where SUBTREE(S,1) is a functiot\nreturning the index of the last decision that modified the i\u2019th token:\nWe show an example of the links constructed by these recurrences in Figure 4] and we investigate\nvariants of this model in Section|4| This model is recursively compositional according to the decision\ntaken by the network: when the TBRU at step s; decides to add an arc_A + B for state, the\nactivations h; will be used to represent that new subtree in future decisions\nParsing TBRU recurrence, r(s;) C yl,...,2 +25 Parsing Accuracy (%)\n\nInput links Recurrent edges News Questions Runtime\nL {n+i-1} 27.3 70.1 O(n)\n{n} {SUBTREE(s;, So), SUBTREE(s;,51)} 36.0 75.6 O(n)\nAttention {n+i-1} 76.1 84.8 O(n?)\nAttention {SUBTREE(s;,So),SUBTREE(s;,51)} 89.0 91.9 O(n?)\nINPUT(s;) {n+i-1} 87.1 89.7 O(n)\n\nINPUT(s;) {SUBTREE(s;, So), SUBTREE(s;,51)} 90.9 92.1 O(n)\nTable 1: Dynamic links enable much more accurate, efficient linear-time parsing models on the\nTreebank Union dev set. We vary the recurrences r to explore utilizing explicit structure in the\nparsing TBRU. Utilizing the explicit INPUT(s;) pointer is more effective and more efficient than <\nquadratic attention mechanism. Incorporating the explicit stack structure via recurrent links further\nimproves performance.\nL(x, dN 41:n4ni9) = SO log P(N 4; | din, @yyan isi)\n\na\nwhere @ are the combined parameters across all TBRUs. We observe that this objective is locally\n\nnormalized (Andor et al.||2016), since we optimize the probabilities of the individual decisions in\n\nthe gold sequence.\nThe remaining question is where do the decisions d, ... dy come from. There are two options here:\nthey can either come as part of the gold annotation (e.g. if we have joint tagging and parsing data), or\nthey will be predicted by unrolling the previous components (e.g. 
when training stacked extractive\nsummarization model, the parse trees will be predicted by the previously trained parser TBRU).\nWhen training a given TBRU, we unroll an entire input sequence and then use backpropagatior\nthrough structure 6) to optimize (Ip. To train the whole system on a set of C\ndatasets, we use a similar strategy to ; we sample a target task\nc,1 << C, from a pre-defined ratio, and take a stochastic optimization step on the objective ot\nthat task\u2019s TBRU. In practice, task sampling is usually preceded by a deterministic number of pre-\n\ntraining steps, allowing, for example, to schedule a certain number of tagger training steps before\nrunning any parser training steps."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we evaluate three aspects of our approach on two NLP tasks: English dependency\nparsing and extractive sentence summarization. For English dependency parsing, we primarily use\nthe the Union Treebank setup from|Andor et al]2079). By evaluating on both news and questions\ndomains, we can separately evaluate how the model handles naturally longer and shorter form text.\nOn the Union Treebank setup there are 93 possible actions considering all arc-label combinations.\nFor extractive sentence summarization, we use the dataset of|Filippova & Altun|{2013).\n\nnews collection is used to heuristically generate compression instances. The final corpus contains\nabout 2.3M compression instances, but since we evaluated multiple tasks using this data, we sub-\nsampled the training set to be comparably sized to the parsing data (~60K training sentences). The\ntest set contains 160K examples. We implement our method in TensorFlow, using mini-batches\nof size 4 and following the averaged momentum training and hyperparameter tuning procedure of\n\nWeiss et al.|(2015).\nWe explore the impact of different types of recurrences on dependency parsing in Table i} In\nthis setup, we used relatively small models: single-layer LSTMs with 256 hidden units, taking\nNote that, at a minimum, we need such data for the final TBRU. Assuming given decisions d, ... dy\nfrom prior components |... T\u20141, we define a log-likelihood objective to train the T\u2019th TBRU along\nits gold decision sequence d*,,,,...,d*,,.., conditioned on prior decisions:\nModel Structure Multi-task? A(%) F1(%) LAS (%)\nRight-to-left |\u00bb Summarize N 28.93 79.75 -\nRightto-ett| [Lett-to-rght >| summarize 29.51 80.03 -\nRight-toet _\u00bb| Parse ay Summarize 30.07 80.31 89.42\nRight-to-let |\u2014\u00bb| Parse a Summarize 30.56 80.74 89.13\nTable 2: Single- vs. multi-task learning with DRAGNN on extractive summarization. \u201cA\u201d is full\nsentence accuracy of the extraction model, \u201cFl\u201d is per-token F1 score, and \u201cLAS\u201d is labeled parsing\naccuracy on the Treebank Union News dev set. Both multi-task models that utilize the parsing date\noutperform the single-task approach, but the model that uses parses as an intermediate representatior\nin the vein of Zhang & Weiss] (2016) (Figure[2) makes better use of the data. Note that the locally\nnormalized model from Andor et al. (2016) obtains 30.50% accuracy and 78.72% F1 on the test se\nwhen trained on 100x more data.\n32-dimensional word or output symbol embeddings as input to each cell. In each case, the pars-\ning TBRU takes input from a right-to-left shift-only TBRU. 
Under these settings, the pure en-\ncoder/decoder seq2seq model simply does not have the capacity to parse newswire text with any\ndegree of accuracy, but the TBRU-based approach is nearly state-of-the-art at the same exact com-\nputational cost. As a point of comparison and an alternative to using input pointers, we also im-\nplemented an attention mechanism within DRAGNN. We used the dot-product formulation from\nParikh tl] 2016, where r(s;) in the parser takes in all of the shift-only TBRU\u2019s hidden states and\nRNN aggregates over them.\nWe evaluate our approach on the summarization task in Table[2] We compare two single-task LSTM\n\ntagging baselines against two multi-task approaches: an adaptation of (2016) and the\nstack-propagation idea of (2016). In both multi-task setups, we use a right-to-\n\nleft shift-only TBRU to encode the input, and connect it to both our compositional arc-standara\ndependency parser and the \u201cKeep/Drop\u201d summarization tagging model.\nIn both setups we do not follow seq2seq, but utilize the INPUT function to connect output deci-\nsions directly to input token representations. However, in the stack-prop case, we use the SUBTREE\nfunction to connect the tagging TBRU to the parser TBRU\u2019s phrase representations directly (Figure\n. We find that allowing the compressor to directly use the parser\u2019s phrase representations signif:\nicantly improves the outcome of the multi-task learning setup. In both setups, we pretrained the\nparsing model for 400K steps and tuned the subsequent ratio of parser/tagger update steps using <\ndevelopment set."}, {"section_index": "5", "section_name": "4.3. DEEP STACKED BI-DIRECTIONAL PARSING", "section_text": "Here we propose a continuous version of the bi-directional parsing model of/Attardi & Dell\u2019 Orlet\nfirst, the sentence is parsed in the left-to-right order as usual; then a right-to-left transitio\nsystem analyzes the sentence in reverse order using addition features extracted from the left-to-rig]\nparser. In our version, we connect the right-to-left parsing TBRU directly to the phrase represer\ntations of the left-to-right parsing TBRU, again using the SUBTREE function. Our parser has th\nsignificant advantage that the two directions of parsing can affect each other during training. Du\ning each training step the right-to-left parser uses representations obtained using the predictions \u00ab\nthe left-to-right parser. Thus, the right-to-left parser can backpropagate error signals through th\nleft-to-right parser and reduce cascading errors caused by the pipeline.\nZhang & Weiss p01\nTable 3: Deep stacked parsing compared to state-of-the-art on PTB. x indicates that additional re-\nsources beyond the Penn Treebank are used. Our model is roughly comparable to an ensemble of\nmultiple Stack-LSTM models, and the most accurate without any additional resources.\nOur final model uses 5 TBRU units. Inspired by|Zhang & Weiss| (2016), a left-to-right POS tagging\nTBRU provides the first layer of representations. Next, we run two shift-only TBRUs, one in each\n\ndirection, to provide representations to the parsers. Finally, we connect the left-to-right parser to the\nright-to-left parser using links defined via the SUBTREE function. 
The result (Table 3) is a State-of-\nthe-art dependency parser, yielding the highest published accuracy for a model trained solely on the\nPenn Treebank with no additional resources."}, {"section_index": "6", "section_name": "5 CONCLUSIONS", "section_text": "We presented a compact, modular framework for describing recurrent neural architectures. We eva!\nuated our dynamically structured model and found it to be significantly more efficient and accurat\nthan attention mechanisms for dependency parsing and extractive sentence summarization in bot\nsingle- and multi-task setups. While we focused primarily on syntactic parsing, the framework prc\nvides a general means of sharing representations between tasks. There remains low-hanging fru\nstill to be explored: in particular, our approach can be globally normalized with multiple hypothese\nin the intermediate structure. We also plan to push the limits of multi-task learning by combir\ning many different NLP tasks, such as translation, summarization, tagging problems, and reasonin\ntasks, into a single model."}, {"section_index": "7", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Kuzman Ganchev, Michael Collins, Dipanjan Das, Slav Petrov, Aliaksei Severyn, Chris\nDyer, and Noah Smith for their useful feedback and discussion while preparing this draft.\nDaniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev,\nSlav Petrov, and Michael Collins. Globally normalized transition-based neural networks. In\n\nProceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp.\n2442-2452, 2016.\n\nGiuseppe Attardi and Felice Dell\u2019Orletta. Reverse revision and linear tree combination for depen-\ndency parsing. In Proceedings of Human Language Technologies: The 2009 Annual Conference\nof the North American Chapter of the Association for Computational Linguistics, Companion\nVolume: Short Papers, pp. 261-264. Association for Computational Linguistics, 2009.\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointh\nlearning to align and translate. JCLR, 2015.\nDev Test\nUAS LAS UAS_ LAS\n93.08 90.89 92.8 90.8\nLe po Paso) 94.01 91.93 93.72 91.83\n(Above, but with pretrained word2vec)* 94.07 92.06 94.09 92.12\nBi-LSTM, graph-based (Kiperwasser & Goldberg} |2016) - - 93.10 91.00\n(Dyer et al.|/2015) - = 93.10 90.90\n20 Stack LSTMs (Kuncoro et al.|/2016)* = - 94.51 92.57\nGlobally normalized, transition-based (Andor et al. (2016)* - - 94.61 92.79\nSamuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, an\nChristopher Potts. A fast unified model for parsing and sentence understanding. ACL, 2016.\nKatja Filippova and Yasemin Altun. Overcoming the lack of parallel data in sentence compression.\nIn EMNLP, pp. 1481-1491. Citeseer, 2013.\nNal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. EMNLP, 2013.\nEliyahu Kiperwasser and Yoav Goldberg. Simple and accurate dependency parsing using bidirec-\ntional lstm feature representations. ACL, 2016.\nGuillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer\nNeural architectures for named entity recognition. NAACL-HTL, 2016.\nJiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eudard Hovy. When are tree structures necessary\nfor deep learning of representations? EMNLP, 2015.\nMinh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task\nsequence to sequence learning. JCLR, 2016.\nJoakim Nivre. 
Inductive dependency parsing. Springer, 2006.\nAnkur P Parikh, Oscar Tackstr\u00e9m, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention\nmodel for natural language inferencfne. EMNLP., 2016.\nRichard Socher, Christopher D. Manning, and Andrew Y. Ng. Learning continuous phrase represen-\ntations and syntactic parsing with recursive neural networks. In NJPS-2010 Deep Learning and\nUnsupervised Feature Learning Workshop, 2010.\nRichard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. Dy-\nnamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in\nNeural Information Processing Systems. pp. 801-809. 2011.\nIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.\nIn Advances in neural information processing systems. pp. 3104\u20143112. 2014.\nKai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations\nfrom tree-structured long short-term memory networks. ACL, 2015.\nDaxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. Multi-task learning for mul-\ntiple language translation. In Proceedings of the 53rd Annual Meeting of the ACL and the 7th\nInternational Joint Conference on Natural Language Processing, pp. 1723-1732, 2015.\nChris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. Transition-based\ndependency parsing with stack long short-term memory. pp. 334\u2014-343, 2015.\nAdhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A Smith. Distilling\nan ensemble of greedy dependency parsers into one mst parser. EMNLP, 2016.\nYonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey,\nMaxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google\u2019s neural machine trans-\nlation system: Bridging the gap between human and machine translation. arXiv preprint\narXiv: 1609.08144, 2016.\nMeishan Zhang, Yue Zhang, and Guohong Fu. Transition-based neural word segmentation. In\nProceedings of the 54nd Annual Meeting of the Association for Computational Linguistics, 2016.\nYuan Zhang and David Weiss. Stack-propagation: Improved representation learning for syntax. In\nProc. ACL, 2016."}]
HJIY0E9ge
[{"section_index": "0", "section_name": "A SIMPLE YET EFFECTIVE METHOD TO PRUNE\nDENSE LAYERS OF NEURAL NETWORKS", "section_text": "Mohammad Babaeizadeh, Paris Smaragdis & Roy H. Campbell\n{mb2,paris, rhc}@illinois.edu.edu\nNeural networks are usually over-parameterized with significant redundancy in\nthe number of required neurons which results in unnecessary computation and\nmemory usage at inference time. One common approach to address this issue\nis to prune these big networks by removing extra neurons and parameters while\nmaintaining the accuracy. In this paper, we propose NoiseOut, a fully automated\npruning algorithm based on the correlation between activations of neurons in the\nhidden layers. We prove that adding additional output neurons with entirely random\ntargets results into a higher correlation between neurons which makes pruning by\nNoiseOut even more efficient. Finally, we test our method on various networks\nand datasets. These experiments exhibit high pruning rates while maintaining the\naccuracy of the original network."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Neural networks and deep learning recently achieved\n\ncomputer vision (Krizhevsky et al.|(2012);|He et a\n\nnatural language processing (Mikolov et al.\nUsing large and oversized networks in these tasks is\n\na)\n\n)). A rule of thumb for obtaining usefu\nrs that can fit the training data\nusually obvious and therefore the size of the neural n\n\n(Heaton|(2008)) which do not guarantee an optimal si\n\nto overcome overfitting is to choose an over-sized net\n\n(2015)), speech recognition (Graves et a\n\n). However, tl\n\nstate-of-the-art solutions to many problems ir\n\n2013))\nind reinforcement learning (Silver et al.\n\nacommon practice. Such oversized networks\n\noverfit on the training dataset while having poor generalization on the testing data (Sabc\n\ngeneralization is to use the smallest number:\n\n). Unfortunately, this optimal size is not\networks is determined by a few rules-of-thumt\nze for a given problem. One common approach\nwork and then apply regularization (\nhese techniques do not reduce the number o\n\nparameters and therefore do not resolve the hich demand of resources at test time.\nAnother method is to start with an oversized network and then use pruning algorithms to remove\nredundant parameters while maintaining the network\u2019s accuracy (Augasta & Kathirvalavakumar\n}). These methods need to estimate the upper-bound size of a network, a task for which there are\nadequate estimation methods (King & Hu}2009)). If the size of a neural network is bigger than what\nis necessary, in theory, it should be possible to remove some of the extra neurons without affecting\nits accuracy. To achieve this goal, the pruning algorithm should find neurons which once removed\nresult into no additional prediction errors. However, this may not be as easy as it sounds since all the\nneurons contribute to the final prediction and removing them usually leads to error.\nIt is easy to demonstrate this problem by fitting an oversized network on a toy dataset. Figure [I\nshows a two-dimensional toy dataset which contains two linearly separable classes. 
Hence, only one\nhidden neuron in a two-layer perceptron should be enough to classify this data and any network with\nmore than one neuron (such as the network in Figure[2}a) is an oversized network and can be pruned\nHowever, there is no guarantee that removing one of the hidden neurons will maintain the network\u2019s\nperformance. As shown in the example in Figure{I| removing any of the hidden neurons results into a\nmore compact, but under-performing network. Therefore, a more complicated process is required for\npruning neural networks without accuracy loss."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "2.0\n\n\u20141.0\n\n-1.5\n\n.\ni\n\n.\nni\n\ni\n\n0\n\u20142.0\n\n-15\n\n-1.0\n\n-0.5\n\n0.0\nXo\nFigure 1: Effect of pruning on accuracy. Bold\nline represents discriminator learned by a 2-2-1\nMLP (FigureP2}a) on a toy data set. Dash line and\ndotted line show the results after pruning one of\nthe hidden neurons. As it can be seen removing a\nhidden neuron will result into accuracy drop.\nOur goal in this paper is two-fold. First, we introduce NoiseOut, a pruning method based on the\ncorrelation between activations of the neurons. Second, we propose an approach which enforces the\nhigher correlation between activations of neurons. Since the effectiveness of NoiseOut hinges on\nhigh correlations between neuron activations, the combination of these two methods facilitates more\naggressive pruning.\nOptimal Brain Damage (LeCun et al.]([989)) and Optimal Brain Surgeon (Hassibi & Stork|(1993))\nprune networks based on the Hessian of the loss function. It is shown that such pruning is more\neffective and more accurate than earlier magnitude-based pruning such as weight decay (Hanson\n& Pr (1989). However, the necessary second-order derivatives require additional computational\nresources.\nIn this section, we describe the details of the proposed method called NoiseOut. First, we show how\nthis method can prune a single neuron and then how it can prune a full network, one neuron at a time\nFigure 2: a) a simple 2-2-1 MLP; b) the same\nnetwork with one additional noise output. All\nthe hidden units have a linear activation while\nthe output neurons use sigmoid as activation\nfunction. The gray neuron is a noise output\nwhich changes its target in each iteration.\nRecently, replacing the fully connected layers with other types of layers has been utilized to reduced\nthe number of parameters in a neural network. Deep fried convnets (Yang et al.| (2015)) replaces these\nlayers with kernel methods while the GoogLenet (2015)) and Network in Network\narchitecture (Lin et al.|(2013)) replace them with global average pooling. Alternatively, [Han et al.\nproposed a pruning method which learns the import connections between neurons, pruning\nthe unimportant connections, and then retraining the remaining sparse network.\nBesides pruning, other approaches have been proposed to reduce the computation and memory\nrequirements of neural networks. HashNets({Chen et al.|(2015)) reduce the storage requirement of\nneural networks by randomly grouping weights into hash buckets. These techniques can be combined\nwith pruning algorithms to achieve even better performance. 
As an example, Deep Compression(|H:\n(2016)) proposed a three stage pipeline: pruning, trained quantization, and Huffman coding, to\nreduce the storage requirement of neural networks.\nThe key idea in NoiseOut is to remove one of the two neurons with strongly correlated activations.\nThe main rationale behind this pruning is to keep the signals inside the network as close to the original\n\nnetwork as possible. To demonstrate this, assume there exists u, v,/ such that \\o(n, n?)| =ls\n\nno \u2014 + 8B. where pO is the activation of i\u201c\u201d neuron in the /\u00a2\u201d layer. By definition:\nAlt) => ni wr + pt) > 20 w!\") + nw), 4 no) w?), + gtd\nine\n= Ss rw? + ah) wl! t bur), t Ow! a + ot?)\n\n= Ss rw 4 + Ai) (ow 4 + Wy, k) t (Bw, t yeh)\nideal case. In this ideal scenario, removing one of the neurons results into no change in accuracy since\nthe final output of the network will stay the same. In non-ideal cases, when the highest correlated\nneurons are not strongly correlated, merging them into one neuron may alter the accuracy. However,\ncontinuing the training after the merge may compensate for this loss. If this does not happen, it\nmeans that the removed neuron was necessary to achieve the target accuracy and the algorithm cannot\ncompress the network any further without accuracy degradation.\nNoiseOut follows the same logic to prune a single neuron using the following steps:\n. For each i, j,1 calculate p(h;\u2019, h;\u2019)\n. Find u,v,l = arg max |p( (rn, hY)|\nuvluxv\n. Calculate a, 8 := arg min (rn? \u2014 ahi) \u2014 8)\na8\n. Remove neuron u in layer /\n. For each neuron k in el +1:\n\n- Update the weight wl! ) = w, + aw\u201d,\n\n- Update the bias oh) = ot) 4 + Bw)\nThe key element for successful pruning of neural networks using NoiseOut is the strong correlatio\nbetween activation of the neurons. Essentially, a higher correlation between these activations mean\nmore efficient pruning. However, there is no guarantee that back-propagation results in correlate\nactivations in a hidden layer. In this section, we propose a method to encourage higher correlatio\nby adding additional output nodes, called noise outputs. The targets for noise outputs will random!\nchange in each iteration based on a predefined random distribution. We show that adding nois\noutputs to the output layer intensifies the correlation between activation in the hidden layers whic\nwill subsequently make the pruning task more effective."}, {"section_index": "3", "section_name": "3.2.1 EFFECT OF ADDING NOISE OUTPUTS TO THE NETWORK", "section_text": "To demonstrate the effect of adding noise outputs to the network, let us reconsider the toy example\ndescribed previously in Figure[]] this time with some additional noise outputs in the output layer as\nshown in Figure[2}b. The result of training this network has been shown in Figure|3] As seen in this\nfigure, the activation of the two hidden neurons has become highly correlated, and each hidden unit\nconverged to the optimal discriminant by itself. This means that either of the extra neurons in the\nhidden layer can be removed without loss of accuracy.\nnetwork as possible. Io demonstrate this, assume there exists u, v,/ such that |p(ha\u2019, Aw\u2019 )| = 1 >\nThis means that neuron wu can be removed without affecting any of the neurons in the next layer,\nsimply by adjusting the weights of v and neurons\u2019 biases. 
Note that max [o(n, ni) = lisan\nZT\n\na) descriminators learned by hidden\nneurons in Figure 2-a\n\nb) descriminators learned by hidden\nneurons in Figure 2-b\n\nc) result after pruning\n\nHidden Neuron 1\n= - Hidden Neuron 2\n\nr \u2014\u2014\n\nHidden Neuron 1\n-2 == Hidden Neuron 2\n\n2\n\n-20 15-10-05 00 05 10 15\nTo\n\n2.0\n\n-20 -15 -10-05 00 05 10 15\nTo\n\n2.0\n\n-20 -15 -10 -05 00 05 10 15 2.0\n\nTo\nThis claim can be proved formally as well. The key to this proof is that neural networks are\ndeterministic at inference time. In other words, the network generates a constant output for a\nconstant input. For noise output, since the target is independent of the input, the training algorithm\n\nfinds an optimal constant value that minimizes the expected error for every input. This objective can\nbe presented as:\nm\n\nminC(f(%W), IY, \u00a5)) = min (C(fm(X3W),\u00a5) + 0(fi(XW) Y))\n\ni=0\nwhere W is the adjustable parameters (i.e. weights), X is the input, Y is the target, Y; is the targ\u00e9\nof noisy outputs at iteration i, C is the cost function, m is the number of iterations. f;(X;W) an\n\nfi (X;W) are the outputs of the neural network in the original and noise outputs respectively, ;\niteration 7. Note that Y; changes in each iteration according to a random distribution P..\nThe first part of Equation [2]represents the common objective of training a neural network, while the\nsecond part has been added because of the noise outputs. It is possible to adjust the effect of noise\noutputs based on Ps. For instance by adding more noise outputs (in the case of Binomial distribution)\nor adjusting the variance (for Gaussian distribution). Another way of this adjustment is introducing a\nnew multiplier for the second part of the cost. Although Y; changes in each iteration, the constant\nvalue that the network infers for any given input would be the same e.g. 6, due to the independent of\nPy from X. Therefore:\nmem anelsesiry [\u00a5,\u00a5i])\n\nm\n\nlis ays\n= 1 r(x: W) - Y)+ min =(fi(X;W) - \u00a5;)\"\nwore WQ),We) 4 2\nF 2 . 1s 2\n= X;W)-Y)\u00b0 + =(f(X;W)-90\nwating FOGW)\u2014Y)\"+ min 5(f(XsW) \u2014 8)\nFigure 3: Comparison between discriminators learned with and without noise neurons. As it can be\nseen with noise neurons the activation of hidden neurons is more correlated; a) discriminators defined\nby each hidden neuron in Figure[2}a; b) discriminators defined by each hidden neuron in Figure [2}b:\nc) final discriminator after pruning one hidden neuron in Figure|2}b.\nfin(X;W) = JixnO\nwhere J} \u00bb is the matrix of ones of size 1 x n (n is the number of samples in the dataset). The actual\nvalue of 6 can be estimated for any given network architecture. As an example, let W, W) and\nW) be the weights of the first hidden layer, the output layer and the noisy neurons in a 2-2-1 MLP\nnetwork respectively (Figure 2}b). Assuming linear activation in the hidden neurons and mean square\nerror (MSE) as the cost function, Equation|2]can be rewritten as:\nabs(correlation)\n\nabs(correlation)\n\nmean correlation\n\ncorrelation distribution\n\n0.95\n10 a\n\n0.90 = v 5\n0.85 =e | |\n0.80 B06 Y |\n075 \u2014 No_Noise S04 ! 
|\n\n\u2014 4 } | |\n0.70 Constant 02 |\n\n\u2014 Binomial | |\n0.65 |\n\n\u2014 Gaussian 0.0 !\n0.60\n\n0 20 40 60 80 100 No_Noise Constant Binomial Gaussian\n1.00\n09s 10 -\n0.90 _ 08 I\n0.85 s Y I\n$06 y 4 it\n0.80 2\n0.75 3\u00b0 ( j\n0.70 02 Y\n0.65\n0.0\n0.60\n0 100 200 300 400 500 No_Noise Constant Binomial Gaussian\nepoch epoch\nIn this particular case, 6 can be calculated using derivatives of the cost functions in Equation[3\nThis means that in this particular network with MSE as the cost function, the final error will be\nminimized when the network outputs the expected value of targets in noise outputs (E[Ps-]) for any\ngiven input.\nTo demonstrate how outputting a constant value affects the weights of a network, let\u2019s consider the\nnetwork in Figure|2}b. In this case, the output of noisy output will be:\nF(X;W) = xwows =hPwl?} + nPul} = 6\n\nQ) _\n=>h; (0-\u2014 ro we, 1)\nw a A)\n\nwo!\n21 ;\nPD yy = 880 | \u2014 PD nD\nWii\nThe same results can be achieved empirically. Since the output of the noise outputs will converge to\nE [Py] , it seems that there may not be any difference between different random distributions with\nthe same expected value. Therefore, we tested different random distributions for E [Py] with the\nsame E [Pry | , on the network shown in Figure|2}b. These noise distributions are as follows:\nFigure 4: Effect of noise outputs to the correlation of neurons activation in the hidden layers. The top\nrow shows the correlation of two hidden neurons in the network of Figure[2]while the bottom row is\nthe correlation between the two neurons on the first hidden layer of a 6 layer MLP (2-2-2-2-2-2-1).\nThe left column represents the mean correlation during training of the network for 100 times. In right\ncolumn, yellow is the distribution of correlations at the end of the training and small red line shows\nthe median. As it can be seen in these graphs, adding noise outputs improves the correlation between\nneurons in the hidden layer.\n= aisle\n\nall\n\nal (X;W) -\n\n(f\n1\n2\n=E[f\n\ny\n\nF(X:W) i (\n\n[f(XsW)] = E[Py]\n\n+ 5(FxW) 6)\"\n\n- f(X;W)) =0\nEquation 6 means that the activation of the hidden neurons has a correlation of | or -1. For more\nthan two neurons it can be shown that the output of one neuron will be a linear combination of other\nneurons which means the claim still holds.\nAlgorithm 1 NoiseOut for pruning hidden layers in neural networks\na Dee er er\u201d ee\n\n1: procedure TRAIN(X, Y) > X is input, Y is expected output\n2: W + initialize_weights()\n\n3: for each iteration do\n\n4: Yn < generate_random_noise() D> generate random expected values\n5: Y\u2019 \u00a9 concatenate(Y, Yn)\n\n6: W + back_prop(X,Y\")\n\n7: while cost(W) < threshold do\n\n8: A,B \u2014 find_most_correlated_neurons(W, X)\n\n9: a, B + estimate_parameters(W, X, A, B)\n\n10: W' < remove_neuron(W, A)\n\n11: W' < adjust_weights(W\u2019, B, a, 8)\n\n12: Ww+ew\u2019'\n\n13: return W\nAs it can be seen in the top row Figure [4] in a regular network with no noise unit (shown as\n\nNo_Noise), the correlation between the output of hidden neurons PO pO does not go higher\neur)\n\nthan 0.8, while in the existence of a noise output this value approaches to one, rather quickly. This\n\nmeans that the two hidden neuron are outputting just a different scale of the same value for any given\n\ninput. In this case, NoiseOut easily prunes one of the two neurons.\nThe same technique can be applied to correlate the activation of the hidden neurons in networks\nwith more than two layers. 
The bottom row of Figure 4 shows the correlation between the activations of two hidden neurons in the first layer of a six-layer MLP (2-2-2-2-2-2-1). As can be seen in this figure, adding noise outputs helped the neurons achieve higher correlation compared to a network with no noise output. Binomial noise acts chaotically at the beginning, due to the sudden changes of the expected values in the noise outputs, while Gaussian noise improved the correlation the most in these experiments.

Algorithm 1 NoiseOut, for pruning hidden layers in neural networks

1: procedure TRAIN(X, Y)                        ▷ X is the input, Y is the expected output
2:     W ← initialize_weights()
3:     for each iteration do
4:         Y_N ← generate_random_noise()        ▷ generate random expected values
5:         Y' ← concatenate(Y, Y_N)
6:         W ← back_prop(X, Y')
7:         while cost(W) < threshold do
8:             A, B ← find_most_correlated_neurons(W, X)
9:             α, β ← estimate_parameters(W, X, A, B)
10:            W' ← remove_neuron(W, A)
11:            W' ← adjust_weights(W', B, α, β)
12:            W ← W'
13:     return W

Algorithm 1 shows the final NoiseOut algorithm. For the sake of readability, the algorithm is shown for networks with only one hidden layer, but it can be applied to networks with more than one hidden layer by performing the same pruning on all the hidden layers independently. It can also be applied to convolutional neural networks that use dense layers, in which we often see over 90% of the network parameters (Cheng et al. (2015)).

This algorithm simply repeats the process of removing a single neuron, as described in the previous section. The pruning ends when the accuracy of the network drops below a given threshold. Note that the pruning happens while training.
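The inner pruning step (lines 8-12 of Algorithm 1) can be sketched as follows. This is a hedged NumPy illustration rather than the authors' implementation: for the most correlated pair of hidden neurons A and B, we fit h_A ≈ α·h_B + β, remove A, and fold its contribution into B's outgoing weights and the next layer's bias.

import numpy as np

def prune_once(W1, b1, W2, b2, H):
    # W1, b1: input-to-hidden weights and biases; W2, b2: hidden-to-output.
    # H: (n_samples, n_hidden) hidden activations computed on the inputs X.
    C = np.corrcoef(H.T)
    np.fill_diagonal(C, 0.0)
    A, B = np.unravel_index(np.abs(C).argmax(), C.shape)  # most correlated pair
    alpha, beta = np.polyfit(H[:, B], H[:, A], 1)         # h_A ~= alpha*h_B + beta
    W2 = W2.copy()
    b2 = b2 + beta * W2[A, :]        # absorb the constant part of h_A downstream
    W2[B, :] += alpha * W2[A, :]     # fold A's outgoing weights into B's
    keep = np.arange(H.shape[1]) != A
    return W1[:, keep], b1[keep], W2[keep, :], b2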
"}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "To illustrate the generality of our method we test it on a core set of common network architectures, including fully connected networks and convolutional neural networks with dense layers. In all of these experiments, the only stop criterion is the accuracy decay of the model. We set the threshold for this criterion to match the original accuracy; therefore all the compressed networks have the same accuracy as the original network. For each experiment, different random distributions have been used for P_N, to demonstrate the differences in practice.

Table 1: Results of pruning Lenet-300-100 on MNIST. The accuracy of all the compressed networks is the same as that of the original network.

Method        Noise Outputs  Layer 1 Neurons  Layer 2 Neurons  Parameters  Removed Parameters  Compression Rate
Ground Truth  -              300              100              266610      -                   -
No_Noise      -              23               14               15989       94.00%              16.67
Gaussian      512            20               9                15927       94.02%              16.73
Constant      512            20               7                15105       94.33%              17.65
Binomial      512            19               6                11225       95.78%              23.75
No_Noise      -              13               12               10503       96.06%              20.89
Gaussian      1024           16               7                12759       95.21%              18.58
Constant      1024           18               7                14343       94.62%              17.61
Binomial      1024           19               7                15135       94.32%              25.38

Table 2: Pruning Lenet-5 on MNIST. The accuracy of all the compressed networks is the same as that of the original network.

Method        Noise Outputs  Dense Layer Neurons  Parameters  Removed Parameters  Compression Rate
Ground Truth  -              512                  605546      -                   -
No_Noise      -              313                  374109      38.21%              1.61
Gaussian      512            3                    13579       97.75%              44.59
Constant      512            33                   48469       91.99%              12.49
Binomial      512            26                   40328       93.34%              15.01

Table 1 and Table 2 show the results of pruning Lenet-300-100 and Lenet-5 (LeCun et al. (1998)) on the MNIST dataset. Lenet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, while Lenet-5 is a convolutional network with two convolutional layers and one dense layer. These networks achieve 3.05% and 0.95% error rates on MNIST respectively (LeCun et al. (1998)). Note that in Lenet-5 over 98% of the parameters are in the dense layer, so pruning it can decrease the model size significantly.
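As a sanity check on Table 1, the Ground Truth parameter count and the compression-rate column can be reproduced from the layer sizes alone (a small worked example, not code from the paper; MNIST inputs are 784-dimensional):

n_params = lambda n_in, n_out: n_in * n_out + n_out   # weights plus biases
layers = [(784, 300), (300, 100), (100, 10)]
total = sum(n_params(i, o) for i, o in layers)
print(total)                    # 266610, the Ground Truth row of Table 1
print(round(total / 11225, 2))  # 23.75x compression for the 11225-parameter model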
As can be seen in these tables, NoiseOut removed over 95% of the parameters with no accuracy degradation. Astonishingly, the pruned Lenet-5 achieves a 0.95% error rate with only 3 neurons left in the hidden layer, which reduces the total number of weights in Lenet-5 by a factor of 44. Figure 6 demonstrates the output of these 3 neurons. The graph was generated by recording the activations of the hidden-layer neurons for 1000 examples randomly selected from the MNIST dataset, and then sorting the data by target class. As can be seen in this figure, the three neurons in the hidden layer efficiently encode the output of the convolutional layers into the expected ten classes. These values can then be used by the softmax layer to perform the final classification.

Figure 6: Activation of neurons in a pruned Lenet-5 with only 3 neurons left in the dense layer. The x-axis is populated by 100 random samples from each class of MNIST, sorted by class. The y-axis shows the neuron ID. Note that tanh is the activation function in the hidden layer.

To test the effect of pruning on deeper architectures, we prune the network described in Table 4 on the SVHN dataset. This model, which has over 1 million parameters, achieves 93.39% and 93.84% accuracy on the training set and test set respectively. As can be seen in Table 3, NoiseOut pruned more than 85% of the parameters from the base model while maintaining the accuracy.

Table 4: Base model architecture for SVHN with 1236250 parameters.

Layer    Units  Size  Parameters
conv3    32     3x3   9246
pool1    -      2x2   -
conv4    48     3x3   13872
conv5    48     3x3   20784
conv6    48     3x3   20784
pool2    -      2x2   -
conv7    64     3x3   27712
conv8    64     3x3   36928
conv9    64     3x3   36928
pool3    -      2x2   -
dense    1024   -     1049600
softmax  10     -     10250

Table 3: Pruning the reference network of Table 4 on the SVHN dataset.

Method        Dense Layer Neurons  Parameters  Removed Parameters
Ground Truth  1024                 1236250     -
No_Noise      132                  313030      74.67%
Gaussian      4                    180550      85.39%
Constant      25                   202285      83.63%
Binomial      17                   194005      84.30%

Figure 5: Pruning Lenet-300-100 and Lenet-5 on the MNIST dataset with various accuracy thresholds. The x-axis represents the total number of parameters in the pruned network (including the weights in the convolutional layers), while the y-axis shows the accuracy of the model on the test and training datasets.

To explore the effect of NoiseOut on the test accuracy, we pruned Lenet-300-100 and Lenet-5 on MNIST with multiple accuracy thresholds, using a Gaussian distribution as the target for the noise outputs. In each of these experiments, we measured both the training and the test accuracy. As expected, the results, shown in Figure 5, indicate that lower accuracy thresholds result in more pruned parameters. However, the gap between the training and testing accuracy stays the same. This shows that pruning the network using NoiseOut does not lead to overfitting.

"}, {"section_index": "5", "section_name": "4.3. RELATION TO DROPOUT AND REGULARIZATION", "section_text": "The key point in successful pruning with NoiseOut is a higher correlation between neurons. This goal might seem to contradict techniques designed to avoid overfitting, such as Dropout and regularization. To investigate this, we pruned Lenet-5 in the presence and absence of these features; the results are shown in Figure 7. As can be seen in this figure, Dropout helps the pruning process significantly, while L2-regularization causes more variance. It appears that preventing the co-adaptation of neurons using Dropout also intensifies the correlation between them, which helps NoiseOut remove even more redundant neurons without accuracy loss.

Figure 7: Effect of Dropout and L2-regularization on NoiseOut. The y-axis represents the number of remaining neurons in the dense layer. Note that more than one neuron can be removed in each epoch. In each curve, the bold line is the median of 10 runs and the colored background shows the standard deviation.

"}, {"section_index": "6", "section_name": "5 CONCLUSION", "section_text": "In this paper, we have presented NoiseOut, a simple but effective pruning method that reduces the number of parameters in the dense layers of neural networks by removing neurons with correlated activations during training. We showed how adding noise outputs to the network can increase the correlation between neurons in the hidden layer and hence lead to more efficient pruning. The experimental results on different networks and various datasets validate this approach, achieving significant compression rates without loss of accuracy.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "M Gethsiyal Augasta and T Kathirvalavakumar. Pruning algorithms of neural networks - a comparative study. Central European Journal of Computer Science, 3(3):105-115, 2013.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2016.

Stephen José Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with back-propagation. In Advances in Neural Information Processing Systems, pp. 177-185, 1989.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Jeff Heaton. Introduction to Neural Networks with Java. Heaton Research, Inc., 2008.
Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.

Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann, 1993.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 89, 1989.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Andrew Y Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 78. ACM, 2004.

Russell Reed. Pruning algorithms - a survey. Neural Networks, IEEE Transactions on, 4(5):740-747, 1993.

Devin Sabo and Xiao-Hua Yu. A new pruning algorithm for neural network dimension analysis. In Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on, pp. 3313-3318. IEEE, 2008.

Hong-Jie Xing and Bao-Gang Hu. Two-phase construction of multilayer perceptrons using information theory. Neural Networks, IEEE Transactions on, 20(4):715-721, 2009.

Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1476-1483, 2015."}]
r1S083cgx
[{"section_index": "0", "section_name": "SEQUENCE GENERATION WITH A PHYSIOLOGICALLY PLAUSIBLE MODEL OF HANDWRITING AND RECURRENT MIXTURE DENSITY NETWORKS", "section_text": "Daniel Berio*¹, Memo Akten*¹, Frederic Fol Leymarie¹, Mick Grierson¹, and Réjean Plamondon²

The purpose of this study is to explore the feasibility and potential benefits of using a physiologically plausible model of handwriting as a feature representation for sequence generation with recurrent mixture density networks. We build on recent results in handwriting prediction developed by Graves (2013), and we focus on generating sequences that possess the statistical and dynamic qualities of handwriting and calligraphic art forms. Rather than model raw sequence data, we first preprocess and reconstruct the input training data with a concise representation given by a motor plan (in the form of a coarse sequence of 'ballistic' targets) and corresponding dynamic parameters (which define the velocity and curvature of the pen-tip trajectory). This representation provides a number of advantages, such as enabling the system to learn from very few examples by introducing artificial variability in the training data, and mixing of visual and dynamic qualities learned from different datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recent results have demonstrated that, given a sufficiently large training dataset, Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) Recurrent Mixture Density Networks (RMDNs) (Schuster, 1999) are capable of learning and generating convincing synthetic handwriting sequences. In this study we explore a similar network architecture combined with an intermediate feature representation, given by the parameters of a physiologically plausible model of handwriting: the Sigma Lognormal model (Plamondon, 1995; Plamondon et al., 2014).

In the work by Graves (2013) and subsequent derivations, the RMDN operates on raw sequences of points recorded with a digitizing device. In our approach we preprocess the training data using an intermediate representation that describes a form of "motor program" coupled with a sequence of dynamic parameters that describe the evolution of the pen tip. By doing so, we use a representation that is more concise (i.e. lower in dimensionality), more meaningful (i.e. every data point is a high-level segment descriptor of the trajectory), and resolution independent.

This project stems from the observation that human handwriting results from the orchestration of a large number of motor and neural subsystems, and is ultimately produced with the execution of complex and skillful motions. As such, we seek a representation that abstracts the complex task of trajectory formation away from the neural network, which is then focused on the higher-level task of movement planning. Note that for the scope of this study, we do not implement text-to-handwriting synthesis (Graves, 2013), but rather focus on the task of generating sequences that possess the statistical and dynamic qualities of handwriting, which can be extended to calligraphy, asemic handwriting, drawings and graffiti (Berio & Leymarie, 2015; Berio et al., 2016). In particular, we focus on two distinct tasks: (1) learning and generating motor plans, and (2) given a motor plan, predicting the corresponding dynamic parameters that determine the visual and dynamic qualities of the pen trace. We then go on to show that this modular workflow can be exploited in ways such as: mixing of dynamic qualities between datasets (a form of handwriting "style transfer") as well as learning from small datasets (a form of "one shot learning").

*¹Goldsmiths College, University of London,
Department of Computing. ²École Polytechnique de Montréal, Canada.

The remainder of this paper is organised as follows: in Section 2, after briefly summarising the background context, we briefly describe the Sigma Lognormal model and RMDNs; in Section 3 we present the data preprocessing step and the RMDN models that make up our system; in Section 4 we propose various applications of the system, including learning handwriting representations from small datasets and mixing styles.

Our study is grounded on a number of notions and principles that have been observed in the general study of human movement as well as in the handwriting synthesis/analysis field (known as Graphonomics (Kao et al., 1986)). The speed profile of aiming movements is typically characterised by a "bell shape" that is variably skewed depending on the rapidity of the movement (Nagasaki, 1989; Plamondon et al., 2013). Complex movements can be described by the superimposition of a discrete number of "ballistic" units of motion, which in turn can each be represented by the classic bell-shaped velocity profile, and are often referred to as strokes. A number of methods synthesise handwriting through the temporal superimposition of strokes, the velocity profile of which is modelled with a variety of functions, including sinusoidal functions (Morasso & Mussa Ivaldi, 1995), Beta functions (Lee & Cho, 1998; Bezine et al., 2004) and lognormals (Plamondon et al., 2009). In this study we rely on a family of models known as the Kinematic Theory of Rapid Human Movements, which has been developed by Plamondon et al. in an extensive body of work since the 1990s (Plamondon, 1995; Plamondon et al., 2014). Plamondon et al. show that if we consider a movement to be the result of the parallel and hierarchical interaction of a large number of coupled linear systems, the impulse response of such a system to a centrally generated command asymptotically converges to a lognormal function. This assumption is attractive from a modelling perspective because it abstracts the high complexity of the neuromuscular system in charge of generating movements into a relatively simple mathematical model, which furthermore provides state-of-the-art reconstruction of human velocity data (Rohrer & Hogan, 2006; Plamondon et al., 2013).

A number of methods have used neurally inspired approaches for the task of handwriting trajectory formation (Schomaker, 1992; Bullock et al., 1993; Wada & Kawato, 1993). Similarly to our proposed method, (2012) train a neural network on a preprocessed dataset in which the raw input data is reconstructed in the form of handwriting model parameters. Nair & Hinton (2005) use a sequence of neural networks to learn the motion of two orthogonal mass-spring systems from images of handwritten digits for classification purposes. With a similar motivation to ours, Plamondon & Privitera (1996) use a Self Organising Map (SOM) to learn a sequence of ballistic targets, which describes a coarse motor plan of handwriting trajectories.
Our method builds in particular on the work of Graves (2013), who describes a system that uses recurrent mixture density networks (RMDNs) (Bishop, 1994) extended with an LSTM architecture (Hochreiter & Schmidhuber, 1997) to generate synthetic handwriting in a variety of styles."}, {"section_index": "2", "section_name": "2.1 SIGMA LOGNORMAL MODEL", "section_text": "On the basis of Plamondon's Kinematic Theory (Plamondon, 1995), the Sigma Lognormal (ΣΛ) model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of a discrete number of strokes. With the assumption that curved handwriting movements are made by rotating the wrist, the curvilinear evolution of strokes is described with a circular arc shape. Each stroke is characterised by a variably asymmetric "bell shape" speed profile, which is described with a (3-parameter) lognormal function. The planar evolution of a trajectory is then described by a sequence of virtual targets {v_i}, i = 1, ..., m, which define "imaginary" (i.e. not necessarily located along the generated trajectory) loci at which each consecutive stroke is aimed. The virtual targets provide a low-level description of the motor plan for the handwriting trajectory. A smooth trajectory is then generated by integrating the velocity of each stroke over time. The trajectory smoothness can be adjusted through the activation-time offset of a given stroke with respect to the previous stroke, which is denoted Δt0_i; a smaller time offset (i.e. a greater overlap between lognormal components) results in a smoother trajectory (Fig. 1c). The curvature of the trajectory can be varied by adjusting the central angle of each circular arc, which is denoted θ_i. Equations and further details for the ΣΛ model can be found in Appendix A.

Figure 1: A sequence of virtual targets and the corresponding ΣΛ trajectory. (a) The virtual targets and the corresponding stroke aiming directions. (b) The virtual targets and the corresponding circular arcs. (c) A possible trajectory generated over the given sequence of virtual targets. While the generated trajectory might appear similar to a polynomial curve such as a B-spline, it also describes a smooth and physiologically plausible velocity profile (d).

A sequence of virtual targets provides a very sparse spatial description or "motor plan" for the trajectory evolution. The remaining stroke parameters, Δt0_i and θ_i, define the temporal, dynamic and geometric features of the trajectory, and we refer to those as dynamic parameters."}, {"section_index": "3", "section_name": "2.2 RECURRENT MIXTURE DENSITY NETWORKS", "section_text": "Mixture Density Networks (MDNs) were introduced by Bishop (1994) in order to model and predict the parameters of a Gaussian Mixture Model (GMM), i.e. a set of means, covariances and mixture weights. Schuster (1999) showed that MDNs can be used to model temporal data with RNNs.
The\nauthor used Recurrent Mixture Density Networks (RMDN) to model the statistical properties o\nspeech, and they were found to be more successful than traditional GMMs. usec\nLSTM RMDNs to model and synthesise online handwriting, providing the basis for extensions to th\nmethod, also used in|Ha et al. (2016 ;[Zhang et al.|(2016). Note that in the case of a sequential mode!\nthe RMDN outputs a unique set of GMM parameters for each timestep t, allowing the probabilit\ndistribution to change with time as the input sequence develops. Further details can be found i\n\nAppendix{C. 1]"}, {"section_index": "4", "section_name": "3. METHOD", "section_text": "We operate on discrete and temporally ordered sequences of planar coordinates. Similarly to{Graves\nfrre most of our results come from experiments made on the IAM online handwriting database\n(Marti & Bunke| (2002). However, we have made preliminary experiments with other datasets, such\nas the Graffiti Analysis Database (L as well as limited samples collected in our laboratory\nfrom a user with a digitiser tablet.\nAs a first step, we preprocess the raw data and reconstruct it in the form of NA model parameters\nSection We then train and evaluate a number of RMDN models for two distinct tasks:\nWe then exploit the modularity of this system to conduct various experiments, details of which cat\nfound in Section[4]\n1.\n\n2.\n\nVirtual target prediction. We use the V2V-model for this task. Given a sequence of virtual\ntargets, this model predicts the next virtual target.\n\nDynamic parameter prediction. For this task we trained and compared two model ar-\nchitectures. Given a sequence of virtual targets, the task of these models is to predict the\ncorresponding dynamic parameters. The V2D-model is condititioned only on the previous\nvirtual targets, whereas the A2D-model is conditioned on both the previous virtual targets\nand dvnamic parameters.\nA number of methods have been developed by Plamondon et. al in order to reconstruct UA-model\nparameters from digitised pen input data (O\u2019Reilly & Plamondon| {2008} |Plamondon et al} 2014!\n. These methods provide the ideal reconstruction of model parameters, given a\nhigh resolution digitised pen trace. While such methods are superior for handwriting analysis and\nbiometric purposes, we opt for a less precise method that is less sensitive to\nsampling quality and is aimed at generating virtual target sequences that remain perceptually similar\nto the original trace. We purposely choose to ignore the original dynamics of the input, and base the\nmethod on a geometric input data only. This is done in order to work with training sequences that\nare independent of sampling rate, and in sight of future developments in which we intend to extract\nhandwriting traces from bitmaps, inferring causal/dynamic information from a static input as humans\nare capable of (Edelman & Flash\\{1987}|Freedberg & Gallese} |2007).\nOur method operates on a uniformly sampled input contour, which is then segmented in correspon-\ndence with perceptually salient key points: loci of curvature extrema modulated by neighbouring\ncontour segments (Brault & Plamondon| {1993} {Berio & Leymarie| O15), which gives an initial\nestimate of each virtual target v;. We then (i) fit a circular arc to each contour segment in order to\nestimate the 6; parameters and (ii) estimate the Ato; parameters by analysing the contour curvature in\nthe region of each key point. 
Finally, (iii) we iteratively adjust the virtual target positions to minimise the error between the original trajectory and the one generated by the corresponding ΣΛ parameters. For further details on the ΣΛ parameter reconstruction method, the reader is referred to Appendix B.

Figure 2: ΣΛ parameter reconstruction. (a) The original and reconstructed trajectories. (b) The reconstructed virtual targets. Note that the virtual targets define a shape that is perceptually similar to the input. (c) Aligned and scaled speed profiles of the original (gray) and reconstructed (black) trajectories. Although the dynamic information in the input is ignored (due to uniform sampling), the two speed profiles show similarities in the number and relative height of peaks."}, {"section_index": "5", "section_name": "3.2 DATA AUGMENTATION", "section_text": "We can exploit the ΣΛ parameterisation to generate many variations over a single trajectory which are visually consistent with the original, with a variability similar to the one that would be seen in multiple instances of handwriting made by the same writer (Fig. 3) (Djioua & Plamondon, 2008a; Fischer et al., 2014; Berio & Leymarie, 2015). Given a dataset of n training samples, we randomly perturb the virtual target positions and dynamic parameters of each sample n_p times, which results in a new augmented dataset of size n + n × n_p in which legibility and trajectory smoothness are maintained across samples. This would not be possible on the raw online dataset, as perturbations of each data point would eventually result in a noisy trajectory. A sketch of this perturbation step is given below.
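A minimal sketch of the augmentation step, assuming NumPy; the perturbation magnitudes are illustrative and not the values used in the paper:

import numpy as np

def augment(vt, dt0, theta, n_p=10, pos_scale=0.05, rng=np.random.default_rng(0)):
    # vt: (m, 2) virtual target positions; dt0, theta: (m-1,) dynamic parameters.
    # Yields n_p randomly perturbed copies of one training sample.
    span = np.ptp(vt, axis=0).max()   # overall extent of the trajectory
    for _ in range(n_p):
        yield (vt + rng.normal(0.0, pos_scale * span, vt.shape),
               np.clip(dt0 + rng.normal(0.0, 0.02, dt0.shape), 0.01, None),
               theta + rng.normal(0.0, 0.05, theta.shape))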
The V2V-model is conditioned on a history of virtual targets, and given a new virtual target it predicts the next virtual target (hence the name V2V). Note that each virtual target includes the corresponding pen state: up (not touching the paper) or down (touching the paper). Repeatedly feeding the predicted virtual target back into the model at every timestep allows the model to synthesise sequences of arbitrary length. The implementation of this model is very similar to the handwriting prediction demonstrated by Graves (2013), although instead of operating directly on the digitised pen positions we operate on the much coarser virtual target sequences extracted during the preprocessing step. The details of this model can be found in Appendix C.3.

The goal of the following models is to predict the corresponding dynamic parameters (Δt0_i, θ_i) for a given sequence of virtual targets. We train and compare two model architectures for this task. The V2D model is conditioned on the history of virtual targets, and given a new virtual target it predicts the corresponding dynamic parameters (Δt0_i, θ_i) for the current stroke (hence the name V2D). Running this model incrementally for every stroke of a given virtual target sequence allows us to predict dynamic parameters for each stroke. The implementation of this model is very similar to the V2V-model, and details can be found in Appendix C.4.

At each timestep, the V2D model outputs and maintains an internal memory of a probability distribution over the predicted dynamic parameters. However, the network has no knowledge of the parameters that are actually sampled and used. Hence, dynamic parameters might not be consistent across timesteps. This problem can be overcome by feeding the sampled dynamic parameters back into the model at the next timestep. From a human motor planning perspective this makes sense: for a given drawing style, when we decide the curvature and smoothness of a stroke we take into consideration the choices made in previously executed strokes.

The A2D model predicts the corresponding dynamic parameters (Δt0_i, θ_i) for the current stroke conditioned on a history of both virtual targets and dynamic parameters (i.e. all ΣΛ parameters, hence the name A2D). We use this model in a similar way to the V2D model, whereby we run it incrementally for every stroke of a given virtual target sequence. However, internally, at every timestep the predicted dynamic parameters are fed back into the model at the next timestep along with the virtual target from the given sequence, as sketched below. The details of this implementation can be found in Appendix C.5.
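The incremental use of the A2D model can be sketched as follows. This is a hedged illustration: a2d_step and sample_gmm are hypothetical wrappers around a single RNN timestep and GMM sampling, not functions from the paper.

def predict_dynamics(virtual_targets, a2d_step, sample_gmm, init_state):
    # Run the A2D model stroke by stroke, feeding each sampled (dt0, theta)
    # back in as part of the next input, together with the given virtual target.
    state = init_state
    dyn = (0.0, 0.0)  # assumed neutral initial dynamic parameters
    out = []
    for v in virtual_targets:
        gmm_params, state = a2d_step(v, dyn, state)
        dyn = sample_gmm(gmm_params)  # (dt0_i, theta_i) for the current stroke
        out.append(dyn)
    return out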
Predicting Virtual Targets. In a first experiment we use the V2V model, trained on the preprocessed IAM dataset, to predict sequences of virtual targets. We prime the network by first feeding it a sequence from the test dataset. This conditions the network to predict sequences that are similar to the prime. We can see from the results (Fig. 4) that the network is indeed able to produce sequences that capture the statistical qualities of the priming sequence, such as overall incline, proportions, and oscillation frequency. On the other hand, we observe that among the generated sequences there are often patterns which do not represent recognisable letters or words. This can be explained by the high variability of the samples contained in the IAM dataset, and by the fact that our representation is very concise, with each data point carrying high significance. As a result, the slightest variation in a prediction is likely to cause a large error in the next. To overcome this problem, we train a new model with a dataset augmented with 10x variations, as described in Section 3.2. Due to our limited computing resources¹, we test this method on 1/10th of the dataset, which results in a new dataset of the same size as the original, but with a lower number of handwriting specimens and a number of subtle variations per specimen. With this approach, the network predictions maintain statistical similarity with the priming sequences, and patterns emerge that are more evocative of letters of the alphabet or whole words, with fewer unrecognisable patterns (Fig. 4). To validate this result, we also test the model's performance when trained on 1/10th of the dataset without data augmentation, and the results are clearly inferior to those of the previous two models. This suggests that the data augmentation step is highly beneficial to the performance of the network.

¹We are thus not able to thoroughly test the large network architectures that would be necessary to train on the whole augmented dataset.

Figure 4: Predicting virtual targets. (a) Virtual targets from the test set (not seen during V2V training) used to prime the V2V models. (b) Sequences generated with the V2V model. (c) Sequences generated with the augmented V2V model. Note that the non-augmented V2V model produces more undesired 'errors'. This is more visibly noticeable when rendered with dynamic parameters (Fig. 6).

Predicting Dynamic Parameters. We first evaluate the performance of both the V2D and A2D models on virtual targets extracted from the test set. Remarkably, although the networks have not been trained on these sequences, both models predict dynamic parameters that result in trajectories that are readable, and often similar to the target sample. We settle on the A2D model trained on a 3x augmented dataset, which we qualitatively assess to produce the best results (Fig. 5).

Figure 5: Dynamic parameter prediction. (a) Virtual targets from samples in the test set (not seen during training). (b) The original trajectories, provided for comparison. (c) Trajectories reconstructed using predicted dynamic parameters. (d) Trajectories reconstructed with random dynamic parameters, provided for comparison.

We then proceed to apply the same A2D model to virtual targets generated by the V2V models primed on the test set. We observe that the predictions on sequences generated with the augmented dataset are highly evocative of handwriting and clearly differ depending on the priming sequence (Fig. 6c), while the predictions made with the non-augmented dataset are more likely to resemble random scribbles than human-readable handwriting (Fig. 6b). This further confirms the utility of the data augmentation step.

Figure 6: Trajectories reconstructed with dynamic parameters predicted for the generated virtual targets of Fig. 4, using (b) the non-augmented V2V model and (c) the augmented V2V model.
User defined virtual targets. The dynamic parameter prediction models can also be used in combination with user-defined virtual target sequences (Fig. 7). Such a method can be used to quickly and interactively generate handwriting trajectories in a given style, with a simple point-and-click procedure. The style (in terms of curvature and dynamics) of the generated trajectory is determined by the data used to train the A2D model, and by priming the A2D model with different samples we can apply different styles to the user-defined virtual targets.

Figure 7: Dynamic parameters generated over user-specified virtual targets for the word 'Res', using the A2D model trained on the IAM database.

One shot learning. In a subsequent experiment, we apply the data augmentation method described in Section 3.2 to enable both the virtual target and the dynamic parameter prediction models to learn from a small dataset of calligraphic samples recorded by a user with a digitiser tablet. We observe that with a low number of augmentations (50x) the models generate quasi-random outputs, and seem to learn only the left-to-right trend of the input. With higher augmentation (700x), the system generates outputs that are consistent to the human eye with the input data (Fig. 8). We also train our models using only a single sample (augmented 7000x) and again observe that the model is able to produce novel sequences that are similar to the input sample (Fig. 9). Naturally, the output is a form of recombination of the input, but this is sufficient to synthesise novel outputs that are qualitatively similar to the input. It should be noted that we judge the performance of the one-shot learned models qualitatively, and we may not be testing the full limits of how well the models are able to generalise. On the other hand, these results, as well as the "style transfer" capabilities exposed in the following section, suggest a certain degree of generalisation.

Figure 8: Training with small (n = 4) datasets. (a) Training set with 4 samples. (b) Output of the networks when using 50x data augmentation. (c) Output of the networks with 700x data augmentation.

Figure 9: Training with single training samples. For each row: (a) Training sample (augmented 7000x). (b) Output of the combined V2V/A2D models primed on the training sample. (c) Output without priming.
Style Transfer. Here, with a slight abuse of terminology, we use the term "style" to refer to the dynamic and geometric features (such as pen-tip acceleration and curvature) that determine the visual qualities of a handwriting trajectory. Given a sequence of virtual targets generated with the V2V model trained on one dataset, we can predict the corresponding dynamic parameters with the A2D model trained on another. The result is an output that is similar to one dataset in lettering structure, but possesses the fine dynamic and geometric features of the other. If we visually inspect Fig. 10, we can see that both the sequence of virtual targets reconstructed by the dataset preprocessing method, and the trajectory generated over the same sequence of virtual targets with dynamic parameters learned from a different dataset, are readable. This emphasises the importance of using perceptually salient points along the input for estimating key points in the dataset preprocessing step (Section 3.1).

Figure 10: Style transfer mixing training sets. (a) The priming sequence from the V2V dataset (IAM). (b) A2D is trained on a different, single user-specified sample. (c) The virtual targets from (a) rendered with the dynamic parameters predicted by the A2D model from (b).

Furthermore, we can perform the same type of operation within a single dataset, by priming the A2D model with the dynamic parameters of a particular training example while feeding it the virtual targets of another. To test this we train both (V2V, A2D) models on a corpus containing 5 samples of the same sentence written in different styles, augmented 1400x (Fig. 11). We envision the utility of such a system in combination with virtual targets interactively specified by a user.

Figure 11: Style transfer using priming. The leftmost column shows the entire training set, consisting of 5 user-drawn samples. The top row (slightly greyed out) shows the virtual targets for two of the training examples. Each cell in the table shows the corresponding virtual targets rendered using the dynamic parameters predicted with the A2D model primed with the sample in the corresponding row."}, {"section_index": "6", "section_name": "5 CONCLUSIONS AND FUTURE WORK", "section_text": "We have presented a system that is able to learn the parameters of a physiologically plausible model of handwriting from an online dataset. We hypothesise that such a movement-centric approach is advantageous as a feature representation for a number of reasons. Using such a representation provides a performance that is similar to the handwriting prediction demonstrated by Graves (2013) and Ha et al. (2016), with a number of additional benefits. These include the ability to: (i) capture both the geometry and dynamics of a hand drawn/written trace with a single representation; (ii) express the variability of different types of movement concisely at the feature level; (iii) offer greater flexibility for procedural manipulations of the output; (iv) mix "styles" (applying curvature and dynamic properties from one example to the motor plan of another); (v) learn a generative model from a small number of samples (n < 5); (vi) generate resolution-independent outputs.

The reported work provides a solid basis for a number of future research avenues. As a first extension, we plan to implement the label/text input alignment method described in Graves' original work, which should allow us to synthesise readable handwritten text and also to provide a more thorough comparison of the two methods. Our method relies strongly on an accurate reconstruction of the input in the preprocessing step. Improvements should especially target the parts of the latter method that depend on user-tuned parameters, such as the identification of salient points along the input (which requires a final peak detection pass), and the measurement of the sharpness of the input in correspondence with salient points.
James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13:281-305, 2012.

J-J Brault and Réjean Plamondon. Segmenting handwritten signatures at their perceptually important points. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9):953-957, 1993.

Joeri De Winter and Johan Wagemans. Perceptual saliency of points along the contour of everyday objects: A large-scale study. Perception & Psychophysics, 70(1):50-64, 2008.

Shimon Edelman and Tamar Flash. A model of handwriting. Biological Cybernetics, 57(1-2):25-36, 1987.

Anath Fischer, Réjean Plamondon, Colin O'Reilly, and Yvon Savaria. Neuromuscular representation and synthetic generation of handwritten whiteboard notes. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pp. 222-227. IEEE, 2014.

Tamar Flash and Amir A Handzel. Affine differential geometry analysis of human arm movements. Biological Cybernetics, 96(6):577-601, 2007.

David Freedberg and Vittorio Gallese. Motion, emotion and empathy in esthetic experience. Trends in Cognitive Sciences, 11(5):197-203, 2007.

Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451-2471, 2000.

Martin Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Daniel Bullock, Stephen Grossberg, and Christian Mannes. A neural network model for cursive script production. Biological Cybernetics, 70(1):15-28, 1993.

Sylvain Calinon. A tutorial on task-parameterized movement learning and retrieval. Intelligent Service Robotics, 9(1):1-29, 2016.

Jacob Feldman and Manish Singh. Information along contours and object boundaries. Psychological Review, 112(1):243, 2005.

Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. PhD thesis, 2008.

Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.

Henry SR Kao, Rumjahn Hoosain, and GP Van Galen. Graphonomics: Contemporary Research in Handwriting. Elsevier, 1986.

Francesco Lacquaniti, Carlo Terzuolo, and Paolo Viviani. The law relating the kinematic and figural aspects of drawing movements. Acta Psychologica, 54(1):115-130, 1983.

F Lestienne. Effects of inertial load and velocity on the braking process of voluntary limb movements. Experimental Brain Research, 35(3):407-418, 1979.

Xiaolin Li, Marc Parizeau, and Réjean Plamondon. Segmentation and reconstruction of on-line handwritten scripts. Pattern Recognition, 31(6):675-684, 1998.

Henry W Lin and Max Tegmark. Why does deep and cheap learning work so well? arXiv preprint arXiv:1608.08225, 2016.

Vinod Nair and Geoffrey E Hinton. Inferring motor programs from images of handwritten digits. In Advances in Neural Information Processing Systems, pp. 515-522, 2005.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning (ICML), volume 28, pp. 1310-1318, 2013.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Jo-Hoon Lee and Hwan-Gue Cho. The beta-velocity model for simulating handwritten korean scripts. In Electronic Publishing, Artistic Imaging, and Digital Typography, pp. 252-264. Springer, 1998.

Réjean Plamondon and Claudio M Privitera. A neural model for generating and learning a rapid movement sequence. Biological Cybernetics, 74(2):117-130, 1996.

R. Plamondon et al. Recent developments in the study of rapid human movements with the kinematic theory. Pattern Recognition Letters, 35:225-235, 2014.

Réjean Plamondon and Moussa Djioua. A multi-level representation paradigm for handwriting stroke generation. Human Movement Science, 25(4):586-607, 2006.

Réjean Plamondon, Moussa Djioua, and Christian O'Reilly. Recent developments in the study of rapid human movements with the kinematic theory. Traitement du Signal, 26:377-394, 2009. ISSN 0765-0019.

Réjean Plamondon, Christian O'Reilly, Céline Rémi, and Thérésa Duval. The lognormal handwriter: Learning, performing and declining. Frontiers in Psychology, 4(945), 2013. ISSN 1664-1078.

Brandon Rohrer and Neville Hogan. Avoiding spurious submovement decompositions II. Biological Cybernetics, 94(5):409-414, 2006.

David A Rosenbaum, Loukia D Loukopoulos, Ruud GJ Meulenbroek, Jonathan Vaughan, and Sascha E Engelbrecht. Planning reaches by evaluating stored postures. Psychological Review, 102(1):28, 1995.

Lambert Schomaker. A neural oscillator-network model of temporal pattern generation. Human Movement Science, 11(1):181-192, 1992.

Mike Schuster. Better generative models for sequential data problems: Bidirectional recurrent mixture density networks. In Advances in Neural Information Processing Systems (NIPS), pp. 589-595, 1999.

Freek Stulp and Olivier Sigaud. Many regression algorithms, one unified model: A review. Neural Networks, 69:60-79, 2015.

Ilya Sutskever. Training Recurrent Neural Networks. PhD thesis, University of Toronto, 2013.

Tamas Varga, Daniel Kilchhofer, and Horst Bunke. Template-based synthetic handwriting generation for the training of recognition systems. In Proc. of the 12th Conf. of the International Graphonomics Society, pp. 206-211, 2005.

P Viviani and C Terzuolo. Trajectory determines movement dynamics. Neuroscience, 7(2):431-437, 1982.

Xu-Yao Zhang, Fei Yin, Yan-Ming Zhang, Cheng-Lin Liu, and Yoshua Bengio. Drawing and recognizing chinese characters with recurrent neural network. arXiv preprint arXiv:1606.06539, 2016."}, {"section_index": "7", "section_name": "A SIGMA LOGNORMAL MODEL", "section_text": "The Sigma Lognormal model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of lognormal strokes. The corresponding speed profile Λ_i(t) assumes a variably asymmetric "bell shape", described by the 3-parameter lognormal function:

$$\Lambda_i(t) = \frac{1}{\sigma_i \sqrt{2\pi}\,(t - t_{0i})} \exp\!\left(-\frac{\big(\ln(t - t_{0i}) - \mu_i\big)^2}{2\sigma_i^2}\right)$$

where t_{0i} defines the activation time of a stroke, and the parameters μ_i and σ_i determine the shape of the lognormal function.
μ_i is referred to as the log-time delay and is biologically interpreted as the rapidity of the neuromuscular system's reaction to an impulse generated by the central nervous system (Plamondon et al., 2003); σ_i is referred to as the log-response time and determines the spread and asymmetry of the lognormal.

The curvilinear evolution of strokes is described with a circular arc shape, which results in:

$$\phi_i(t) = \phi_{0i} + \theta_i \int_0^t \Lambda_i(\tau)\, d\tau$$

where θ_i is the central angle of the circular arc that defines the shape of the i-th stroke, and φ_{0i} is its initial direction.

The planar evolution of a trajectory is defined by a sequence of virtual targets {v_i}, i = 1, ..., m, where a trajectory with m virtual targets is characterised by m - 1 circular arc strokes. A ΣΛ trajectory, parameterised by the virtual target positions, is given by:

$$\xi(t) = v_1 + \sum_{i=1}^{m-1} \int_0^t d\tau\, \Lambda_i(\tau)\, \Phi_i(\tau)\, (v_{i+1} - v_i)$$

with

$$\Phi_i(t) = \begin{bmatrix} h(\theta_i)\cos\phi_i(t) & -h(\theta_i)\sin\phi_i(t) \\ h(\theta_i)\sin\phi_i(t) & h(\theta_i)\cos\phi_i(t) \end{bmatrix}, \qquad h(\theta_i) = \begin{cases} \dfrac{\theta_i}{2\sin(\theta_i/2)} & \text{if } |\sin\theta_i| > 0 \\ 1 & \text{otherwise,} \end{cases}$$

which scales the extent of the stroke based on the ratio between the perimeter and the chord length of the circular arc.

Intermediate parameterisation. In order to facilitate the precise specification of the timing and profile shape of each stroke, we resort to an intermediate parametrisation that takes advantage of a few known properties of the lognormal, in order to define each stroke with (i) a time offset Δt_i with respect to the previous stroke, (ii) a stroke duration T_i and (iii) a shape parameter α_i, which defines the skewedness of the lognormal. The corresponding ΣΛ parameters {t_{0i}, μ_i, σ_i} can then be computed with:

$$\sigma_i = \ln(1 + \alpha_i), \qquad \mu_i = \ln\!\left(\frac{T_i}{e^{3\sigma_i} - e^{-3\sigma_i}}\right), \qquad t_{0i} = t_i - e^{\mu_i - 3\sigma_i}, \qquad t_i = t_{i-1} + \Delta t_i$$

where t_i is the onset time of the lognormal stroke profile. As α_i approaches 0, the shape of the lognormal converges to a Gaussian, with mean t_{0i} + e^{\mu_i - \sigma_i^2} (the mode of the lognormal).

Figure 12: Lognormals with varying "skewness" parameter α and the corresponding values of μ and σ. As α → 0, the lognormal approaches a Gaussian.
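The speed profile and the intermediate parameterisation above translate directly into code. The following is a minimal NumPy sketch; the example parameter values at the end are illustrative only:

import numpy as np

def lognormal_speed(t, t0, mu, sigma):
    # Lambda_i(t): 3-parameter lognormal speed profile, zero before onset t0.
    dt = np.maximum(t - t0, 1e-12)
    out = np.exp(-(np.log(dt) - mu) ** 2 / (2 * sigma ** 2)) \
          / (sigma * np.sqrt(2 * np.pi) * dt)
    return np.where(t > t0, out, 0.0)

def stroke_params(t_prev, delta_t, T, alpha):
    # (Delta_t_i, T_i, alpha_i) -> (t0_i, mu_i, sigma_i), as derived above.
    sigma = np.log(1 + alpha)
    mu = np.log(T / (np.exp(3 * sigma) - np.exp(-3 * sigma)))
    t_i = t_prev + delta_t
    return t_i - np.exp(mu - 3 * sigma), mu, sigma

# Speed profile of a two-stroke trajectory as a superposition of lognormals.
ts = np.linspace(0.0, 2.0, 500)
t0a, mua, sga = stroke_params(0.0, 0.1, 0.5, 0.2)
t0b, mub, sgb = stroke_params(0.1, 0.3, 0.6, 0.3)
speed = lognormal_speed(ts, t0a, mua, sga) + lognormal_speed(ts, t0b, mub, sgb)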
"}, {"section_index": "8", "section_name": "B RECONSTRUCTING ΣΛ PARAMETERS FROM AN ONLINE DATASET", "section_text": "The ΣΛ parameter reconstruction method operates on an input contour uniformly sampled at a fixed distance, which is defined depending on the extent of the input; we denote the k-th sampled point along the input with p[k]. The input contour is then segmented in correspondence with perceptually salient key points, which correspond to loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015). The proposed approach shares strong similarities with previous work done (i) for compressing online handwriting data with a circular-arc based segmentation (Li et al., 1998) and (ii) for generating synthetic data for handwriting recognisers (Varga et al., 2005). The parameter reconstruction algorithm can be summarised with the following steps:

- Find m key-points in the input contour.
- Fit a circular arc to each contour segment defined between two consecutive key-points (defining individual strokes), and obtain an estimate of each curvature parameter θ_i.
- For each stroke, compute the corresponding Δt_i parameter by analysing the curvature signal in the region of the corresponding key-point.
- Define an initial sequence of virtual targets with m positions corresponding with each input key-point.
- Repeat the following until convergence, or until a maximum number of iterations is reached (Berio & Leymarie, 2015):
  - Integrate the ΣΛ trajectory with the current parameter estimate.
  - Identify m key-points in the generated trajectory.
  - Move the virtual target positions to minimise the distance between the key-points of the generated trajectory and the key-points on the input contour.

The details for each step are highlighted in the following paragraphs.

Estimating input key-points. Finding significant curvature extrema (which can be counted as convex and concave features for a closed/solid shape) is an active area of research, as relying on discrete curvature measurements remains challenging. We currently rely on a method described by Feldman & Singh (2005) and supported experimentally by De Winter & Wagemans (2008): first we measure the turning angle at each position p[k] of the input, and then compute a smooth version of the signal by convolving it with a Hanning window. We assume that the turning angles have been generated by a random process with a Von Mises distribution with mean at 0 degrees, which corresponds to giving maximum probability to a straight line. We then measure the surprisal (i.e. the negative logarithm of the probability) for each sample as defined by Feldman & Singh (2005), which, normalised to the [0, 1] range, simplifies to:

$$s[k] = 1 - \cos(\theta[k])$$

where θ[k] is the (smoothed) turning angle. The first and last sample indices of the surprisal signal, together with its local maxima, give m key-point indices {ẑ_i}. The corresponding key-points along the input contour are then given by {p[ẑ_i]}.

Figure 13: Input key-point estimation. Left, the (smoothed) turning angle surprisal signal and the key-points estimated with peak detection. Right, the corresponding key-points along the input trajectory.
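A compact sketch of this key-point detection step, assuming NumPy and SciPy; the Hanning window length is an illustrative choice:

import numpy as np
from scipy.signal import find_peaks

def key_point_indices(points, win=9):
    # Turning angle at each sample of the uniformly sampled contour.
    d = np.diff(points, axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0])
    turn = np.angle(np.exp(1j * np.diff(ang)))   # wrapped turning angles
    # Smooth with a normalised Hanning window, then take the surprisal.
    w = np.hanning(win)
    w /= w.sum()
    s = 1.0 - np.cos(np.convolve(turn, w, mode='same'))
    peaks, _ = find_peaks(s)                     # local maxima of the surprisal
    return np.concatenate(([0], peaks + 1, [len(points) - 1]))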
By treating the surprisal signal as a probability density\nfunction, we can then use statistical methods to measure the shape of each peak with a mixture of\nparametric distributions, and examine the shape of each mixture component in order to get an estimate\nof the corresponding sharpness along the input contour. To do so we employ a variant of Expectation\nMaximisation (EM) (Dempster et al.|[1977) in which we treat the distance along the contour as a\nrandom variable weighted by the corresponding signal amplitude normalised to the [0, 1] range. Once\nthe EM algorithm has converged, we treat each mixture component as a radial basis function (RBF)\ncentred at the corresponding mean, and use linear regression as in Radial Basis Function Networks\n(Stulp & Sigaud| 2015) to fit the mixture parameters to the original signal (Calinon| (Calinon| 2016). Finally\nwe generate an estimate of sharpness \\; (bounded in the [0, 1] range) for each key point using as a\nlogarithmic function of the mixture parameters and weights. The corresponding Ato; parameters are\nthen given by\n1003]\n\neoooL\n\nSharpness GMM\n\nso\n\nBefore nudge\n00s Sharpness GMM Before nudge\n\n*\n1003] \\\n\n. .\n00005 7 Eo 3 70 3 % 70\nFigure 15: Sharpness estimation. Left, the GMM components estimated from the turning angle surprisal signal\nRight, the NA trajectory generated before the final iterative adjustment step. Note that at this stage the virtual\ntarget positions correspond with the estimated input key-points.\nAt; = Atmin + (Atmaxz \u2014 Atmin)XAi_;\nwhere At,,,;7, and At,,qx are user specified parameters that determine the range of the Ato; estimates.\nNote that we currently utilise an empirically defined function for this task. But in future steps, we\nintend to learn the mapping between sharpness and mixture component parameters from synthetically\nsamples generated with the NA model (for which Atp;, and consequently \\;, are known).\nIteratively estimating virtual target positions. The loci along the input contour corresponding\nwith the estimated key-points provide an initial estimate for a sequence of virtual targets, where\neach virtual target position is given by v; = p[Z;]. Due to the trajectory-smoothing effect produced\nby the time overlaps, the initial estimate will result in a generated trajectory that is likely to have\na reduced scale with respect to the input we wish to reconstruct . In order to\nproduce a more accurate reconstruction, we use an iterative method that shifts each virtual target\ntowards a position that will minimise the error between the generated trajectory and the reconstructed\ninput. To do so, we compute an estimate of m output key-points {\u20ac (z;)} in the generated trajectory\nwhere 22, ..., 2, are the time occurrences at which the influence of one stroke exceeds the previous\nThese will correspond with salient points along the trajectory (extrema of curvature) and can be easily\ncomputed by finding the time occurrence at which two consecutive lognormals intersect. Similarly to\nthe input key-point case, \u20ac(z1) and \u20ac(z:m) respectively denote the first and last points of the generated\ntrajectory. 
We then iteratively adjust the virtual target positions in order to move each generated\nECONSETUCTION\nkey-point \u20ac(z;) towards the corresponding input key-point p[2;] with:\nThe iteration continues until the Mean Square Error (MSE) of the distances between every pair p [\nand \u20ac(z;) is less than an experimentally set threshold or until a maximum number of iterations is\nreached (Fig. {[T6p. This method usually converges to a good reconstruction of the input within few\niterations (usually < 5). Interestingly, even though the dynamic information of the input is discarded,\nthe reconstructed velocity profile is often similar to the original (in number of peaks and shape),\nwhich can be explained by the extensively studied relationships between geometry and dynamics of\n\nmovement trajectories (Viviani & Terzuolo| 1982} [Lacquaniti et al. Viviani & Schneider]\nBlach & Handvel\n\nI0N07\nIn order to increase the expressive generative capabilities of our networks, we train them to model\nparametric probability distributions. Specifically, we use Recurrent Mixture Density Networks that\noutput the parameters of a bivariate Gaussian Mixture Model.\nIf a target variable z, can be expressed as a bivariate GMM, then for kX Gaussians we can us\na network architecture with output dimensions of 6/. This output vector would then consist o\n(fur \u20ac Rs 6, \u20ac R*, pr \u20ac IR* mt, \u20ac R*), which we use to calculate the parameters of th\nFigure 16: Final trajectory reconstruction step. Left, iterative adjustment of virtual target positions. Right, the\nfinal trajectory generated with the reconstructed dynamic parameters.\nv; \u2014 vu; + p[2;] \u2014 \u20ac (zi),"}, {"section_index": "9", "section_name": "GMM via (Graves]|2013)", "section_text": "el\npk = fk : means for k\u2019th Gaussian, pt \u20ac IR?\nof = exp(6*) : standard deviations for k\u2019th Gaussian, of \u20ac IR?\npt = tanh(A*) : correlations for k\u2019th Gaussian, pf \u20ac (\u20141, 1)\nK\na = softmax(7*) : mixture weight for k\u2019th Gaussian , Ss _\nk\npk = fi* : means for k\u2019th Gaussian, pe \u20ac IR?\nk\n\no; = exp(6*) : standard deviations for k\u2019th Gaussian, of eR?\npr = tanh(p*) : correlations for k\u2019th Gaussian, pr \u20ac (-1,1)\n\nK\na = softmax(7*) : mixture weight for k\u2019th Gaussian , Ss Tk -\nWe can then formulate the probability distribution function P; at timestep t as\nOut = arg max Pr(S | 0)\n\nSs\n= arg max Il Pr(g | x, 9).\n(2,9)\nSince the logarithm is a monotonic function, a common method for maximizing this likelihood is\nminimizing its negative logarithm, also known as the Negative Log Likelihood (NLL), Hamiltonian or\nsurprisal (Lin & Tegmark|{2016). We can then define our cost function J as\nFor a bivariate RMDN, the objective function can be formulated by substituting eqn. in place of\nPr(g | x, @) in eqn. (18).\nInput At each timestep i, the input to the V2V model is 2; \u20ac IR\u00ae, where the first two elements\nare given by Av; (the relative position displacement for the 7\u2019th stroke, i.e. between the 7\u2019th virtual\ntarget and the next), and the last element is u; \u20ac {0, 1} (the pen-up state during the same stroke).\nGiven input a; and its current internal state (c;,h;), the network learns to predict x41, by learning\nthe parameters for the Probability Density Function (PDF) : Pr(a;41 | a, c;, h;). 
With a slight abuse\nof notation, this can be expressed more intuitively as Pr(a;+, | @;,@j_1, ...,\u00aei_\u00bb) where n is the\nmaximum sequence length.\ncas\nP= So miN(z | ml.of, of), where\nk\n\nN@|n.0.0) ! -aa\n\na|p,o, = exp | \u2014\n\nHeo 2ro102\\/1 \u2014 p? 2(1 \u2014 p?)\n\nZ (ei \u2014 ya)? , (2-2)? \u2014 2p(@1 \u2014 Me) (2 \u2014 p12)\no? o3 102\n\nF and\nIf we let @ denote the parameters of a network, and given a training set S of input-target pairs\n(x \u20acX,% \u20ac Y), our training objective is to find the set of parameters 877, which has the maximum\nlikelihood (ML). This is the @ that maximises the probability of training set S and is formulated as\n\n(Graves||2008)\nOutput We express the predicted probability of Av; as a bivariate GMM as described in Section\nC.1} and uw; as a Bernoulli distribution. Thus for A\u2019 Gaussians the network has output dimensions\nof (6K + 1) which, in addition to eqn. (i), contains \u00e9; which we use to calculate the pen state\n\nprobability via 2013)\nei\n\n\u201cT+ exp(\u00e9;)\n\ni \u20ac (0,1)\nArchitecture We use Long Short- Tem Ca aT POD) on Soe DET (Hochreiter & Schmidhuber}|1997) networks with\ninput, output and forget gates (Gers et al.|/2000), and we use Dropout relation as described by\n\n(2014). We employ both a a au and a random search (Bergstra & Bengio|/2012) on\nvarious ee in the ranges: sequence length {64, 128}, number of hidden recurrent layers\n{1, 2, 3}, dimensions per hidden layer {64, 128, 256, 400, 512, 900, 1024}, number of Gaussians {5\n10, 20}, dropout keep probability {50%, 70%, 80%, 90%, 95%} and peepholes { with, without}.\nFor comparison we also tried a deterministic architecture whereby instead of outputing a probability\ndistribution, the network outputs a direct prediction for x;41. As expected, the network was unable\nto learn this function, and all sequence of virtual targets synthesized with this method simply travel in\na repeating zig-zag line.\nTraining We use a form of Truncated Backpropagation Through Time (BPTT) 2013\nwhereby we segment long sequences into overlapping segments of maximum length n. In this case\n\nlong-term dependencies greater than length n are lost, however with enough overlap the network can\neffectively learn a sliding window of length n timesteps. We shuffle our training data and reset the\ninternal state after each sequence. We empirically found an overlap factor of 50% to perform well\nthough further studies are needed to confirm the sensitivity of this figure.\nWe use dynamic unrolling of the RNN, whereby the number of timesteps to unroll to is not set at\ncompile time, in the architecture of the network, but unrolled dynamically while training, allowing\nvariable length sequences. We also experimented with repeating sequences which were shorter than\nthe maximum sequence length n, to complete them to length n. We found that for our case they\nproduced desirable results, with some side-effects which we discuss in later sections.\nWe split our dataset into training: 70%, validation: 20% and test:10% and use the Adam optimizer\n(Kingma & Bal (2014) with the recommended hyperparameters. To prevent exploding gradients we\n\nclip gradients by their global L2 norm as described in 2013). We tried thresholds ot\nboth 5 and 10, and found 5 to provide more stability.\nWe formulate the loss function J to minimise the Negative Log Likelihood as described in Sectiot\nsing the probability density functions described in eqn. 
and eqn.\nInput The input to this network at each timestep i is identical to that of the V2V-model, x; \u20ac IR\u00ae\nwhere the first two elements are Av; (normalised relative position displacement for the 7\u2019th stroke).\nand u; \u20ac {0,1} (the pen state during the same stroke). Given input a; and its current internal state\n(c;, h;), the network learns to predict the dynamic parameters (Ato;, 0;) for the current stroke 7, by\nlearning the parameters for Pr(Ato;, 4; | @:,\u00a2;,h;). Again with an abuse of notation, this can be\n\nexpressed more intuitively as Pr(Ato;, 6; | @;,@i\u20141, ..., Zin) where n is the maximum sequence\nlength.\nTraining We use the same procedure for training as the V2V-model.\nArchitecture We explored very similar architecture and hyperparamereters as the V2V-model, but\nfound that we achieved much better results with a shorter maximum sequence length. We trained a\nnumber of models with a variety of sequence lengths {3, ..., 8, 13, 16, 21, 32}.\nInput The input to this network a; \u20ac IR? at each timestep / is slightly different to the V2V anc\nV2D models. Similar to the V2V and V2D models, the first two elements are Av; (normalisec\nrelative position displacement for the i\u2019th stroke), and the third element is u; \u20ac {0, 1} (the pen state\nduring the same stroke). However in this case the final two elements are the dynamic parameters fo!\nthe previous stroke (Ato;\u20141,6;\u20141), normalized to zero mean and unit standard deviation.\nOutput The output of this network is identical to that of the V2D model.\nTraining We use the same procedure for training as the V2V-model."}, {"section_index": "10", "section_name": "C.6 MODEL SELECTION", "section_text": "We evaluated and batch rendered the outputs of many different architectures and models at differen\ntraining epochs, and settled on models which were amongst those with the lowest validation erro1\n\nbut also produced visibily more desirable results. Once we picked the models, the results displayec\nare not cherry picked.\nThe preprocessed IAM dataset contains 12087 samples (8460 in the training set) with maximum\nsequence length 305, minimum 6, median 103 and mean 103.9. For the V2V/V2D/A2V model:\ntrained on the IAM database we settle on an architecture of 3 recurrent layers, each with size 512, <\nmaximum sequence length of 128, 20 Gaussians, dropout keep probability of 80% and no peepholes\nFor V2V we used L2 normalisation on Av; input, and for A2D/V2D we used\nWe also tried a number of different methods for normalising and representing Av; on the input to the\nmodels. We first tried normalising the components individually to have zero mean and unit standard\ndeviation. We also tried normalising uniformly on L2 norm again to have zero mean and unit standar\u00a2\ndeviation. Finally, we tried normalised polar coordinates, both absolute and relative.\nGiven input a; and its current internal state (c;, h;), the network learns to predict the dynamic param.\neters (Ato;, 0;) for the current stroke i, by learning the parameters for Pr(Ato;, 6; | @;, c;, hi). 
Again\nwith an abuse of notation, this can be expressed more intuitively as Pr(Ato;, 6; | @i, @i\u20141, ---, Li-n)\nwhere n is the maximum sequence length.\n1itecture We explored very similar architecture and hyperparamereters as the V2D model\nFor the augmented one-shot learning models we used similar architectures, but found that 2 recurrent\nlayers each with size 256 was able to generalise better and produce more interesting results that both\ncaptured the prime inputs without overfitting.\nInput (online handwriting data)\n\nZA-model parameter extraction\n\nV\n\nArtificial variability with parameter perturbations\n\n(optional)\n\nVirtual targets\n\nPreprocessed input\n\nDynamic parameters (At,,, 0)\n\n~\n\ntraining\n\nV2V model\n\ninput seed\n\nsynthesize virtual target from\nseed virtual targets\n\nparameters | action plan\n\ntraining\n\nV2D/A2D models\n\npredict model parameters\nfor synthesized virtual targets\n\nSy spe lw > seam a\n\nan P NX fil yon Awe MEAN\n\npier aes mo\n\\\n\nv\n\nY\n\nSauash vackats. Acary oo\nup Wee ly qn! bree Sherer\npln | Oddtont\u2019 tee\n\ngenerate trajectories with\nrandom model parameters\n\npredict model parameters for\nuser-drawn action plan\n\nJini spevass tennr by\n\nAp Ao Yan 2 OO\n\nVa mY\n\nNT\nZA-model parameter extraction\nV2V model\nFigure 17: Schematic overview of the system.\nV2D/A2D models\n\u2014\n\npredict model parameters\nfor synthesized virtual targets"}]
ryMxXPFex
[{"section_index": "0", "section_name": "DISCRETE VARIATIONAL AUTOENCODERS", "section_text": "Jason Tyler Rolfe\nProbabilistic models with discrete latent variables naturally capture datasets com-\nposed of discrete classes. However, they are difficult to train efficiently, since\nbackpropagation through discrete variables is generally not possible. We present\nanovel method to train a class of probabilistic models with discrete latent variables\nusing the variational autoencoder framework, including backpropagation through\nthe discrete latent variables. The associated class of probabilistic models com-\nprises an undirected discrete component and a directed hierarchical continuous\ncomponent. The discrete component captures the distribution over the discon-\nnected smooth manifolds induced by the continuous component. As a result, this\nclass of models efficiently learns both the class of objects in an image, and their\nspecific realization in pixels, from unsupervised data; and outperforms state-of-\nthe-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101\nSilhouettes datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such a:\ndenoising and inpainting, and regularizing supervised tasks such as classification (Hinton et al.\n2006; Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest are\nprojections of underlying distributions over real-world objects into an observation space; the pixel:\nof an image, for example. When the real-world objects are of discrete types subject to continuou:\ntransformations, these datasets comprise multiple disconnected smooth manifolds. For instance\nnatural images change smoothly with respect to the position and pose of objects, as well as scene\nlighting. At the same time, it is extremely difficult to directly transform the image of a person to one\nof a car while remaining on the manifold of natural images.\nIt would be natural to represent the space within each disconnected component with continuous vari.\nables, and the selection amongst these components with discrete variables. In contrast, most state-\nof-the-art probabilistic models use exclusively discrete variables \u2014 as do DBMs (Salakhutdinov &\nHinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lau:\nritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014) \u2014 or exclusively continuous\nvariables \u2014 as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow\net al., 2014).! Moreover, it would be desirable to apply the efficient variational autoencoder frame:\nwork to models with discrete values, but this has proven difficult, since backpropagation througt\ndiscrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015).\nWe introduce a novel class of probabilistic models, comprising an undirected graphical model de-\nfined over binary latent variables, followed by multiple directed layers of continuous latent variables\nThis class of models captures both the discrete class of the object in an image, and its specific con.\ntinuously deformable realization. Moreover, we show how these models can be trained efficiently\nusing the variational autoencoder framework, including backpropagation through the binary laten\nvariables. 
We ensure that the evidence lower bound remains tight by incorporating a hierarchica\napproximation to the posterior distribution of the latent variables, which can model strong corre-\nlations. Since these models efficiently marry the variational autoencoder framework with discrete\nlatent variables, we call them discrete variational autoencoders (discrete VAEs).\n'Spike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "L(x, 0, \u00a2) = log p(x|@) \u2014 KL[q(z|z, \u00a2)||p(z|x, )],\nL(x,0,) = \u2014KL[q(2|x, 9)||p(z|8)] + Ey [log p(z|z, 8)] .\na\n\nKL term autoencoding term\nIn many cases of practical interest, such as Gaussian q(2z|:\ncan be computed analytically. Moreover, a low-variance stochastic approximation to the gradient\nof the autoencoding term can be obtained using backpropagation and the reparameterization trick,\nso long as samples from the approximating posterior q(z|) can be drawn using a differentiable,\ndeterministic function f(z, \u00a2, p) of the combination of the inputs, the parameters, and a set of input-\nand parameter-independent random variables p ~ D. For instance, samples can be drawn from a\nGaussian distribution with mean and variance determined by the input, N (m(x, @), v(a, \u00a2)), using\n\na . _ \u2014 - ee a\n0\n7 Eatele.0) log pele, 0] \u00a9 55 rd a6 2 tog pel f(e,p,9):8).\nwhere F is the conditional-marginal cumulative distribution function (CDF) defined by:\nHowever, this generalization is only possible if the inverse of the conditional-marginal CDF exists\nand is differentiable.\nA formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted\nBoltzmann machines (RBMs) (Smolensky, 1986):\n>This problem remains even if we use the quantile function, F;'(p) = inf 42 ER: LE = 00 p(z\u2019) > py\n\nhe derivative of which is either zero or infinite if p is a discrete distribution.\nConventionally, unsupervised learning algorithms maximize the log-likelihood of an observed\ndataset under a probabilistic model. Even stochastic approximations to the gradient of the log-\nlikelihood generally require samples from the posterior and prior of the model. However, sampling\nfrom undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling\nfrom the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby,\n1993).\nIn contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound\non the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO, L(x, 6, \u00a2);\nHinton & Zemel, 1994):\nwhere q(z|, ) is a computationally tractable approximation to the posterior distribution p(z|, 0).\nWe denote the observed random variables by x, the latent random variables by z, the parameters of\nthe generative model by 0, and the parameters of the approximating posterior by \u00a2. The variational\nautoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups\nthe evidence lower bound of Equation 1 as:\nThe reparameterization trick can be generalized to a large set of distributions, including nonfactorial\napproximating posteriors. We address this issue carefully in Appendix A, where we find that an\nanalog of Equation 3 holds. 
Specifically, D; is the uniform distribution between 0 and 1, and\n1-20) _ i .el= | Wetb\"2)\n\npP(z) = Zz, z\nwhere z \u20ac {0,1}\", Zp is the partition function of p(z), and the lateral connection matrix W is\ntriangular. Any approximating posterior that only assigns nonzero probability to a discrete domain\ncorresponds to a CDF that is piecewise-contant. That is, the range of the CDF is a proper subset\nof the interval {0, 1]. The domain of the inverse CDF is thus also a proper subset of [0, 1], and its\nderivative is not defined, as required in Equations 3 and 4.2"}, {"section_index": "3", "section_name": "1.2 RELATED WORK", "section_text": "Recently, there have been many efforts to develop effective unsupervised learning techniques by\nouilding upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016)\nHamiltonian variational inference (Salimans et al., 2015), normalizing flows (Rezende & Mohamed\n2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the pos:\nerior distribution. Ladder variational autoencoders (Sgnderby et al., 2016) increase the power of the\narchitecture of both approximating posterior and prior. Neural adaptive importance sampling (Dt\not al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approxi\nmations to the gradient of the log-likelihood that do not admit direct backpropagation. Structurec\nvariational autoencoders use conjugate priors to construct powerful approximating posterior distri\noutions (Johnson et al., 2016).\nPrior efforts by Makhzani et al. (2015) to use multimodal priors with implicit discrete variables\ngoverning the modes did not successfully align the modes of the prior with the intrinsic clusters\nof the dataset. Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete\nvariables are also implicit, and their prior factorial and thus unimodal (Salimans, 2016). Graves\n(2016) computes VAE-like gradient approximations for mixture models, but the component models\nare assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful\nmultimodal priors on the discrete variables, and a wider set of mappings to the continuous units.\nThe generative model underlying the discrete variational autoencoder resembles a deep belief net-\nwork (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of\nwhich is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltz-\nmann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all\nhidden layers below the bipartite Boltzmann machine are composed of continuous latent variables\nwith a fully autoregressive layer-wise connection architecture. Each layer j receives connections\nfrom all previous layers i < j, with connections from the bipartite Boltzmann machine mediated by\na set of smoothing variables. However, these architectural differences are secondary to those in the\ngradient estimation technique. 
Whereas DBNs are traditionally trained by unrolling a succession of\nRBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through\nthe evidence lower bound.\n2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDING\nCONTINUOUS LATENT VARIABLES\nWhen working with an approximating posterior over discrete latent variables, we can effectively\nsmooth the conditional-marginal CDF (defined by Equation 5 and Appendix A) by augmenting the\nlatent representation with a set of continous random variables. The conditional-marginal CDF over\nthe new continuous variables is invertible and its inverse is differentiable, as required in Equations 3\nand 4. We redefine the generative model so that the conditional distribution of the observed variables\ngiven the latent variables only depends on the new continuous latent space. This does not alte1\n3Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected tc\nthe rest of the model. In contrast to a traditional RBM, there is no distinction between the \u201cvisible\u201d units and the\n\u201chidden\u201d units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome\n\u201cfully hidden bipartite Boltzmann machine.\u201d\nIn the following sections, we present the discrete variational autoencoder (discrete VAE), a hierar-\nchical probabilistic model consising of an RBM,\u201d followed by multiple directed layers of continuous\nlatent variables. This model is efficiently trainable using the variational autoencoder formalism, as\nin Equation 3, including backpropagation through its discrete latent variables.\nIt is easy to construct a stochastic approximation to the gradient of the ELBO that admits both\ndiscrete and continuous latent variables, and only requires computationally tractable samples. Un-\nfortunately, this naive estimate is impractically high-variance, leading to slow training and poor\nperformance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the\nbaseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih\n& Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in\nAppendix B.\nq(z = 1|2, @)\n\n=\u00a9)\n\n\u20ac\n\n\u201c1\nFcic.g) (P)\n\nQO \u00a9 \u00a9\n\nP(z|\u00a2, d)\n\nO-@)\nOO\n\n(a) Approximating posterior q(\u00a2, z|x) (b) Prior p(x, \u00a2, z) (c) Autoencoding term\nFigure 1: Graphical models of the smoothed approximating posterior (a) and prior (b), and the\nnetwork realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent vari.\nables \u00a2; are smoothed analogs of discrete latent variables z;, and insulate z from the observed vari.\nables x in the prior (b). This facilitates the marginalization of the discrete z in the autoencoding term\nof the ELBO, resulting in a network (c) in which all operations are deterministic and differentiabl\ngiven independent stochastic input p ~ U{0, 1).\nthe fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted a\nadding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a smal\nminibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and th\nprior. 
The conceptual motivation for this approach is discussed in Appendix C.\nSpecifically, as shown in Figure la, we augment the latent representation in the approximating pos:\nterior with continuous random variables \u00a2,4 conditioned on the discrete latent variables z of the\nRBM:\n(6, zl.) = r(\u00a2|z) -a(z|a,\u00a2), wher\n\nr((lz) =] r(Gila).\n\na\nThe support of r(\u00a2|z) for all values of z must be connected, so the marginal distribution\nq(\u00a2\\a, 6) = >, r(\u00a2|z) - \u00a2(z|a, &) has a constant, connected support so long as 0 < q(z|a,\u00a2) < 1.\nWe further require that r(\u00a2|z) is continuous and differentiable except at the endpoints of its support,\n\nso the inverse conditional-marginal CDF of q(\u00a2|x, \u00a2) is differentiable in Equations 3 and 4, as we\ndiscuss in Appendix A.\nAs shown in Figure 1b, we correspondingly augment the prior with C:\np(x\\C, z,0) = p(a|C, 6).\nThe smoothing distribution r(\u00a2|z) transforms the model into a continuous function of the distri\nbution over z, and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic\napproximations to the gradient.\nGiven this expansion, we can simplify Equations 3 and 4 by dropping the dependence on z and\napplying Equation 16 of Appendix A, which generalizes Equation 3:\n1 a) \u201c1\nHo Fleer (21Pibjew (08) -\npxu(0,1)\"\n\nte)\nFg naGzle.2) [log p(x|\u00a2, 2, 8)]\n\u201cWe always use a variant of z for latent variables. This is zeta, or Greek z. The discrete latent variables z\ncan conveniently be thought of as English z.\nIf the approximating posterior is factorial, then each F; is an independent CDF, without conditioning\nor marginalization.\nAs we shall demonstrate in Section 2.1, Fotis \u00a2)(e) is a function of g(z = 1|x,\u00a2), wher\n\nq(z = 1|z, \u00a2) is a deterministic probability value calculated by a parameterized function, such a!\na neural network. The autoencoder implicit in Equation 8 is shown in Figure Ic. Initially, input\nis passed into a deterministic feedforward network g(z = 1|2, }), for which the final nonlinearity i\nthe logistic function. Its output g, along with an independent random variable p ~ U(0, 1], is passe\n\ninto the deterministic function Fite, 4) (p) to produce a sample of \u00a2. This \u00a2, along with the origina\n\ninput , is finally passed to log p (x|\u00a2, 0). The expectation of this log probability with respect to p i\nthe autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the inpu\nand the independent p, this autoencoder is deterministic and differentiable, so backpropagation cat\nbe used to produce a low-variance, computationally-efficient approximation to the gradient.\nAs a concrete example consistent with sparse coding, consider the spike-and-exponential transfor\nmation from binary z to continuous C:\noo, if\u00a2;=0 Fuejeao(C\u2019) <1\n= : (\u00a2i|2i=0)\nr(Gilzi = 0) = 0 otherwise r(Gile\nBc eS |S BC\n&5, f0<SGsl 0, = -<\n=)ayer 7 (Gi|20=1) Soil. > a1\nm(Gle = 1) = fi otherwise \" e 0\nFy(clx,g)(6\") = (1 \u2014 a(z = Ua, 6) - Feeiter=oy (6) + (2 = Ua. 6) - Freercry (C\n\nef 4\n= q(z = 1\\z,\u00a2)- Boi 1) +1.\nTo evaluate the autoencoder of Figure 1c, and through it the gradient approximation of Equation 8\nwe must invert the conditional-marginal CDF Fi\u00a2j..4):\nFol (p) = 4 -log | (2#2=*) - (e? -1) +1], ifp>1-\u2014q\nacle.d) 0, otherwise\nOther expansions to the continuous space are possible. 
In Appendix D.1, we consider the case where\nboth r(\u00a2;|z; = 0) and r(\u00a2;|z; = 1) are linear functions of \u00a2; in Appendix D.2, we develop a spike-\nand-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where\nthe continuous \u00a2 is directly dependent on the input x in addition to the discrete z.\n5In the limit 8 \u2014+ 00, \u00a2; = 2; almost surely, and the continuous variables \u00a2 can effectively be removed from\nthe model. This trick can be used after training with finite 8 to produce a model without smoothing variables \u00a2.\nwhere we use the substitution q\\z = |r, Q) 7 q lo sunpiny hotation. For all values of the inde-\n\npendent random variable p ~ U[0, 1], the function Fela, 6) (p) rectifies the input q(z = 1|x, \u00a2) if\n\nq < 1\u2014 pp ina manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is\nalso quasi-sigmoidal, in that F'~ is increasing but concave-down if q > 1 \u2014 p. The effect of p on\nF~! is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the\nnoise injected by batch normalization (loffe & Szegedy, 2015) using small minibatches, shown in\nFigure 2c.\nFigure 2: Inverse CDF of the spike-and-exponential smoothing transformation fo:\np \u20ac {0.2,0.5,0.8}; 6 = 1 (dotted), 8 = 3 (solid), and 6 =5 (dashed) (a). Rectified linea\nunit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization; with\nmagnitude 0.3 (dashed), \u20140.3 (dotted), or 0 (solid blue); before a rectified linear unit (c). In al\ncases, the abcissa is the input and the ordinate is the output of the effective transfer function. The\nnovel stochastic nonlinearity F\u2019, wel, \u00a2) (p) from Figure 1c, of which (a) is an example, is qualitativel;\n\nsimilar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c).\n3, ACCOMMODATING EXPLAINING-AWAY WITH A HIERARCHICAL\nAPPROXIMATING POSTERIOR\nWhen a probabilistic model is defined in terms of a prior distribution p(z) and a conditional dis-\ntribution p(x|z), the observation of x often induces strong correlations in the posterior p(z|x) due\nto phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the\nprior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain\ntractability, many variational approximations use a product of independent approximating posterior\ndistributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014)).\na(21,G1s-++ 2k Cele, 6) = TT r(Gjlzs)-a(zilGccj.,\u00a2) where\n1<j<k\n\n093 (Gi 5.8.4) oz\n\nqzj\\Ci<j,2, b)\n2; \u20ac {0,1}\", and 9; (\u00a2:<;,x,\u00a2) is a parameterized function of the inputs and preceding \u00a2;, such as\na neural network. The corresponding graphical model is depicted in Figure 3a, and the integration\nof such hierarchical approximating posteriors into the reparameterization trick is discussed in Ap-\npendix A. If each group z; contains a single variable, this dependence structure is analogous to that\nof a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution.\nHowever, the dependence of z; on the preceding discrete variables z;<; is always mediated by the\ncontinuous variables \u00a2;<;.\nThis hierarchical approximating posterior does not affect the form of the autoencoding term in Equa-\ntion 8, except to increase the depth of the autoencoder, as shown in Figure 3b. 
The deterministic\nprobability value q(z; = 1\u00a2:<;,2,) of Equation 10 is parameterized, generally by a neural net-\nwork, in a manner analogous to Section 2. However, the final logistic function is made explicit in\nEquation 10 to simplify Equation 12. For each successive layer j of the autoencoder, input x and all\nprevious (;<; are passed into the network computing g(z = 1|C;<;, x, d). Its output g;, along with an\n\u00a9The continuous latent variables \u00a2 are divided into complementary disjoint groups C1,\nT 77\n7\n\n,\nr\u00a30.3 47,\n\ne205 \u00ab-(1+03) /7\n| | no noise oa,\np< 05\nL L L f L L L re\n0 0.2 04 0.6 0.8 -1 -0.56 0 0.5 -1 -05 0 O58 1\n(a) Spike-and-exp, 3 \u20ac {1,3,5} (b) ReLU with dropout (c) ReLU with batch norm\nTo accommodate strong correlations in the posterior vor gtale). while maintaining tractability, we\nintroduce a hierarchy into the approximating posterior q(z|:\nSpecifically, we divide the latent variables z of the RBM into disjoint groups, Zz Shs and\ndefine the approximating posterior via a directed acyclic graphical model over these groups:\nq(23 = 1Gi<s, 2, \u00a2)\n\n-1\naa(CslGves.0.6) P)\n\nP(x\\C, o)\n\n(a) Hierarch approx post g(\u00a2. z|x) (b) Hierarchical ELBO autoencoding term\nFigure 3: Graphical model of the hierarchical approximating posterior (a) and the network realizing\nthe autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables z; only depenc\non the previous z;<; through their smoothed analogs \u00a2;<;. The autoregressive hierarchy allows the\napproximating posterior to capture correlations and multiple modes. Again, all operations in (b) are\ndeterministic and differentiable given the stochastic input p.\nndependent random variable p ~ U{0, 1], is passed to the deterministic function F\u2019, UG Ieve5 0,6) (f\ni<ay\n\n0 produce a sample of \u00a2j. Once all \u00a2; have been recursively computed, the full \u00e9 along with th\n\nvriginal input x is finally passed to log p (z|\u00a2, 0). The expectation of this log probability with respec\n\nO pis again the autoencoding term of the VAE formalism, as in Equation 2.\nIn Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can\nbe estimated stochastically using:\n(a) OE, (z, 9) OE,(z,0)\ngg KE alle] = Eacerle.#) [ [Eaten 70 Ey(z\\a) | 5p\n(a) + Og l-z_@\n\u201c KL [ally] = E, |(g(a,\u00a2) \u20140)' -4 27 -w- \u201c4\nFKL lie] = Bp [(gla.\u00a2) \u2014)\" 54-27 we. (T= 0 SE),\nIn particular, Equation 12 is substantially lower variance than the naive approach to calculate\n-2. KL [q||p], based upon REINFORCE.\n4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF\nCONTINUOUS LATENT VARIABLES\nWe can make both the generative model and the approximating posterior more powerful by addins\nadditional layers of latent variables below the RBM. While these layers can be discrete, we focus ot\ncontinuous variables, which have proven to be powerful in generative adversarial networks (Goodfel\nlow et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al.\n2014). When positioned below and conditioned on a layer of discrete variables, continuous variable:\ncan build continuous manifolds, from which the discrete variables can choose. 
This complement:\nthe structure of the natural world, where a percept is determined first by a discrete selection of th\ntypes of objects present in the scene, and then by the position, pose, and other continuous attribute:\nof these objects.\nSpecifically, we augment the latent representation with continuous random variables 3,\u2019 and define\nboth the approximating posterior and the prior to be layer-wise fully autoregressive directed graphi-\ncal models. We use the same autoregressive variable order for the approximating posterior as for the\n\u2018We always use a variant of z for latent variables. This is Fraktur z, or German z.\nFigure 4: Graphical models of the approximating posterior (a) and prior (b) with a hierarchy o\ncontinuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and It\nrespectively. The continuous latent variables 3 build continuous manifolds, capturing properties lik\u00ab\nposition and pose, conditioned on the discrete latent variables z, which can represent the discret\ntypes of objects in the image.\nprior, as in DRAW (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015),\nthe deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Sgnderby et al., 2016).\nWe discuss the motivation for this ordering in Appendix G.\nThe directed graphical model of the approximating posterior and prior are defined by:\n= Il 4 (3m|31<m, 2%, 9)\n\nO0<m<n\n\n= [I PGmlsrcm.d\n\nO0<m<n\nThe full set of latent variables associated with the RBM is now denoted by 39 = {21, C1, ---, 2% Ge}:\nHowever, the conditional distributions in Equation 13 only depend on the continuous \u00a2;. Each 3m>1\ndenotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model.\nL(x, 0,) = Eq(;\\2,4) [log p(x|3, )] \u2014 SP Ea (siemle.d) [KL [4(3m|3t<m,X;%)||P(3m|3r<m: 4\n\nm\nTf both q(3ml3icm+ 2+) and p(3m|3cm;9) are Gaussian, then their KL divergence has a simple\nclosed form, which is computationally efficient if the covariance matrices are diagonal. Gradients\ncan be passed through the q(3:<m|x,\u00a2) using the traditional reparameterization trick, described in\nSection 1.1."}, {"section_index": "4", "section_name": "5 RESULTS", "section_text": "Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approx\nmating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). W\nyarameterize all distributions with neural networks, except the smoothing distribution r(\u00a2|z) dis\ncussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014\nXezende et al., 2014), we define all approximating posteriors q to be explicit functions of x, witl\nparameters @ shared between all inputs x. For distributions over discrete variables, the neural net\nvorks output the parameters of a factorial Bernoulli distribution using a logistic final layer, as it\nSquation 10; for the continuous 3, the neural networks output the mean and log-standard deviatiot\nyf a diagonal-covariance Gaussian distribution using a linear final layer. Each layer of the neu\nal networks parameterizing the distributions over z, 3, and x consists of a linear transformation\nexorore CQ CL)\nDep BASs\n\n(a) Approx post w/ cont latent vars g(3. C. zla) (b) Prior w/ cont latent vars p(x. 3. C. z)\nThe hierarchical structure of Section 4 is very powerful, and overfits without strong regularizatior\nof the prior, as shown in Appendix H. 
In contrast, powerful approximating posteriors do not induc\u00a2\nsignificant overfitting. To address this problem, we use conditional distributions over the inpu\np(z|\u00a2,@) without any deterministic hidden layers, except on Omniglot. Moreover, all other neura\nnetworks in the prior have only one hidden layer, the size of which is carefully controlled. Or\nstatically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers o:\nthe hierarchy over 3. We present the details of the architecture in Appendix H.\nWe train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Om-\nniglot\u00ae (Lake et al., 2013), and Caltech-101 Silhouettes datasets (Marlin et al., 2010). For MNIST,\nwe use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization.\nEstimates of the log-likelihood? of these models, computed using the method of (Burda et al., 2016)\nwith 104 importance-weighted samples, are listed in Table 1. The reported log-likelihoods for dis-\ncrete VAEs are the average of 16 runs; the standard deviation of these log-likelihoods are 0.08, 0.04,\n0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhou-\nettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and\n0.66.\nMNIST (dynamic binarization)\n\nMNIST (static binarization)\n\nLL ELBO LL\nDBN -84.55 HVI -88.30 -85.51\nIWAE -82.90 DRAW -87.40\nLadder VAE -81.74 NAIS NADE -83.67\nDiscrete VAE -80.15 Normalizing flows -85.10\nVariational Gaussian process -81.32\nDiscrete VAE -84.58 -81.01\nOmniglot Caltech-101 Silhouettes\nLL LL\nIWAE -103.38 IWAE -117.2\nLadder VAE -102.11 RWS SBN -113.3\nRBM -100.46 RBM -107.8\nDBN -100.45 NAIS NADE -100.0\nDiscrete VAE -97.43 Discrete VAE -97.6\nTable 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot,\nand Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated\nwith 104 importance-weighted samples (Burda et al., 2016). For comparison, we also report perfor-\nmance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I.\nWe further analyze the performance of discrete VAEs on dynamically binarized MNIST: the larges\nof the datasets, requiring the least regularization. Figure 5 shows the generative output of a discret\nVAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held con\nstant across each sub-row of five samples, and variation amongst these samples is due to the layer:\nof continuous latent variables. Given a multimodal distribution with well-separated modes, Gibb:\nsampling passes through the large, low-probability space between the modes only infrequently. A!\na result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM\nprior has well-separated modes. The RBM learns distinct, separated modes corresponding to th\ndifferent digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens o\n*We use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from\ngithub.com/yburda/iwae/tree/master/datasets/OMNIGLOT.\n\u00b0The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition\nfunction of the RBM. 
We describe our unbiased estimation method for the partition function in Appendix H.1.\nbatch normalization (loffe & Szegedy, 2015) (but see Appendix H.2), and a rectified-linear point-\nwise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM\nprior p(z|@) in Equation 11 using block Gibbs sampling on persistent Markov chains, analogous to\npersistent contrastive divergence (Tieleman, 2008). We minimize the ELBO using ADAM (Kingma\n& Ba, 2015) with a decaying step size.\nNAXBGYUMEBAAADYMKANYD\nANRWENINKAGK A Hd\nNEA ANKXGBHVUBAGA\nTANGKIANDABNX ON\nANARXKBAAANABNG\n\nNABYAN\nNAAXAS\nSAKAAN\nNAN BAAR\nAdnrdare\n\nOMaHNOhHHYUhb_OY\nhLonngub hub bb nw\nYMUMRDLYbWWNo bd MY\nAMOMMOWNS Wo hn\nMON MO AL HPHLYL Lah\n\nHn hoy\nQo hmm\nAMO} AMS\nONMHMM\nrams HO\n\nTOoRFnTTrTrTRrPAFAKRKRACHA\n\noyoororvrvrr 8 oT ITN\nro7yrrooTrtPwearwarre\nYroauyerrersryerIx Tn\n\nReXrne\nNeacnes\nBRKeerer\n\nTE ARATRTTTXTURTAARARLHSA\n\nSR\u00aeSIPSVHS3II9VMNSQWNAIEAND\nWSBOFTDGHUYUOOSIIWVAYAAAG\nMW HBGwo VBS VSOVWVNWIVISMKNGAXAN\nNO WVWVYIVYUX DHYVre I BAKAVNS\nINVHDHVIVSYD SY HFSGNARAADN\nMFO SCIBHHIVSV GeXYrHYSSHSWOUDWHOVUNS\nCoO OSU GV UY FSW GDHOOCIOVWYWY\nWOBHBOSNHNSHHBVSHOHVSIw\u00b0LOYwWVH38\nWOH IwHHGVWwWVvIvw DAS 05TH\nQBWOGNVVOFIGHGIYNHVOIVISSYs\nSNS NNT KH RYN Ne ~ Nr RENN\n~~ ENS NN BH RHR HTN EN ENN\na a a aS\nSwe RN er er ee NY rN\n-SNN TOTO N OO TNS\n\nMON \u00a9 MMH & WD & B& & 0 Od\nMW & | &| bo vv oe Dy Oe O O&\nNMDOUN MM) KLRAWDAWN\nPOM Dm amy h wow QeH\nMme \u00a9 YM HH Ly & Oo 00 & &\n\nAgaorbo\u201da\nwo pe oe tg h\nDAehbsBYOm\nFP POND\nOR\u2019 MMH\nFigure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using\npersistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM\nbetween successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM,\nbut independent continuous latent variables, and shows the variation induced by the continuous\nlayers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant\ndemonstrate that the RBM has well-separated modes, each of which corresponds to a single (or\noccasionally two) digit IDs, despite being trained in a wholly unsupervised manner.\nFigure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a)\nthe number of units in the RBM (b), and the number of layers in the approximating posterior ove!\nthe RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better per:\nformance, but the network is robust to the size of the RBM (b).\nthousands of iterations of single-temperature block Gibbs sampling is required to mix between the\nmodes. We present corresponding figures for the other datasets, and results on simplified architec-\ntures, in Appendix J.\nThe large mixing time of block Gibbs sampling on the RBM suggests that training may be con-\nstrained by sample quality. Figure 6a shows that performance!\u00ae improves as we increase the num-\nber of iterations of block Gibbs sampling performed per minibatch on the RBM prior: p(z|@) in\nEquation 11. 
This suggests that a further improvement may be achieved by using a more effective\nsampling algorithm, such as parallel tempering (Swendsen & Wang, 1986).\n\u201cAll models in Figure 6 use only 10 layers of continuous latent variables, for computational efficiency.\n| |\nfo fo\nSs S\ni we\n\nLog likelihood\n\n|\n[oa]\nS\nuw\n\n(a) B\n\n1\n\n10\n\n100\n\nlock Gibbs iterations\n\n8 16\n\nL L\n32 64 128\n\n(b) Num RBM units\n\n1 2 4 8\n(c) RBM approx post layers\nCommensurate with the small number of intrinsic classes, a moderately sized RBM yields the best\nperformance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of\nunits in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a\ndataset like Imagenet, which has many classes and complicated relationships between the elements\nof various classes.\nThe benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is\napparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the\napproximating posterior may be due to the fact that each additional hierarchical layer over the ap-\nproximating posterior adds three layers to the encoder neural network: there are two deterministic\nhidden layers for each stochastic latent layer. As a result, expanding the number of RBM approx.\nimating posterior layers significantly increases the number of parameters that must be trained, anc\nincreases the risk of overfitting.\nWe avoid this problem by symmetrically projecting the approximating posterior and the prior into <\ncontinuous space. We then evaluate the autoencoding term of the evidence lower bound exclusively\nin the continous space, marginalizing out the original discrete latent representation. At the same\ntime, we evaluate the KL divergence between the approximating posterior and the true prior in the\noriginal discrete space; due to the symmetry of the projection into the continuous space, it does no\ncontribute to the KL term. To increase representational power, we make the approximating posterio!\nover the discrete latent variables hierarchical, and add a hierarchy of continuous latent variable:\nbelow them. The resulting discrete variational autoencoder achieves state-of-the-art performance or\nthe permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets."}, {"section_index": "5", "section_name": "ACKNOWLEDGEMENTS", "section_text": "Zhengbing Bian, Fabian Chudak, Arash Vahdat helped run experiments. Jack Raymond provided\nthe library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster\nmanagement system, and a custom GPU acceleration library used for an earlier version of the code.\nWe thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and\none of our anonymous reviewers for identifying the problem addressed in Appendix D.3."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in\nNeural Information Processing Systems, pp. 3084-3092, 2013.\nYoshua Bengio, Nicholas L\u00e9onard, and Aaron Courville. Estimating or propagating gradients\nthrough stochastic neurons for conditional computation. 
arXiv preprint arXiv: 1308.3432, 2013.\nDatasets consisting of a discrete set of classes are naturally modeled using discrete latent variables.\nHowever, it is difficult to train probabilistic models over discrete latent variables using efficient\ngradient approximations based upon backpropagation, such as variational autoencoders, since it is\nsenerally not possible to backpropagate through a discrete variable (Bengio et al.. 2013).\nYuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Proceed-\nings of the International Conference on Learning Representations, arXiv:1509.00519, 2016.\nSteve Cheng. Differentiation under the integral sign with weak derivatives. Technical report, Work\ning paper, 2006.\nAaron C. Courville, James S. Bergstra, and Yoshua Bengio. Unsupervised models of images b\nspike-and-slab rbms. In Proceedings of the 28th International Conference on Machine Learnin;\npp. 1145-1152, 2011.\nAlex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint\narXiv: 1607.05690, 2016.\nKarol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregres:\nsive networks. In Proceedings of the 31st International Conference on Machine Learning, pp\n1242-1250, 2014.\nGeoffrey E. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz\nfree energy. In J. D. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information\nProcessing Systems 6, pp. 3-10. Morgan Kaufmann Publishers, Inc., 1994.\nMatthew Johnson, David K Duvenaud, Alexander B Wiltschko, Sandeep R Datta, and Ryan P\nAdams. Composing graphical models with neural networks for structured representations and\nfast inference. In Advances in Neural Information Processing Systems. pp. 2946-2954, 2016.\nMichael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introductiot\nto variational methods for graphical models. Machine learning, 37(2):183\u2014233, 1999.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of\nthe International Conference on Learning Representations, arXiv: 1412.6980, 2015.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,\nAaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor-\nmation Processing Systems, pp. 2672\u20142680, 2014.\nSergey loffe and Christian Szegedy. Batch normalization: Accelerating deep network training by\nreducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine\nLearning, pp. 448-456, 2015.\nYann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to\ndocument recognition. Proceedings of the IEEE, 86(11):2278\u20142324, 1998.\nBenjamin M Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles fot\nrestricted Boltzmann machine learning. In Proceedings of the 13th International Conference or\nArtificial Intelligence and Statistics, pp. 509\u2014516, 2010.\nAndriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. Pro-\nceedings of the 31st International Conference on Machine Learning, pp. 1791-1799, 2014.\nAndriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. In Proceed.\nings of the 33rd International Conference on Machine Learning, pp. 2188-2196, 2016.\nJain Murray and Ruslan R. Salakhutdinov. 
Evaluating probabilities under high-dimensional latent\nvariable models. In Advances in Neural Information Processing Systems, pp. 1137-1144, 2009.\nRadford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113,\n1992.\nBruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by\nlearning a sparse code for natural images. Nature, 381(6583):607\u2014609, 1996.\nJudea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Mor\ngan Kaufmann, 1988.\nAntti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-\nsupervised learning with ladder networks. In Advances in Neural Information Processing Systems,\npp. 3546-3554, 2015.\nMichael R. Shirts and John D. Chodera. Statistically optimal analysis of samples from multiple\nequilibrium states. The Journal of Chemical Physics, 129(12), 2008.\nPaul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In\nD.E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing, volume 1, chapter 6,\npp. 194-281. MIT Press, Cambridge, 1986.\nDavid J. Spiegelhalter and Steffen L. Lauritzen. Sequential updating of conditional probabilities o\ndirected graphical structures. Networks, 20(5):579-605, 1990.\nRonald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement\nlearning. Machine learning, 8(3-4):229-256, 1992.\nA MULTIVARIATE VAES BASED ON THE CUMULATIVE DISTRIBUTION\nFUNCTION\nThe reparameterization trick is always possible if the cumulative distribution function (CDF) of\nq(z|2x, @) is invertible, and the inverse CDF is differentiable, as noted in Kingma & Welling (2014).\nHowever, for multivariate distributions, the CDF is defined by:\nRuslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of the\n19th International Cnanterence an Artifrial Intellicanre and Statictire nn AAR_ASS IND\nRuslan Salakhutdinov and Jain Murray. On the quantitative analysis of deep belief networks. In\nProceedings of the 25th International Conference on Machine Learning, pp. 872-879. ACM,\n2008.\nThe multivariate CDF maps R\u201d \u2014> [0, 1], and is generally nor invertible.!!\nin place of the multivariate CDF, consider the set of conditional-marginal CDFs defined by:!\nThat is, F(x) is the CDF of ;, conditioned on all a; such that i < h, and marginalized over\nall a, such the j < k. The range of each F; is [0,1], so F maps the domain of the original\ndistribution to p \u20ac [0, 1]\". To invert F, we need only invert each conditional-marginal CDF in turn,\nconditioning a; = Fr *(p) on x, = Fy, 1(p),....%j-1 = Fri (0): These inverses exist so long as\nthe conditional-marginal probabilities are everywhere nonzero. It is not problematic to effectively\ndefine F>*(p) based upon x;<;, rather than p;<;, since by induction we can uniquely determine\nicy given pic;.\nUsing integration-by-substition, we can compute the gradient of the ELBO by taking the expectation\nof a uniform random variable p on (0, 1)\", and using F_, ae z|a,\u00a2) \u00a9 transform p back to the element\nof z on which p(2|z, ) is conditioned. 
To perform integration-by-substitution, we will require the\ndeterminant of the Jacobian of F~!.\nThe derivative of a CDF is the probability density function at the selected point, and F; is a simple\nCDF when we hold fixed the variables x;<; on which it is conditioned, so using the inverse function\ntheorem we find:\nwhere p is a vector, and Fri is oF Dy . The Jacobian matrix 9E F is triangular, since the earlier conditional-\nmarginal CDFs F\u2019; are independent of the value of the later 2, J < k, over which they are marginal-\nized. Moreover, the inverse conditional-marginal CDFs have the same dependence structure as F,\nso the Jacobian of F~! is also triangular. The determinant of a triangular matrix is the product of\nthe diagonal elements.\nUsing these facts to perform a multivariate integration-by-substitution, we obtain:\nEq(z\\2,\u00a2) [log p(x|z,9)] = | q(2|x, ) - log p(x|z, 4)\n\n| ach 0)\n~ [9 (Fading (les 2) -logp (2 a|FC) |x.) (0), 6) - |det (nee *)\n\n1\n= I (Fidiew) (Ale \u00b0) \u201clogp (x Fein.) 1(0).) : (1 Mateo) _\np\n\n=0 Il; qd (2; = F, (ole)\n\n~ [. loge (clF |r.) 1(0),8)\n[he gradient with respect to \u00a2 is then easy to approximate stochastically:\n(e) 1 te) _\nag nalelne) [log p(x|z, 0)] \u00a9 W \u00bb 3g O8P (cP sd )2,0) (0-9)\npr 1)\"\nNote that if q(z|:, \u00a2) is factorial (i.e., the product of independent distributions in each dimension z;),\nthen the conditional-marginal CDFs F\u2019; are just the marginal CDFs in each direction. However, even\nif q(z|x, @) is not factorial, Equation 17 still holds so long as F is nevertheless defined to be the set\nof conditional-marginal CDFs of Equation 15."}, {"section_index": "7", "section_name": "B THE DIFFICULTY OF ESTIMATING GRADIENTS OF THE ELBO WIT}\nREINFORCE", "section_text": "It is easy to construct a stochastic approximation to the gradient of the ELBO that only requires\ncomputationally tractable samples, and admits both discrete and continuous latent variables. Un-\nfortunately, this naive estimate is impractically high-variance, leading to slow training and poo!\nperformance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the\nbaseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih &\nGregor, 2014: Williams. 1992: Bengio et al., 2013: Mnih & Rezende, 2016):\na\nFBacoe le nll] = Eye [Boe r(ale,8) ~ BC tow alle. 6)]\n\n=F LD _ (oer(ale.#) ~ Bley] 2 tow.)\n\nznq(z\\2,)\nwhere B(x) is a (possibly input-dependent) baseline, which does not affect the gradient, but cai\nreduce the variance of a stochastic estimate of the expectation.\nIn REINFORCE, FE q(zI\\0,4) {log p(x|z, 4)] is effectively estimated by something akin to a finit\ndifference approximation to the derivative. The autoencoding term is a function of the conditiona\nlog-likelihood log p(|z,@), composed with the approximating posterior q(z|x,@), which deter\nmines the value of z at which p(2|z,@) is evaluated. However, the conditional log-likelihood i\nnever differentiated directly in REINFORCE, even in the context of the chain rule. Rather, the con\nditional log-likelihood is evaluated at many different points z ~ q(z|z,\u00a2), and a weighted sum o\nthese values is used to approximate the gradient, just like in the finite difference approximation.\nEquation 18 of REINFORCE captures much less information about p(|z, 0) per sample than Equa-\ntion 3 of the variational autoencoder, which actively makes use of the gradient. 
In particular, the\n\nchange of p(x|z, @) in some direction dcan only affect the REINFORCE gradient estimate if a sam-\nple is taken with a component in direction d. Ina D-dimensional latent space, at least D samples are\nsje) (log p(|z, 8)] = | alex, 9) \u00ablog p(a|z,8)\n\n_ [ia (Fading (les ) \u2018logy (# a\\F yb ja.u (0): )\n\nee ach x.) \u201c|\n\n1\n= I (Fading ()le+\u00a2) logp (IF i,,4)(6)-8) - (1 Mateo) _\n\n[ 4 (Fye.o)(0)le.9)\n0\n\n=0 TI 4 (4 = Fy eis\n\n= [. log p (IF ,d |x,6) (p), 6)\n\n5 lose (21Ftha.0)(0)4)\n[he variable p has dimensionality equal to that of z; 0 is the vector of all Os; 1 is the vector of all 1s.\nC AUGMENTING DISCRETE LATENT VARIABLES WITH CONTINUOUS LATEN'\nVARIABLES\nIntuitively, variational autoencoders break the encoder! distribution into \u201cpackets\u201d of probability\nof infinitessimal but equal mass, within which the value of the latent variables is approximately\nconstant. These packets correspond to a region rj < pi < rj + 6 for all i in Equation 16, and the\nexpectation is taken over these packets. There are more packets in regions of high probability, so\nhigh-probability values are more likely to be selected. More rigorously, F4(z|\u00ab,)(\u00a2) maps intervals\nof high probability to larger spans of 0 < p < 1, so a randomly selected p ~ U[0, 1] is more likely\nto be mapped to a high-probability point by Fos (p).\nAs the parameters of the encoder are changed, the location of a packet can move, while its mass is\nheld constant. That is, \u00a2 = Fit 2,0) (p) is a function of \u00a2, whereas the probability mass associated\nwith a region of p-space is constant by definition. So long as Foie, 2)\nsmall change in \u00a2 will correspond to a small change in the location of each packet. This allows us tc\nuse the gradient of the decoder to estimate the change in the loss function, since the gradient of the\ndecoder captures the effect of small changes in the location of a selected packet in the latent space.\n\nexists and is differentiable, 2\nIn contrast, REINFORCE (Equation 18) breaks the latent represention into segments of infinites-\nsimal but equal volume; e.g., z; < Zz < 2; +6 for all i (Williams, 1992; Mnih & Gregor, 2014;\nBengio et al., 2013). The latent variables are also approximately constant within these segments,\nbut the probability mass varies between them. Specifically, the probability mass of the segment\nz <2! < z+ 6is proportional to g(z|x\nOnce a segment is selected in the latent space, its location is independent of the encoder and decodet\nIn particular, the gradient of the loss function does not depend on the gradient of the decoder with\nrespect to position in the latent space, since this position is fixed. Only the probability mass assignec\nto the segment is relevant.\nAlthough variational autoencoders can make use of the additional gradient information from the\ndecoder, the gradient estimate is only low-variance so long as the motion of most probability packet:\nhas a similar effect on the loss. This is likely to be the case if the packets are tightly clustered (e.g.\nthe encoder produces a Gaussian with low variance, or the spike-and-exponential distribution o:\nSection 2.1), or if the movements of far-separated packets have a similar effect on the total loss (e.g.\nthe decoder is roughly linear).\nSince the approximating posterior q(z|x, @) maps each input to a distribution over the latent space, it i:\nsometimes called the encoder. 
Correspondingly, since the conditional likelihood p(x|z,) maps each configu\nration of the latent variables to a distribution over the input space, it is called the decoder.\nrequired to capture the variation of p(x|z, @) in all directions; fewer samples span a smaller subspace.\nSince the latent representation commonly consists of dozens of variables, the REINFORCE gradi-\nent estimate can be much less efficient than one that makes direct use of the gradient of p(x|z, 0).\nMoreover, we will show in Section 5 that, when the gradient is calculated efficiently, hundreds of\nlatent variables can be used effectively.\nNevertheless, Equation 17 of the VAE can be understood in analogy to dropout (Srivastava et al.,\n\n2014) or standout (Ba & Frey, 2013) regularization. Like dropout and standout, Foc, )(P p) is an\n\nelement-wise stochastic nonlinearity applied to a hidden layer. Since Flew oP p) selects a point\nin the probability distribution, it rarely selects an improbable point. Like standout, the distribution\nof the hidden layer is learned. Indeed, we recover the encoder of standout if we use the spike-and-\nGaussian distribution of Section E.1 and let the standard deviation o go to zero.\nHowever, variational autoencoders cannot be used directly with discrete latent representations, since\nchanging the parameters of a discrete encoder can only move probability mass between the allowed\ndiscrete values, which are far apart. If we follow a probability packet as we change the encoder\nparameters, it either remains in place, or jumps a large distance. As a result, the vast majority of\nprobability packets are unaffected by small changes to the parameters of the encoder. Even if we are\nlucky enough to select a packet that jumps between the discrete values of the latent representation,\nthe gradient of the decoder cannot be used to accurately estimate the change in the loss function.\nsince the gradient only captures the effect of very small movements of the probability packet.\nTo use discrete latent representations in the variational autoencoder framework, we must first trans\nform to a continuous latent space, within which probability packets move smoothly. That is, wi\nmust compute Equation 17 over a different distribution than the original posterior distribution. Sur\nprisingly, we need not sacrifice the original discrete latent space, with its associated approximatin;\nposterior. Rather, we extend the encoder q(z|x,) and the prior p(z|@) with a transformation to ;\ncontinuous, auxiliary latent representation \u00a2, and correspondingly make the decoder a function o\nthis new continuous representation. By extending both the encoder and the prior in the same way\nwe avoid affecting the remaining KL divergence in Equation 2.!\u00a2\nThe gradient is defined everywhere if we require that each point in the original latent space map tc\nnonzero probability over the entire auxiliary continuous space. This ensures that, if the probability\nof some point in the original latent space increases from zero to a nonzero value, no probability\npacket needs to jump a large distance to cover the resulting new region in the auxiliary continuous\nspace. 
Moreover, it ensures that the conditional-marginal CDFs are strictly increasing as a functior\nof their main argument, and thus are invertible.\nIf we ignore the cases where some discrete latent variable has probability 0 or 1, we need only\nrequire that, for every pair of points in the original latent space, the associated regions of nonzero\nprobability in the auxiliary continuous space overlap. This ensures that probability packets can move\ncontinuously as the parameters @ of the encoder, q(z|x, \u00a2), change, redistributing weight amongst\nthe associated regions of the auxiliary continuous space.\nD_ ALTERNATIVE TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS\nLATENT REPRESENTATIONS\nAs another concrete example, we consider a case where both r(\u00a2;|z; = 0) and r(\u00a2;|z; = 1) are\nlinear functions of \u00a2;:\n2-(1-G), if0<G <1 , , , 1\nr(Gi|zi = 0) = to (1-G) vtherhe Fy jaaoy(C\") = 26 \u2014 GI = 20\" 07\n\nGla=0 = {5 if0<G <1\n\n0, otherwise Fra) = @ I< =\nwhere F,,(\u00a2\u2019) = fone p(\u00a2) - d\u00a2 is the CDF of probability distribution p in the domain [0, 1]. The\nCDF for q(C\\x, d) as a function of g(z = 1\\x, d) is:\nFy(ciao)(6!) = (1 (2 = Ver, 9) (26 \u2014) + a(e = Ie, 6)\"\n\n=2-g(z=la,9)- (67 = 6) 426-67,\n'4Rather than extend the encoder and the prior, we cannot simply prepend the transformation to continuous\nspace to the decoder, since this does not change the space of the probabilty packets.\nThe spike-and-exponential transformation from discrete latent variables z to continuous latent vari-\nables \u00a2 presented in Section 2.1 is by no means the only one possible. Here, we develop a collection\nof alternative transformations.\nWe can calculate Fide, #) explicitly, using the substitutions Fy(\u00a2jz,4) > p, q(z = 1a, 6) > q, ane\n\n\u00a2c\u2019 > C in Equation 19 to simplify notation:\n(@Q-1)+ VP? +2(p-Dat+\u2014-p\n2q-1\n_q-1+VJq-1?+(2q-1)-p\n\n2q-1\n\nF~*(p)\n(0)\n\n-1\na(cla.\u00a2\n\ni Ll Ll i\n02 04 06 08\nq(z = 1x, \u00a2)\nFigure 7: Inverse CDF of the mixture of ramps transformation for p \u20ac {0.2,0.5,0.8"}, {"section_index": "8", "section_name": "D.2 SPIKE-AND-SLAB", "section_text": "oo, if\u00a2 =0\n\njo. = _ 1)\nr(Gilzi = 0) \\o otherwise Fy(\u00a2\\2:=0)(\u00a2') = 1\n\n1, f0<G<1\n\nJe \u2014-1l)= Nao act\nr(Gilzi = 1) {0 otherwise Fria) = Gilg =\u00a2\nFy(cle,o)(6\") = (1 \u2014 a(z = Ue, )) - Fecejexoy (6) + a(2 = Ma, 6) \u00bb Feeijevaay(\u00a2\n= q(z = la,d)-(\u00a2-1) +1.\npa=2-q(C-\u2014G+2%\u00a2-\u00a2\n0= (2q-1)-( +2(1\u20149)-\u00a2-p\n\u00a2 2(q\u2014-1) + V4 - 2\u00a2 + @) + 4(2q \u2014 Lp\n2(2q \u2014 1)\n(q-N)+V+4+2(p\u2014-Dat+\u2014p)\n2q-1\nWe can also use the spike-and-slab transformation, which is consistent with sparse coding and\nproven in other successful generative models (Courville et al., 2011):\np-1 i _\nP-+1, ifp>l\u2014q\n1 _ q \u2019\nFa(Cle.8) (p) 10 otherwise\nWe plot F, on __4.(p) as a function of q for various values of p in Figure 8\n)(p)\n\n-1\n\nq(C|a.6\n\nL Ll i f\n02 04 06 08\nq(z = |x, 4)\nFigure 8: Inverse CDF of the spike-and-slab transformation for p \u20ac {0.2, 0.5, 0.8!\n(a)\n2 rrp) =\n8 | pry)\n\n;\nple): Ze Mp) =\n\n_ OF\nOz\n\nOa\naor (A) =!\n\nF-\\(p)\n\nz\nIt is not necessary to define the transformation from discrete to continuous latent variables in the\napproximating posterior, r(C|z), to be independent of the input x. In the true posterior distribution,\nWe can calculate F-, ,. 
explicitly, using the substitution g(z = 1|x, \u00a2) \u2014 q to simplify notation:\nIf the smoothing transformation is not chosen appropriately, the contribution of low-probability\nregions to the expected gradient of the inverse CDF may be large. Using a variant of the inverse\nfunction theorem, we find:\nwhere z = F'~1(p). Consider the case where r(\u00a2;|z; = 0) and r(\u00a2;|z; = 1) are unimodal, but have\nlittle overlap. For instance, both distributions might be Gaussian, with means that are many standard\ndeviations apart. For values of \u00a2; between the two modes, F(\u00a2;) ~ q(zi = O|x,\u00a2@), assuming\n\nwithout loss of generality that the mode corresponding to z; = 0 occurs at a smaller value of G than\nthat corresponding to z; = 1. As a result, x = 1 between the two modes, and or & A y even\nif r(\u00a2;) ~& 0. In this case, the stochastic estimates of the gradient in equation 8, which depend upon\n\nor\u201d , have large variance.\nThese high-variance gradient estimates arise because r(\u00a2;|z; = 0) and r(\u00a2;|z; = 1) are too well\nseparated, and the resulting smoothing transformation is too sharp. Such disjoint smoothing trans-\nformations are analogous to a sigmoid transfer function o(c - x), where o is the logistic function\nand c \u2014 oo. The smoothing provided by the continuous random variables \u00a2 is only effective\nif there is a region of meaningful overlap between r(\u00a2|z = 0) and r(\u00a2\\z = 1). In particular,\nY., (Glzi = 0) +r(Gilzi = 1) > 0 for all \u00a2; between the modes of r(\u00a2;|zi = 0) and r(Gi]z; = 1),\nso p(z) remains moderate in equation 21. In the spike-and-exponential distribution described in\nSection 2.1, this overlap can be ensured by fixing or bounding 8.\np(\u00a2|z, x) \u00a9 p(\u00a2|z) only if z already captures most of the information about x and p(\u00a2|z, x) changes\nlittle as a function of x, since\np(cle) = f r(G.2l2) = | r(Cle.2) - plete.\nThis is implausible if the number of discrete latent variables is much smaller than the entropy of the\ninput data distribution. To address this, we can define:\nq(G, zl\", @) = q(2|z, 6) a(lz,2,0\nPG, 218) = v(\u00a2|z) - p(z|9)\nThis leads to an evidence lower bound that resembles that of Equation 2, but adds an extra term:\nThe extension to hierarchical approximating posteriors proceeds as in sections 3 and 4.\nIf both g(\u00a2|z, x, \u00a2) and p(\u00a2|z) are Gaussian, then their KL divergence has a simple closed form,\nwhich is computationally efficient if the covariance matrices are diagonal. However, while the gra-\ndients of this KL divergence are easy to calculate when conditioned on z, the gradients with respect\nof g(zlx. d) in the new term seem to force us into a REINFORCE-like approach (c.f. Equation 18):\nlog q(z|x. 
6)\n\nHME) KL fa(le.2, 9llC|2)] = Eyaw) [KL (o(l2.2, 4) ley] 8\n\n0\u00a2\n\nz\n(23\nThe reward signal is now KL [q(\u00a2|z, x, \u00a2)||p(\u00a2|z)] rather than log p(x|z,), but the effect on th\n\nvariance is the same, likely negating the advantages of the variational autoencoder in the rest of th\nlace fiinctinn\nAlog q(z\\x, 4)\n\nHME) KL fa(le.2, 9llC|2)] = Eyaw) [KL (o(l2.2, 4) ley] 8\n\nz\nHowever, whereas REINFORCE is high-variance because it samples over the expectation, we can\nperform the expectation in Equation 23 analytically, without injecting any additional variance.\nSpecifically, if q(z|a,@) and q(\u00a2|z,\u00ab,) are factorial, with q(\u00a2;|zi,7,@) only dependent on z;,\nthen KL [q(\u00a2|z, x, )||p(\u00a2|z)] decomposes into a sum of the KL divergences over each variable, as\ndoes Steg qle.6) The expectation of all terms in the resulting product of sums is zero except those"}, {"section_index": "9", "section_name": "E.1 SPIKE-AND-GAUSSIAN", "section_text": "We might wish \u00a2(\u00a2;|z;, x, @) to be a separate Gaussian for both values of the binary z;. However, it\nis difficult to invert the CDF of the resulting mixture of Gaussians. It is much easier to use a mixture\nof a delta spike and a Gaussian, for which the CDF can inverted piecewise:\n= 108 PULIF) BEING 125 hs P) * LAL, P)IPAS 23 U3 7) * PAZI 5 7)\n\n- 5) (clad) tom [PUIG 8) -P(Cl=.8) -rlel0)\n=X facle.2.9) -acle,0) om ey ate\n\n= Eg(\u00a2\\z,0,4)-4(2|2,\u00a2) [log p(x|\u00a2, \u00ae)] \u2014 KL [g(z|, 9)||p(z19)]\n\u2014 SF (zl, \u00a2)- KL [a(\u00a2|z,2,4)||(Cl2)] -\n0, if\u00a2; <0\n1, otherwise\n\na(Gilzi = 0, @, 6) = 5(G) Fa (\u00e9:\\2:=0,0,6)(G) = H(Gi) = {\n\nil@; =1,2,\u00a2) = Ug i(a, b), 02 (a, o o(G) <2 /14 Gi = Ha.i(e-9)\nq(Gil2i = 1,2, \u00a2) Np gil .), ail ,6)) Fy(e;2i=1,0,) (Gi) 3 [ T at( V0, ale, d) )|\nwhere j1g(x, @) and o4(x, \u00a2) are functions of z and \u00a2. We use the substitutions q(z; = 1|x,@) > q,\nMqi(X, 0) > ftg,i, and o4,;(a,) \u2014> oq,: in the sequel to simplify notation. The prior distribution p\nis similarly parameterized.\nWe can now find the CDF for q(\u00a2|z, @) as a function of g(z = 1|x,\u00a2) > qg\nFacjx,\u00a2)(G) = (1 - a) - (Gi)\n\nGi = Hayi\nl+erf (Sat )|\n\nGi\n+5\nSince z; = 0 makes no contribution to the CDF until \u00a2; = 0, the value of p at which \u00a2; = 0 is\nMai + V204,i -erf? (% - 1) ; if pi < pier\nG= 40, if pi\"? <pi< pi\"? +a)\n\nHq + V 204, erf | (248 + 1) , otherwise\nGradients are always evaluated for fixed choices of p, and gradients are never taken with respect\nto p. As aresult. expectations with respect to p are invariant to permutations of 9. Furthermore,\n_ 9, ifpi <1-\u20144qi\n= Hgi + V20q,-erf (o> + 1) , otherwise\nAll parameters of the multivariate Gaussians should be trainable functions of x, and independent \u00ab\nq. The new term in Equation 22 is:\n2 - 2\nKL [q||p] = > (us Opi \u2014 log ogi + Fai + (Hari = Moi) 3)\n2\n\n2\na 2 . 
TD,\nTo train q(z; = 1|x, @), we thus need to backpropagate KL [q(\u00a2;|z; = )|\\p(C;|z; = 1)] into i\nCSMINP] _ Hai \u2014 Hpi\n\nOltg,i oi\nOKLg\\|pP] _ Logi\n004, Oq,i Oni\nste\nye P\n\n4 )r+en (a\n\nTai\n\nV204,i\n\n)|\ndi\n\n1)\n\n_ 2p) =\n\n+1\n> azl2, 4) KL [a(6lz,2, 9)|Ip(6lz)] =\n\nz\n\nYo ai = Ue, $) KL [a Gila = 1,2, 6)|I(Gla =D)\n\nZt\n\n+ (1\u2014q(% = 1x, 4)) KL [g(Gilzi = 0, 2, 6)||p(Glz = 0)]\nLagi\nPat ~~\nLp,i\n\nYalele.9)- 5\u00b0\nOu KL {q||p]\nUa =1\n|x, 9)\n\nz\nqt\n2\nPit\n1\n\nley\n\nYi alele.9)- 5\u00b0\n3o, KL la\n\\|p] = a(zi = 12,4) (\n-- + se)\nai Op\npi\n\nz\nqt\nFor p, it is not useful to make the mean values of \u00a2 adjustable for each value of z, since this is\nredundant with the parameterization of the decoder. With fixed means, we could still parameterize\nthe variance, but to maintain correspondence with the standard VAE, we choose the variance to be\none.\nThe KL term of the ELBO (Equation 2) is not significantly affected by the introduction of additiona\ncontinuous latent variables \u00a2, so long as we use the same expansion r(\u00a2|z) for both the approximat\ning posterior and the prior:\n_ r(Cle;) \u00abglz;\\Ge,.2) | \u00ablo Thejcn (Giles): (2ilGi<j, 2)\nKL [all -r/ (1, (Gales) - af a) 'e8 | P(2)-Thejcn (Gil2a) |\n\n1Sj<k P()\nThe gradient of Equation 24 with respect to the parameters @ of the prior, p(z|@), can be es-\ntimated stochastically using samples from the approximating posterior, g(\u00a2, z|z, @), and the true\nprior, p(z|@). When the prior is an RBM, defined by Equation 6, we find:\n(a) OE,(z,0 OEY (2,8\n\u2014 2 KL ale] =~ Yo a6 22,9) PE) +See OF 2.8)\nGz\n\nJE, (2,0) OB, (2,0)\n= \u2014Eq(21\\2,9) [ [Eataiccnae) 6 \"7 + Epc2|a) 30\nThe final expectation with respect to q(zx|Cick, 2, ) can be performed analytically; all other expec-\ntations require samples from the approximating posterior. Similarly, for the prior, we must sample\nfrom the RBM, although Rao-Blackwellization can be used to marginalize half of the units.\nIn contrast, the gradient of the KL term with respect to the parameters of the approximating posterior\nis severely complicated by a nonfactorial approximating posterior. We break KL [q||p| into two\nterms, the negative entropy Vc qlog q, and the cross-entropy \u2014 Vac qlog p, and compute their\ngradients separately.\n\u201ctale = > [ ( Il His) \u201cHees \u201ctog [isreere aee |\n\nz 1<j<k P(2)* Mhejen (Gl2i)\n\n-r/ ( I] Giles) taker] - log [Pete ) steer) : (24)\n\n1Sj<k P()\n-H(q)= x / (1 . r(Gjlz3) - shoes} \u00ab| Il g(2;|Ci<j, 2)\n<j<\n\n1SjSk\n-\u00a9[ (Tl r(Gil25) - azilGi<y, \u00ae \u00bb) . [Zvewaiee)\nJ\n-=b/ (1 Hsle)-sls) log q(zj|Ce<j,#)\ni<j\n= SE Kc.cj.2cjl2.9) = g(2j|Gi<j,@) - log rok)\nj 5\nLE <i ba (2j|Pi<j,2) - brats\n\n2\nWe wish to take the gradient of \u2014H(q) in Equation 26. Using the identity:\n& [ogo] =e Doe (Sela) = 35 (Ze) =!\n\u2014 Ha) = Le Ena bs (Foateilcne)) uss) .\n\n25\n\u201c|\n\nLeg\n0g, i\nOS\n\nTY aled (BD (ales\n\n(40 [als =1) Ble = np]\n\n0\u00a2\nwhere u and z, correspond to single variables within the hierarchical groups denoted by 7. In Ten-\nsorFlow, it might be simpler to write:\nWe can regroup the negative entropy term of the KL divergence so as to use the reparameterization\ntrick to backpropagate through []._. 
g(z;\\Gie\n1<j<k\n\nella r(Gjlzj) - Weslbicg, \u00a9 von) : [Zvewaiee)\n\n-=b/ (1 Hsle)-sls) log q(zj|Ce<j,#)\ni<j\n\n1<j<k\n\n= SE Kc.cj.2cjl2.9) = a(zi\\Gi<j,@) - log q(zjlGi<j; 7\nj 3\n\n2\n\nLE a ba (2j|Pi<j,2) - brats\n\n|\nwhere indices i and j denote hierarchical groups of variables. The probability q(z;|p;<j,\u00ab) is\nevaluated analytically, whereas all variables z;<; and \u00a2;<; are implicitly sampled stochastically\nVia Pies.\nMoreover, we can eliminate any log-partition function in log q(z;|p;<;,) by an argument analogous\nto Equation 27.'5 By repeating this argument one more time, we can break Bey |Picj, 2) into its\n\nfactorial components.'\u00a9 If z; \u20ac {0,1}, then using Equation 10, gradient of the negative entropy\nreduces to:\n|\n\nLeg\n0g, i\nOS\n\nTE ated: (BE (wera\n\n(40 [als =1) Ble = \u00bb|\n\nag.\n\n0\u00a2\n0 0 0 to)\n~ 9g 2218 Du gt Bet 359 8% dat\ndia Bp=-B\n\n-W-z+b!-z\n\na\n-E, [b' -z] =b' -E, aoa =D)\n-W.- z=) OW: By 2;\n\nag\ndepends upon variables that are not usually in the same hierarchical level, so in general\nwhere without loss of generality z; is in an earlier hierarchical layer than z;; however, it is not clear\nhow to take the derivative of z;, since it is a discontinuous function of p,<;."}, {"section_index": "10", "section_name": "F.3. NAIVE APPROACH", "section_text": "The naive approach would be to take the gradient of the expectation using the gradient of log-\nprobabilities over all variables:\n(a) ; (2)\naa\" [Wij 2i2;] = Ey [Wass : 30 08 (\n\n, 0\n= Eq, go)1,... wise Ss a6 log sone\nk\n\n| Ow\\t<k\n= Eg ga wuss \u00bb a\n\nIk\\l<k\nThe gradient calculation in Equation 28 is an instance of the REINFORCE algorithm (Equation 18)\nMoreover, the variance of the estimate is proportional to the number of terms (to the extent that th\nterms are independent). The number of terms contributing to each gradient Ptah grows quadrati\n\ncally with number of units in the RBM. We can introduce a baseline, as in NVIL (Mnih & Grego\n2014):\nfe a, |\n0\nEy | (Wij2iz; \u2014 e(x)) - a0 log q|\nEy [Wij 2125] = Wiz Ep, -; [21> Ep,., [z]] ,\nO\n\na6\n\nE[W,;ziz;] =\n\nEy [Waxes : Flos i\n\n, 0\n= Eq, go)1,... wise Ss a6 log sone\nk\n\n| Ow\\t<k\n= Egan. [eee x \u2014\n\ndk\\l<k\n\ntet]\nFor as we can drop out terms involving only z;<, and z;<, that occur hierarchically before k,\n\nsince those terms can be pulled out of the expectation over q,, and we can apply Equation 27.\nHowever, for terms involving z;s or zj>, that occur hierarchically after k, the expected value of z;\nor z; depends upon the chosen value of z,\n+>\n\nterms are independent). [he number of terms contributing to each gradient - grows quadrati-\nWhen using the spike-and-exponential, spike-and-slab, or spike-and-Gaussian distributions of sec-\ntions 2.1 D.2, and E.1, we can decompose the gradient of E [W;,z;z;] using the chain rule. Previ-\nously, we have considered z to be a function of p and @. We can instead formulate z as a function o:\ng(z = 1) and p, where g(z = 1) is itself a function of p and @. Specifically,\n2i(gi(zi = 1), pi) = {\n\n0 ifp; <l-gla=1) =ai(%\n1 otherwise.\nUsing the chain rule, ae 3 aq an : ee, where oa a ay holds all q,.4; fixed, even\ny=\n\nthough they all depend on the common variables p and parameters \u00a2. We use the chain rule to\ndifferentiate with respect to g(z = 1) since it allows us to pull part of the integral over p inside the\nderivative with respect to \u00a2. 
In the sequel, we sometimes write g in place of g(z = 1) to minimize\nnotational clutter.\nExpanding the desired gradient using the reparameterization trick and the chain rule, we find\n(2) o ,\na6 5g Bo [Wisi\n\nE, (Wizz) = ae\n\nOW; 2:2; Ogn(ze = 1)\n=E jer,\n\u00b0 \u00bb Ox (2% = 1) Og\nWe can change the order of integration (via the expectation) and differentiation since\n|Wi52123| < Wij < 00\nfor all p and bounded \u00a2 (Cheng, 2006). Although z(q, p) is a step function, and its derivative is\na delta function, the integral (corresponding to the expectation with respect to p) of its derivative\nis finite. Rather than dealing with generalized functions directly, we apply the definition of the\nderivative, and push through the matching integral to recover a finite quantity.\nE, [ 240) 2G) OG = |\n\nOqi(zi = 1) Oe\n=E vm in 2G + 81,0) 2G + Sai, P) \u2014 Way = ia, 0) (G0) Oailzi =D)\n\u00b0 5qi(zi=1) 30 bi ae\n\nEps lim 6q)- Wij 1+ 2;(a,0) \u2014 Wij 0+ 2)(4,0) Oai(zi = 1)\n6qi(zi=1) 0 54; ab\n\n|\n\nsate]\n\n= Ens IW, %(4) \u201cag\nThe third line follows from Equation 29, since z;(q + 5q;, \u00a2) differs from z;(q, \u00bb) only in the region\nof p of size dq; around g;(z; = 0) = 1 \u2014 gi(z; = 1) where z;(q + 6q:, p) 4 zi(q, p). Regardless of\nthe choice of p, z;(q\u00a2 + 6q;, p) = 2;(q, p):\nThe third line fixes p; to the transition between z; 0 and z; 1 at qi(z; = 0). Since z; = 0\nimplies \u00a2; = 0,!7 and \u00a2 is a continuous function of p, the third line implies that \u00a2; = 0. At the same\n\ntime, since gq; is only a function of p;<; from earlier in the hierarchy, the term Oa is not affected by\n\nthe choice of p;.!8 As noted above. due to the chain rule. the perturbation dq; has no effect on other\nFor simplicity, we pull the sum over k out of the expectation in Equation 30, and consider each\nsummand independently. From Equation 29, we see that z; is only a function of q;, so all terms\nin the sum over k in Equation 30 vanish except k = i and k = j. Without loss of generality, we\nconsider the term k = 7; the term k = 7 is symmetric. Applying the definition of the gradient to one\nof the summands, and then analytically taking the expectation with respect to p;, we obtain:\nOWij 2i(G Pp) 25(G. 0) Ogle = >|\naaqi(z = 1) a6\n\n| tim Mia 2 2g + Ogi, 0) + 25 (9 + OG, 0) ~ Wig (9, 0) (G0) Ogi(@ = 1)\nE, .\n\n5qi(2i=1) 40 qi : ae\nz lim 5q:- Wij 1+ 2;(4, p) \u2014 Wig - 9+ 23 (9, p) ; Oqi(zi = 1)\nPAY sa(ai=1) 90 0\u201d Ob ov=ile:=0)\nOqi(zi = 1)\nm | _ a9 pi=qi (zi=0)\n\n|\nSince p; is fixed such that \u00a2; = 0, all units further down the hierarchy must be sampled consis-\ntent with this restriction. A sample from p has \u00a2; = 0 if z; = 0, which occurs with probability\nqi(z: = 0).!\u00b0 We can compute the gradient with a stochastic approximation by multiplying each\nsample, by 1 \u2014 z;, so that terms with \u00a2; 4 0 are ignored,\u201d\u00b0 and scaling up the gradient when z; = 0\nhv\nOqi(%i = 1)\n1-% ay 7\n1-g(z=1)\n\non [Wij2i2;] = E, |Wi; -\nOg\nG MOTIVATION FOR BUILDING APPROXIMATING POSTERIOR AND PRIOR\nHIERARCHIES IN THE SAME ORDER\nIntuition regarding the difficulty of approximating the posterior distribution over the latent variables\ngiven the data can be developed by considering sparse coding, an approach that uses a basis set of\nspatially locallized filters (Olshausen & Field, 1996). The basis set is overcomplete, and there are\ngenerally many basis elements similar to any selected basis element. 
However, the sparsity prior\npushes the posterior distribution to use only one amongst each set of similar basis elements.\nAs a result, there is a large set of sparse representations of roughly equivalent quality for any singl\u00ab\ninput. Each basis element individually can be replaced with a similar basis element. However\nhaving changed one basis element, the optimal choice for the adjacent elements also changes sc\nthe filters mesh properly, avoiding redundancy or gaps. The true posterior is thus highly correlated\nsince even after conditioning on the input, the probability of a given basis element depends strongly\non the selection of the adjacent basis elements.\nThese equivalent representations can easily be disambiguated by the successive layers of the rep-\nresentation. In the simplest case, the previous layer could directly specify which correlated set of\nbasis elements to use amongst the applicable sets. We can therefore achieve greater efficiency by\ninferring the approximating posterior over the top-most latent layer first. Only then do we compute\nthe conditional approximating posteriors of lower layers given a sample from the approximating\nposterior of the higher layers, breaking the symmetry between representations of similar quality."}, {"section_index": "11", "section_name": "H ARCHITECTURE", "section_text": "The stochastic approximation to the ELBO is computed via one pass down the approximating pos-\nterior (Figure 4a), sampling from each continuous latent layer \u00a2; and 3,,51 in turn; and another pass\ndown the prior (Figure 4b), conditioned on the sample from the approximating posterior. In the pass\ndown the prior, signals do not flow from layer to layer through the entire model. Rather, the input\nto each layer is determined by the approximating posterior of the previous layers, as follows from\nEquation 14. The gradient is computed by backpropagating the reconstruction log-likelihood, and\nthe KL divergence between the approximating posterior and true prior at each layer, through this\ndifferentiable structure.\n\u2018Tt might also be the case that \u00a2; = 0 when z; = 1, but with our choice of r(\u00a2|z), this has vanishing];\nsmall probability.\n\npr: be Pah fant that ~ - [N17\nAll hyperparameters were tuned via manual experimentation. Except in Figure 6, RBMs have 128\nunits (64 units per side, with full bipartite connections between the two sides), with 4 layers o!\nhierarchy in the approximating posterior. We use 100 iterations of block Gibbs sampling, with 2(\npersistent chains per element of the minibatch, to sample from the prior in the stochastic approxi\nmation to Equation 11.\nWhen using the hierarchy of continuous latent variables described in Section 4, discrete VAEs overfit\nif any component of the prior is overparameterized, as shown in Figure 9a. In contrast, a larger\nand more powerful approximating posterior generally did not reduce performance within the range\nexamined, as in Figure 9b. In response, we manually tuned the number of layers of continuous latent\nvariables, the number of such continuous latent variables per layer, the number of deterministic\nhidden units per layer in the neural network defining each hierarchical layer of the prior, and the\nuse of parameter sharing in the prior. We list the selected values in Table 2. All neural networks\nimplementing components of the approximating posterior contain two hidden layers of 2000 units.\nTable 2: Architectural hyperparameters used for each dataset. 
Successive columns list the numbe\nof layers of continuous latent variables, the number of such continuous latent variables per layet\nthe number of deterministic hidden units per layer in the neural network defining each hierarchica\nlayer of the prior, and the use of parameter sharing in the prior. Smaller datasets require mor\nregularization, and achieve optimal performance with a smaller prior.\nOn statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, we further regularize using\nrecurrent parameter sharing. In the simplest case, each p (3m|3:<m,9) and p (a|3,@) is a func-\ntion of rem 3b rather than a function of the concatenation [30,31,---,3m-\u2014i]. Moreover, all\nP(3m>1|31<m,9) share parameters. The RBM layer 39 is rendered compatible with this parame-\nterization by using a trainable linear transformation of \u00a2, M - C: where the number of rows in IZ\nLog likelihood\n\n\u201482\n\n\u201484\n\n\u201486\n\n\u201488\n\n1 1 1 1 1 1 1 1 1\n100 =.200 = 300) 400 ~\u2014 5500 500 1,000 1,500 2,000\nNum hidden units per decoder layer Num hidden units per encoder layer\n\n(a) Prior (b) Approximating posterior\nFigure 9: Log likelihood on statically binarized MNIST versus the number of hidden units per neural\nnetwork layer, in the prior (a) and approximating posterior (b). The number of deterministic hidden\nlayers in the networks parameterizing the prior/approximating posterior is 1 (blue), 2 (red), 3 (green)\nin (a/b), respectively. The number of deterministic hidden layers in the final network parameterizing\np(a|3) is 0 (solid) or 1 (dashed). All models use only 10 layers of continuous latent variables, with\nno parameter sharing.\nMNIST (dyn bin)\nMNIST (static bin)\nOmniglot\nCaltech-101 Sil\n\nNum Vars per Hids per Param\nlayers layer _ prior layer sharing\n18 64 1000 none\n\n20 256 2000 2 groups\n\n16 256 800 2 groups\n\n12 80 100 complete\nOn datasets of intermediate size, a degree of recurrent parameter sharing somewhere between ful\nindependence and complete sharing is beneficial. We define the n group architecture by dividing the\ncontinuous latent layers 3,,>1 into n equally sized groups of consecutive layers. Each such group is\nindependently subject to recurrent parameter sharing analogous to the complete sharing architecture\nand the RBM layer 39 is independently parameterized.\nWe use the spike-and-exponential transformation described in Section 2.1. The exponent is a train-\nable parameter, but it is bounded above by a value that increases linearly with the number of training\nepochs. We use warm-up with strength 20 for 5 epochs, and additional warm-up of strength 2 on the\nRBM alone for 20 epochs (Raiko et al., 2007: Bowman et al., 2016: Sgnderby et al., 2016).\nWhen p(x\\3) is linear, all nonlinear transformations are part of the prior over the latent variables\nIn contrast, it is also possible to define the prior distribution over the continuous latent variable:\nto be a simple factorial distribution, and push the nonlinearity into the final decoder p(x|3), as ir\ntraditional VAEs. The former case can be reduced to something analogous to the latter case using\nthe reparameterization trick.\nHowever, a VAE with a completely independent prior does not regularize the nonlinearity of th\nprior; whereas a hierarchical prior requires that the nonlinearity of the prior (via its effect on th\ntrue posterior) be well-represented by the approximating posterior. 
Viewed another way, a com\npletely independent prior requires the model to consist of many independent sources of variance\nso the data manifold must be fully unfolded into an isotropic ball. A hierarchical prior allows th\ndata manifold to remain curled within a higher-dimensional ambient space, with the approximatin,\nposterior merely tracking its contortions. A higher-dimensional ambient space makes sense whe\nmodeling multiple classes of objects. For instance, the parameters characterizing limb positions ani\norientations for people have no analog for houses."}, {"section_index": "12", "section_name": "H.1 ESTIMATING THE LOG PARTITION FUNCTION", "section_text": "We estimate the log-likelihood by subtracting an estimate of the log partition function of the RBM\n(log Z, from Equation 6) from an importance-weighted computation analogous to that of Burda et al.\n(2016). For this purpose, we estimate the log partition function using bridge sampling, a variant\nof Bennett\u2019s acceptance ratio method (Bennett, 1976; Shirts & Chodera, 2008), which produces\nunbiased estimates of the partition function. Interpolating distributions were of the form p(x)\u2019,\nand sampled with a parallel tempering routine (Swendsen & Wang, 1986). The set of smoothing\nparameters \u00a3 in [0,1] were chosen to approximately equalize replica exchange rates at 0.5. This\nstandard criteria simultaneously keeps mixing times small, and allows for robust inference. We\nmake a conservative estimate for burn-in (0.5 of total run time), and choose the total length of run,\nand number of repeated experiments, to achieve sufficient statistical accuracy in the log partition\nfunction. In Figure 10, we plot the distribution of independent estimations of the log-partition\nfunction for a single model of each dataset. These estimates differ by no more than about 0.1,\nindicating that the estimate of the log-likelihood should be accurate to within about 0.05 nats.\nRather than traditional batch normalization (loffe & Szegedy, 2015), we base our batch normaliza-\ntion on the L1 norm. Specifically, we use:\ny=x-xX\n\nxm =y/ (+6) Os +o\nwhere x is a minibatch of scalar values, X denotes the mean of x, \u00a9 indicates element-wise mul-\ntiplication, \u20ac is a small positive constant, s is a learned scale, and o is a learned offset. For the\napproximating posterior over the RBM units, we bound 2 < s < 3, and \u2014s < o < s. This helps\nensure that all units are both active and inactive in each minibatch, and thus that all units are used.\nFigure 10: Distribution of estimates of the log-partition function, using Bennett\u2019s acceptance ratio\nmethod with parallel tempering, for a single model trained on dynamically binarized MNIST (a),\nstatically binarized MNIST (b), Omniglot (c), and Caltech-101 Silhouettes (d)"}, {"section_index": "13", "section_name": "I COMPARISON MODELS", "section_text": "In Table 1, we compare the performance of the discrete variational autoencoder to a selection o!\nrecent, competitive models. 
For dynamically binarized MNIST, we compare to deep belief network:\n(DBN; Hinton et al., 2006), reporting the results of Murray & Salakhutdinov (2009); importance.\nweighted autoencoders (I[WAE; Burda et al., 2016); and ladder variational autoencoders (Ladder\nVAE; Sgnderby et al., 2016).\nFor the static MNIST binarization of (Salakhutdinov & Murray, 2008), we compare to Hamilto-\nnian variational inference (HVJ; Salimans et al., 2015); the deep recurrent attentive writer (DRAW;\nGregor et al., 2015); the neural adaptive importance sampler with neural autoregressive distribution\nestimator (NAIS NADE; Du et al., 2015); deep latent Gaussian models with normalizing flows (Nor-\nmalizing flows; Rezende & Mohamed, 2015); and the variational Gaussian process (Tran et al.,\n2016).\nFinally, for Caltech-101 Silhouettes, we compare to the importance-weighted autoencoder (IWAE:\nBurda et al., 2016), reporting the results of Li & Turner (2016); reweighted wake-sleep with <\ndeep sigmoid belief network (RWS SBN; Bornschein & Bengio, 2015); the restricted Boltzmanr\nmachine (RBM; Smolensky, 1986), reporting the results of Cho et al. (2013); and the neural adaptive\nimportance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015).\nFraction of estimates\n\nFraction of estimates\n\n1 1\n33.6 33.65 33.7\n\n1 1\n40.1 40.15 40.2\n\n(a) MNIST (dyn bin) (b) MNIST (static bin)\n107? 107?\nFE T | FT 4\nL | | | J L | | | al\n34.1 34.15 34.2 21.1 21.15 21.2\n\nLog partition function estimate\n\n(c) Omniglot\n\nLog partition function estimate\n\n(d) Caltech-101 Silhouettes\nOn Omniglot, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016);\nladder variational autoencoder (Ladder VAE; Sgnderby et al., 2016); and the restricted Boltzmann\nmachine (RBM; Smolensky, 1986) and deep belief network (DBN; Hinton et al., 2006), reporting\nthe results of Burda et al. (2015).\nSkeONMABEOVWWYEFEOUOQKOTIA\nwBNOHONS OSes FOEQSANOTTA\nSaAOMF WoYSFFROCOPO TIA\nWYAHMHVLeEYVYSTPAO OOO ROTIOK\nSaenMSseovvw > ToOVUVIeoSCoFHA\n~S NAM OTNAKRTTFEKRLANAATN-N\nSNSMOFAKBTENOCANAYOH\u2014-~\nANNUM FANPTEREA ANNOY\nSWNYVOMFAKPETEOKRSCARANH~L\nANNAN OH SFARARTSREGIANNHO-Y\nBMH ~ RADA MANN MR BOROND\nPRA~AAKN A MOWANAMREOR- NX>\nSTRAT NRAN AMMAN MXREORA NN\nTRH~ANANAEOMBAMNMDBF ERAN\nCH~~KRANK GMO PBINMAROEB~ NN\nHYOXARNA PATA YM Or HKTTARHMN ME\nVYNKANAAKAHH SOTO KAYVA\u2122 w&\nMWMNNAKATENB MD BTTAHMW YH\nOHREBANMKAAPYUVERETSRY HAO\nMNAARNK KANN eT HKTTKHNO\nMYVYSOOVAHSOHSVOSOHYVVAXG\nBRYYVBVSKOOSQVSASYVHOCOISIDs\u2019\nAYNYVYVSCDIOSCHFDO SHAOC HOY VV\nBIVYVWVSOONSHSCHSHOO OWINVs\nBRYMMDOCONVNQVSCASYVOANDSYHAVVYNY\nMaweNnr~ ~~~ OYMHMt ROBT\nHaAMONy > HHH HM MHP Reso\nHORON~~H\u2014 NHK K- MMM o RIF\nMM w&ONN~H\u2014~~\u2014-\u2014-NMMPYOKTHR\nHO&OAON TSS SH KT MMMTFVT BRITO\n~b wre -GAGMYLAKH-NwENK\u2014NH\n~HON FI -\u2014ANATNAHS BHR MM\n~Wur7rG\u2014-BRABAANMN MH ~NSHe NK MMH\nSb eet H-BGABNANH-K KE HH 1-9\nNh nw SH\u2014-ACrAAN Wwe ne My\nFigure 11: Evolution of samples from a discrete VAE trained on statically binarized MNIST, using\npersistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM\nbetween successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but\nindependent continuous latent variables, and shows the variation induced by the continuous layers\nas opposed to the RBM. 
Vertical sequences in which the digit ID remains constant demonstrate that\nthe RBM has distinct modes, each of which corresponds to a single digit ID, despite being trained\nin a wholly unsupervised manner."}, {"section_index": "14", "section_name": "SUPPLEMENTARY RESULTS", "section_text": "To highlight the contribution of the various components of our generative model, we investigate\nperformance on a selection of simplified models.*! First, we remove the continuous latent layers\nThe resulting prior, depicted in Figure 1b, consists of the bipartite Boltzmann machine (RBM), the\nsmoothing variables \u00a2, and a factorial Bernoulli distribution over the observed variables x defined vis\na deep neural network with a logistic final layer. This probabilistic model achieves a log-likelihooc\nof \u201486.9 with 128 RBM units and \u201485.2 with 200 RBM units.\nWe then remove the lateral connections in the RBM, reducing it to a set of independent binary\nrandom variables. The resulting network is a noisy sigmoid belief network. That is, samples are\nproduced by drawing samples from the independent binary random variables, multiplying by ar\nindependent noise source, and then sampling from the observed variables as in a standard SBN\nWith this SBN-like architecture, the discrete variational autoencoder achieves a log-likelihood o!\n\u201497.0 with 200 binary latent variables.\nFinally, we replace the hierarchical approximating posterior of Figure 3a with the factorial approxi-\nmating posterior of Figure la. This simplification of the approximating posterior, in addition to the\nprior, reduces the log-likelihood to \u2014102.9 with 200 binary latent variables.\n*'Tn all cases, we report the negative log-likelihood on statically binarized MNIST (Salakhutdinov & Mur-\nray, 2008), estimated with 104 importance weighted samples (Burda et al., 2016).\nNext, we further restrict the neural network defining the distribution over the observed variables 2\nziven the smoothing variables \u00a2 to consist of a linear transformation followed by a pointwise logistic\nnonlinearity, analogous to a sigmoid belief network (SBN; Spiegelhalter & Lauritzen, 1990; Neal.\n1992). This decreases the negative log-likelihood to \u201492.7 with 128 RBM units and \u201488.8 with\n200 RBM units.\nFigure 12: Evolution of samples from a discrete VAE trained on Omniglot, using persistent RBM\nMarkov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive\nrows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent\ncontinuous latent variables, and shows the variation induced by the continuous layers as opposed to\nthe RBM.\nFigure 13: Evolution of samples from a discrete VAE trained on Caltech-101 Silhouettes, using\npersistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM\nbetween successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM,\nbut independent continuous latent variables, and shows the variation induced by the continuous\nlayers as opposed to the RBM. 
Vertical sequences in which the silhouette shape remains similar\ndemonstrate that the RBM has distinct modes, each of which corresponds to a single silhouette type,\ndespite being trained in a wholly unsupervised manner.\nCRG TERK OG OMIT KT HET BE\nkete Re mlbg samme r-r\u00ab<g ims\ns.r +s Pee gmworWr wg oo\nIs Rr Ft OSAGaAHwr eo BOAnee\nKPRTY &RPHs QgmKRr - we rd ary\n5 8 PPE OAGTHAFO42UVHME FHA w\nCR FTFereemMOT Awe Dw He Bw\na4 CF runmsFrahp 4s prynrudn\nBaLMEE WME Owe dH Puy wor\naG.Ff@Re lpm Bee OW RD dh\nMPSXartoxrd P Hodge CHBYPsuws\nRaya ee Dr WORK FHOTrISG Nr\u00bb\nWg ter Ger Foo Ur aH? BDMN>\nPast roarerraogr RWP2UABsKA\ncrppr hoe P He Nelr gm a #Ela>\ne4avs ih > he Fords vo page\netaguohK HoRKe Be qe s dere\nstneguSwosereaar dag a-ro\nrazorut geo PYRG As oer\nb;FRA SHR On Gar Fe 29 KANN\nRYO kK SH Yt Ce PREV AZT Y\nSPupEAe De gd. *tGPu cys s fH\n\u201cMunk Pw OF GDB EAGCHYTProW\nGoaOne Pe SivPr CHS KLQarowk\nGyovrt & set Go gawrocwice ws\nlowe xGmPez te rtvyer sare\nReRVEnHVI Rwy FS euwe SUE\nSwRI PRA. Pre pdew~ Bw OF\naoaAanewria Eevw2xAwxroe BERnE\nowr tar SP -aArrrebeo bse 2S\nws wBRe Reet inhi gp DH Bein wt\noHeArMpenresypRirvrshaeNrey\nceiw Dy, Kecyt eH tS aM EN\narPoR: Rwomnmr an Bt gaexerre\nwmit=mA SF DET? BHR eH YE YB\nFigures 11, 12, and 13 repeat the analysis of Figure 5 for statically binarized MNIST, Omniglot,\nand Caltech-101 Silhouettes. Specifically, they show the generative output of a discrete VAE as\nthe Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant\nacross each sub-row of five samples, and variation amongst these samples is due to the layers of\ncontinuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs\nsampling passes through the large, low-probability space between the modes only infrequently. As a\nresult, consistency of the object class over many successive rows in Figures 11, 12, and 13 indicates\nthat the RBM prior has well-separated modes.\nOn statically binarized MNIST, the RBM still learns distinct, separated modes corresponding to\nmost of the different digit types. However, these modes are not as well separated as in dynamically\nbinarized MNIST, as is evident from the more rapid switching between digit types in Figure 11.\nThere are not obvious modes for Omniglot in Figure 12; it is plausible that an RBM with 128 units\ncould not represent enough well-separated modes to capture the large number of distinct character\ntypes in the Omniglot dataset. On Caltech-101 Silhouettes, there may be a mode corresponding to\nlarge. roughly convex blobs."}]
ByG8A7cee
[{"section_index": "0", "section_name": "REFERENCE- AWARE LANGUAGE MODELS", "section_text": "Zichao Yang'*, Phil Blunsom?*, Chris Dyer!?, and Wang Ling\u201d\n1Carnegie Mellon University, 2DeepMind, and *University of Oxford"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Referring expressions (REs) in natural language are noun phrases (proper nouns, common nouns,\nand pronouns) that identify objects, entities, and events in an environment. REs occur frequently\nand they play a key role in communicating information efficiently. While REs are common, previ-\nous works neglect to model REs explicitly, either treating REs as ordinary words in the model o1\nreplacing them with special tokens. Here we propose a language modeling framework that explicitly\nincorporates reference decisions.\nIn Figure [] we list examples of REs in the context of the three tasks that we consider in this work\nFirstly, reference to a database is crucial in many applications. One example is in task orientec\ndialogue where access to a database is necessary to answer a user\u2019s query 2073; IL\netal, 2016; [Vinyals & Le, 2015; Wen et al 2015; Sordoni et al, ZO15; Serban et al, 2016; Borde:\n& Weston) 2016; 2016; 2015; Wen etal, 2014). Here we conside\nthe domain of restaurant recommendation where a system refers to restaurants (name) and thei\nattributes (address, phone number etc) in its responses. When the system says \u201cthe nirala is:\nnice restaurant\u201d, it refers to the restaurant name the nirala from the database. Secondly, man\nmodels need to refer to a list of items (Kiddon_et_al], 2016; Wen ef al, 2015). In the task of recip\ngeneration from a list of ingredients (Kiddon et aL], 2016), the generation of the recipe will frequenth\nreference these items. As shown in Figure [I], in the recipe \u201cBlend soy milkand...\u201d,soy mill\nrefers to the ingredient summaries. Finally, we address references within a document (Mikolov etal.\n2010; i et all, 2015; [Wang & Cho, 2015), as the generation of words will ofter refer to previoush\ngenerated words. For instance the same entity will often be referred to throughout a document. i\nFigure [I the entity you refers to I in a previous utterance.\nIn this work we develop a language model that has a specific module for generating REs. A series of\nlatent decisions (should I generate a RE? If yes, which entity in the context should I refer to? How\nshould the RE be rendered?) augment a traditional recurrent neural network language model and\nthe two components are combined as a mixture model. Selecting an entity in context is similar to\nfamiliar models of attention (Bahdanau etal], 2014), but rather than being a deterministic function\nthat reweights representations of elements in the context, it is treated as a distribution over contextual\nelements which are stochastically selected and then copied or, if the task warrants it, transformed\n(e.g., a pronoun rather than a proper name is produced as output). Two variants are possible for\nupdating the RNN state: one that only looks at the generated output form; and a second that looks\nat values of the latent variables. The former admits trivial unsupervised learning, latent decisions\nare conditionally independent of each other given observed context, whereas the latter enables more"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a general class of language models that treat reference as an explicit\nstochastic latent variable. 
This architecture allows models to create mentions of\nentities and their attributes by accessing external databases (required by, e.g., di-\nalogue generation and recipe generation) and internal state (required by, e.g. lan-\nzuage models which are aware of coreference). This facilitates the incorporation\nof information that can be accessed in predictable locations in databases or dis-\ncourse context, even when the targets of the reference may be rare words. Ex-\nperiments on three tasks show our model variants outperform models based on\ndeterministic attention.\nFigure 1: Reference-aware language models.\nexpressive models that can extract information from the entity that is being referred to. In each of\nthe three tasks, we demonstrate our reference aware model\u2019s efficacy in evaluations against models\nthat do not explicitly include a reference operation.\nWe denote each document as a series of tokens 7 ,...,2 , where L is the number of tokens in th\ndocument. Our goal is to maximize the probabilities p(x; | c;), for each word in the document base\non its previous context c; = \u00a31,...,%;_1. In contrast to traditional neural language models, w\nintroduce a variable at each position z;, which controls the decision on which source x; is generate\nfrom. The token conditional probably is then obtained by:\nIn dialogue modeling and recipe generation, z; will simply taken on values in {0,1}. Where z; = |\ndenotes that x; is generated as a reference, either to a database entry or an item in a list. However\nz, can also be defined as a distribution over previous entities, allowing the model to predict x:\nconditioned on its a previous mention word. This will be the focus of the coreference languag\u00a2\nmodel. When z; is not observed (which it generally will not be), we will train our model to maximiz\u00ab\nthe marginal probability in Eq. [ directly."}, {"section_index": "3", "section_name": "2.1 DIALOGUE MODEL WITH DATABASE SUPPORT", "section_text": "We can observe from this example, users get recommendations of restaurants based on queries\nthat specify the area, price and food type of the restaurant. We can support the system\u2019s decisions\nby incorporating a mechanism that allows the model to query the database allowing the model to\nfind restaurants that satisfy the users queries. Thus, we crawled TripAdvisor for restaurants in the\ndialogue\n\nrecipe\n\nreference\n\nexample\n\nwo\n\nthe\nhirala\n\nmoderate\n\n1 cpu plain soy milk\n\ntable\n\nM: the nirala is a nice restuarant\n\nBlend soy milk and ...\n\nng\ncoreference um and {ly think ... [yo 4 we\ndialogue\n\nrecipe\n\nin\n\nein ied\n\nwo\n\nthe\nhirala\n\nmoderate\n\n1 cpu plain soy milk\n\ntable\n\nM: the nirala is a nice restuarant\n\nBlend soy milk and ...\n\nng\ncoreference i um and {ly think ... [yo 4 we\ne We propose a general framework to model reference in language and instantiate it in the\ncontext of dialogue modeling, recipe generation and coreference based language models.\n\ne We build three data sets to test our models. There lack existing data sets that satisfy out\nneed, so we build these data sets ourselves. 
These data sets are either built on top existing\ndata set (we constructed the table for DSTC2 data set for dialogue evaluation), crawled\nfrom websites (we crawled all recipes in or annotated with\nNLP tools (we annotate the coreference with Gigaword corpus for our evaluation).\n\ne We perform comprehensive evaluation of our models on the three data sets and verify out\nmodels perform better than strong baselines.\np(xi | ci) = p(x; | zi, c1)p(% | ci).\nTable 1: Example dialogue, M stands for Machine and U stands for User\nTable 2: Fragment of database for dialogue system.\nCambridge area, where the dialog dataset was collected. Then, we remove restaurants that do not\nappear in the data set and create a database with 109 entries with restaurants and their attributes (e.g.\nfood type). A sample of our database is shown in Table. 2. We can observe that each restaurant\ncontains 6 attributes that are generally referred in the dialogue dataset. As such, if the user requests\na restaurant that serves \u201cindian\u201d food, we wish to train a model that can search for entries whose\n\u201cfood\u201d column contains \u201cindian\u201d. Now, we describe how we deploy a model that fulfills these\nrequirements.\nFigure 2: Hierarchical RNN Seq2Seq model\nConsider a dialogue with T\u2019 turns, and the utterance from a user is denoted as X = (7; };_,, where\n\ni is the i-th utterance, whereas the utterance from a machine is denoted as Y = {yi}2,, where 7\n\nis the i-th utterance. We define x; = {xj ye Yi = {yin}, where \u00ab;; denotes the j-th toker\n\nin the i-th utterance from the user, whereas y;,, denotes the v-th token in the i-th utterance from\nthe machine. Finally, |2;| and |y;| denote the number of tokens in the user and machine utterances.\nrespectively. The dialogue sequence starts with machine utterance {y1, 21, yo, Yr, cp}. We\nwould like to model the utterances from the machine\nPYL,Y2.-+-,YT|e1, 22, = TI yilveis tei) = [| pyivlyicy: Yi, t<i)\n\niv\nwhere y<; denotes all the utterances before i and y;,<, denotes the first v \u2014 1 tokens in the i-th\nutterance of the machine. A neural model is employed to predict p(yi,v|yi,<v, Y<i;t<i), Which\noperates as follows:\nSentence Encoder: We first encode previous utterances y<; and a <; into continuous space by gen-\nerating employing a LSTM encoder. Thus, for a given utterance x;, and start with the initial LSTM\nstate h?, and apply the recursion h?, = LSTMg(Wex \u00ae 1), where Wpx;,; denotes a word\nSGSG \u00a7\n\nHello , welcome to the Cambridge restaurant system? You can ask for restaurants by\narea,price range or food type. How may I help you?\niwantamoderately priced restaurant in the north town\n\nthe nirala isa nice restaurant in the north of town in the moderate price range\nwhat is the address\n\nSure, the niralaison7 Milton Road Chesterton\nNAME PRICE RANGE FOOD AREA ADDRESS POST CODE PHONE\n\nali baba moderate lebanese | centre | 59 Hills Road City Cen- | CB 2,1NT | 01462 432565\ntre\n\nthe moderate indian north | 7 Milton Road Chester- | CB 4,1 UY | 01223 360966\n\nnirala ton\n7 Milton Road Chester-\nton\nWe build a model based on the hierarchical RNN model described in (Serhan etal, 2016), as in\ndialogues, the generation of the response is not only dependent on the previous sentence, but on all\nsentences leading to the response. We assume that a dialogue is alternated between a machine and a\nuser. 
embedding lookup for the token x_{i,j}, and LSTM_E denotes the LSTM transition function described in Hochreiter & Schmidhuber (1997). The representation of the user utterance is given by the final LSTM state h^x_i = h^x_{i,|x_i|}. The same process is applied to obtain the machine utterance representation h^y_i = h^y_{i,|y_i|}.

Turn Encoder: We then combine the representations of all the utterances with a second LSTM, which encodes the sequence {h^y_1, h^x_1, ..., h^y_i, h^x_i} into a continuous vector. Once again, we start with an initial state u_0 and feed each utterance representation in turn, until the final state is obtained. For simplicity, we refer to this final state as u_i, which can be seen as the hierarchical encoding of the previous i turns.

Seq2Seq Decoder: As for decoding, in order to generate each utterance y_i, we feed u_{i-1} into the decoder LSTM as the initial state s_{i,0} = u_{i-1} and decode each token in y_i. Thus, we can express the decoder as:

s_{i,v} = LSTM_D(W_E y_{i,v-1}, s_{i,v-1}),
p^y_{i,v} = softmax(W s_{i,v}),

where the desired probability p(y_{i,v} | y_{i,<v}, y_{<i}, x_{<i}) is expressed by p^y_{i,v}.

Attention based decoder: We can also incorporate an attention mechanism in our hierarchical model. An attention model builds a representation d by averaging over a set of vectors p. We define the attention function as a = ATTN(p, q), where a is a probability distribution over the set of vectors p, conditioned on an input representation q. A full description of this operation is given in Bahdanau et al. (2014). For each generated token y_{i,v}, we compute the attention a_{i,v} conditioned on the current decoder state s^y_{i,v}, obtaining the attention over the input tokens from the previous turn (i - 1). We denote the vector of all token states in the previous turn as h^{i-1} = [{h^y_{i-1,v}}_{v=1}^{|y_{i-1}|}, {h^x_{i-1,j}}_{j=1}^{|x_{i-1}|}], and let K = |h^{i-1}| be the number of tokens in the previous turn. We obtain the attention probabilities over all previous tokens as a_{i,v} = ATTN(h^{i-1}, s^y_{i,v}), and the weighted sum d_{i,v} = Σ_{k ∈ K} a_{i,v,k} h^{i-1}_k, where a_{i,v,k} is the probability of aligning to the k-th token from the previous turn. The resulting vector d_{i,v} is used to obtain the probability of the following word. Thus, we express the decoder as:

s_{i,v} = LSTM_D([W_E y_{i,v-1}, d_{i,v-1}], s_{i,v-1}),
a_{i,v} = ATTN(h^{i-1}, s_{i,v}),
d_{i,v} = Σ_{k ∈ K} a_{i,v,k} h^{i-1}_k,
p^y_{i,v} = softmax(W [s_{i,v}, d_{i,v}]).
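As a concrete illustration of the hierarchical encoder and the attention based decoder step, here is a minimal sketch in PyTorch. This is not the authors' implementation: the module names (sent_enc, turn_enc, dec_cell), the single-layer attention scorer, and all sizes are our own illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

V, E, H = 1000, 64, 128                      # illustrative vocab/embedding/hidden sizes
emb = nn.Embedding(V, E)
sent_enc = nn.LSTM(E, H, batch_first=True)   # encodes the tokens of one utterance
turn_enc = nn.LSTM(H, H, batch_first=True)   # encodes the sequence of utterance vectors
dec_cell = nn.LSTMCell(E + H, H)             # decoder input is [word embedding; context d]
attn_score = nn.Linear(2 * H, 1)             # simplified attention scorer (our assumption)
out_proj = nn.Linear(2 * H, V)               # p_{i,v} = softmax(W [s_{i,v}, d_{i,v}])

def encode_history(utterances):
    """utterances: list of 1-D LongTensors of token ids, ordered y1, x1, ..., up to turn i-1.
    For brevity we keep token states of only the most recent utterance for attention."""
    finals, last_tokens = [], None
    for u in utterances:
        states, (h_last, _) = sent_enc(emb(u).unsqueeze(0))
        finals.append(h_last[-1])            # h_i: final LSTM state of the utterance
        last_tokens = states.squeeze(0)      # token states of the latest utterance
    _, (u_i, _) = turn_enc(torch.stack(finals, dim=1))
    return u_i[-1], last_tokens              # u_{i-1} (decoder init), previous-turn tokens

def decode_step(prev_word, d_prev, state, prev_turn):
    h, c = dec_cell(torch.cat([emb(prev_word), d_prev], dim=-1), state)   # s_{i,v}
    scores = attn_score(torch.cat([prev_turn, h.expand(prev_turn.size(0), -1)], -1))
    a = F.softmax(scores.squeeze(-1), dim=0)                              # a_{i,v}
    d = (a.unsqueeze(-1) * prev_turn).sum(0, keepdim=True)                # d_{i,v}
    logits = out_proj(torch.cat([h, d], dim=-1))
    return F.log_softmax(logits, dim=-1), d, (h, c)
```

Decoding an utterance then amounts to initializing (h, c) from u_{i-1} and d with zeros, and calling decode_step once per generated token.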
Figure 3: Table based decoder.

We now extend the attention model to allow the attention to be computed over a table, allowing the model to condition the generation on a database. We denote a table with R rows and C columns as {f_{r,c}}, r ∈ [1, R], c ∈ [1, C], where f_{r,c} is the cell in row r and column c. The attribute of each column is denoted as s_c, where s_c is the c-th attribute; f_{r,c} and s_c are one-hot vectors.

Table Encoding: To encode the table, we build an attribute vector g_c for each column, g_c = W_c s_c. For each cell f_{r,c} of the table, we concatenate its embedding with the corresponding attribute vector and feed the pair through a one-layer MLP: e_{r,c} = tanh(W [W_E f_{r,c}, g_c]).

Table Attention: The diagram for table attention is shown in Figure 3(a). The attention over cells in the table is conditioned on a given vector q, similarly to the sequence attention model ATTN(p, q); however, rather than a sequence p, we now operate over a table f. Our attention model computes an attribute attention followed by a row attention over the table. We first use the attention mechanism on the attributes to find out which attribute the user asks about. Suppose a user says cheap; then we should focus on the price attribute. After we obtain the attention probabilities over the attributes, p^c = ATTN({g_c}, q), we calculate a weighted representation of each row, e_r, conditioned on p^c; e_r then carries the price information of each row. We further apply the attention mechanism to {e_r} to obtain the probabilities p^r = ATTN({e_r}, q) over the rows, so that restaurants with a cheap price are picked out. Finally, using the probabilities p^r, we compute a weighted average over all rows for each column, e_c, which is used in the decoder. The detailed process is:

p^c = ATTN({g_c}, q),
e_r = Σ_c p^c_c e_{r,c},
p^r = ATTN({e_r}, q),
e_c = Σ_r p^r_r e_{r,c}.

This is embedded in the decoder by taking the conditioning vector q to be the current decoder state s^y_{i,v} and, at each step, conditioning the prediction of y_{i,v} on {e_c} using the attention mechanism.

"}, {"section_index": "4", "section_name": "2.1.3. INCORPORATING TABLE POINTER NETWORKS", "section_text": "We now describe the mechanism used to refer to specific database entries during decoding. At each timestep, the model needs to decide whether to generate the next token from an entry of the database or from the word softmax. This is performed as follows.

Pointer Switch: We use z_{i,v} ∈ {0, 1} to denote the decision of whether to copy one cell from the table. We compute this probability as follows:

p(z_{i,v} | s_{i,v}) = sigmoid(W [s_{i,v}, d_{i,v}]).

Thus, if z_{i,v} = 1, the next token y_{i,v} is generated from the database, whereas if z_{i,v} = 0, the following token is generated from the softmax. We now describe how we generate tokens from the database.

Table Pointer: If z_{i,v} = 1, the token is generated from the table. The detailed process of calculating the probability distribution over the table is shown in Figure 3(b). This is similar to the attention mechanism, except that we perform a column attention to compute the probabilities of copying from each column after Equation 8. More formally:

p^c = ATTN({e_c}, q),
p^copy = p^r ⊗ p^c,

where p^c is a probability distribution over columns and p^r is a probability distribution over rows. In order to compute a matrix with the probability of copying each cell, we simply compute the outer product p^copy = p^r ⊗ p^c.

Objective: As we treat z_{i,v} as a latent variable, we wish to maximize the marginal probability of y_{i,v} over both possible values of z_{i,v}. Thus, our objective function is defined as:

p(y_{i,v}) = p^v p(0 | s_{i,v}) + p^copy p(1 | s_{i,v}) = p^v (1 - p(1 | s_{i,v})) + p^copy p(1 | s_{i,v}),

where p^v is the probability of the token under the word softmax and p^copy is its probability under the table pointer.
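The two-stage table attention and the marginalization over the copy switch can be sketched as follows. This is our own illustration: a simple bilinear score stands in for the ATTN function, and the names (table_attention, marginal_token_prob, cell_to_token) are hypothetical.

```python
import torch
import torch.nn.functional as F

def table_attention(E, g, q, Wq):
    """Two-stage table attention (a sketch).
    E: (R, C, H) encoded cells e_{r,c}; g: (C, H) attribute vectors g_c;
    q: (H,) conditioning vector; Wq: (H, H) bilinear scoring matrix (our assumption)."""
    p_col = F.softmax(g @ (Wq @ q), dim=0)            # p^c = ATTN({g_c}, q)
    e_row = (p_col.view(1, -1, 1) * E).sum(dim=1)     # e_r = sum_c p^c_c e_{r,c}
    p_row = F.softmax(e_row @ (Wq @ q), dim=0)        # p^r = ATTN({e_r}, q)
    e_col = (p_row.view(-1, 1, 1) * E).sum(dim=0)     # e_c = sum_r p^r_r e_{r,c}
    p_copy = p_row.unsqueeze(1) * p_col.unsqueeze(0)  # p^copy = p^r (outer) p^c, shape (R, C)
    return e_col, p_copy

def marginal_token_prob(p_vocab, p_copy_cells, p_switch, cell_to_token):
    """Marginalize over the latent copy decision z:
    p(y) = p_vocab(y) * (1 - p_switch) + p_copy(y) * p_switch."""
    copy_mass = torch.zeros_like(p_vocab)
    copy_mass.index_add_(0, cell_to_token, p_copy_cells.flatten())
    return p_vocab * (1.0 - p_switch) + copy_mass * p_switch
```

Here cell_to_token maps each flattened cell index to its (delexicalized) vocabulary id, so the copy mass can be accumulated onto the word distribution before taking the marginal.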
The model can also be trained in a fully supervised fashion: if z_{i,v} is observed, we simply maximize the likelihood of p(z_{i,v} | s_{i,v}) based on the observations, rather than using the marginal probability over z_{i,v}.

Next, we consider the task of recipe generation conditioned on an ingredient list. In this task, we must generate the recipe from a list of ingredients. Table 3 illustrates the ingredient list and recipe for Spinach and Banana Power Smoothie. We can see that the ingredients soy milk, spinach leaves, and banana occur in the recipe.

Figure 4: Recipe pointer.
[Figure 4 shows the decoder choosing, at each step, between the word softmax over the vocabulary and a pointer into the encoded ingredient tokens, e.g. copying "soy" while generating "Blend soy milk and ...".]

Let the ingredients of a recipe be X = {x_i}_{i=1}^T, where each ingredient contains L tokens, x_i = {x_{i,j}}_{j=1}^L. The corresponding recipe is y = {y_v}_{v=1}^K. We first use an LSTM to encode each ingredient:

h_{i,j} = LSTM_E(W_E x_{i,j}, h_{i,j-1})  ∀i.

Then, we sum the resulting final states of all ingredients to obtain the starting LSTM state of the decoder. Once again we use an attention based decoder:

s_v = LSTM_D([W_E y_{v-1}, d_{v-1}], s_{v-1}),
p^copy_v = ATTN({{h_{i,j}}_{j=1}^L}_{i=1}^T, s_v),
d_v = Σ_{i,j} p^copy_{v,i,j} h_{i,j},
p(z_v | s_v) = sigmoid(W [s_v, d_v]),
p^vocab_v = softmax(W [s_v, d_v]).

Similar to the previous task, the decision to copy from the ingredient list or to generate a new word from the softmax is made by a switch, denoted p(z_v | s_v). The attention mechanism yields a probability distribution p^copy_v over the words in the ingredients, which is used for copying. For training, we optimize the same marginal likelihood as in the previous task.

Finally, we build a language model that uses coreference links to point to previous words. Before generating a word, we first decide whether it is an entity mention. If so, we decide which entity the mention belongs to, and then generate the word based on that entity. Denote the document as X = {x_i}_{i=1}^L, and the entities as E = {e_i}_{i=1}^N; each entity has M_i mentions, e_i = {m_{i,j}}_{j=1}^{M_i}, such that the tokens {x_{m_{i,j}}}_{j=1}^{M_i} refer to the same entity. We use an LSTM to model the document; the hidden state of each token is h_i = LSTM(W_E x_i, h_{i-1}). We use a set h^e = {h^e_0, h^e_1, ..., h^e_M} to keep track of the entity states, where h^e_i is the state of entity i.

um and [I]1 think that is what's ... Go ahead [Linda]2. Well and thanks goes to [you]1 and to [the media]3 to help [us]4 ... So [our]1 hat is off to all of [you]5 ...

Figure 5: Coreference based language model; example taken from Wiseman et al. (2016).

Word generation: At each time step, before generating the next word, we predict whether the word is an entity mention:

p(x_i | c_i) = p(x_i | h_{i-1}) p(z_i | h_{i-1}, h^e)  if z_i = 0,
p(x_i | c_i) = p(x_i | v_i, h_{i-1}, h^e) p(v_i | h_{i-1}, h^e) p(z_i | h_{i-1}, h^e)  if z_i = 1,

where z_i denotes whether the next word is an entity mention and, if so, v_i denotes which entity the next word corefers to. If the next word is an entity mention, then p(x_i | v_i, h_{i-1}, h^e) = softmax(W_1 tanh(W_2 [h^e_{v_i}, h_{i-1}])); otherwise p(x_i | h_{i-1}) = softmax(W h_{i-1}).

Entity state update: We update the entity state h^e at each time step. In the beginning, h^e = {h^e_0}, where h^e_0 denotes the state of a virtual empty entity and is a learnable variable. If z_i = 1 and v_i = 0, the next word is a new entity mention, so in the next step we append h_i to h^e, i.e., h^e = {h^e, h_i}; if v_i > 0, we update the corresponding entity state with the new hidden state, h^e[v_i] = h_i. Another way to update the entity state is to use an LSTM to encode the mention states and obtain the new entity state; here we use the latest entity mention state as the new entity state for simplicity. The detailed update process is shown in Figure 5.
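The generation marginal of the coreference model can be sketched as below. This is our own reading of the equations above: weight names and shapes are assumptions, and the entity-state bookkeeping (appending or overwriting h^e) is omitted.

```python
import torch
import torch.nn.functional as F

def coref_word_dist(h_prev, entity_states, W_z, W_v, W_x, W_1, W_2):
    """One step of the coreference LM (a sketch; all weight names are ours).
    h_prev: (H,) previous LSTM state. entity_states: (M, H) entity states h^e.
    W_z: (H,), W_v: (H, H), W_x: (V, H), W_2: (H, 2H), W_1: (V, H)."""
    p_z1 = torch.sigmoid(W_z @ h_prev)                       # p(z_i = 1 | h_{i-1}, h^e)
    p_v = F.softmax(entity_states @ (W_v @ h_prev), dim=0)   # p(v_i | h_{i-1}, h^e)
    # p(x_i | v_i, h_{i-1}, h^e) = softmax(W_1 tanh(W_2 [h^e_{v_i}, h_{i-1}])), per entity
    per_entity = torch.stack([
        F.softmax(W_1 @ torch.tanh(W_2 @ torch.cat([e, h_prev])), dim=0)
        for e in entity_states])                             # (M, V)
    p_if_mention = (p_v.unsqueeze(1) * per_entity).sum(0)    # marginalize over v_i
    p_if_word = F.softmax(W_x @ h_prev, dim=0)               # p(x_i | h_{i-1})
    return p_z1 * p_if_mention + (1.0 - p_z1) * p_if_word    # marginal over z_i
```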
Dialogue: We use the DSTC2 data set, from which we only extracted the dialogue transcripts. There are about 3,200 dialogues in total. Since this is a small data set, we use 5-fold cross validation and report the average result over the 5 partitions. There may be multiple tokens in each table cell; for example, in Table 2 the name, address, post code and phone number have multiple tokens, and we replace each of them with one special token. For the name, address, post code and phone number of the j-th row, we replace the tokens in each cell with _NAME_j, _ADDR_j, _POSTCODE_j, _PHONE_j. If a table cell is empty, we replace it with an empty token _EMPTY. We do a string match in the transcripts and replace the corresponding tokens from the table with the special tokens. Each dialogue on average has 8 turns (16 sentences). We use a vocabulary size of 900, including about 400 table tokens and 500 words.

Recipes: We crawl all recipes from www.allrecipes.com. There are about 31,000 recipes in total, and every recipe has an ingredient list and a corresponding recipe. We exclude recipes that have fewer than 10 or more than 500 tokens; these make up about 0.1% of the data set. On average each recipe has 118 tokens and 9 ingredients. We randomly shuffle the whole data set and take 80% for training and 10% each for validation and test. We use a vocabulary size of 10,000 in the model.

Coref LM: We use the Xinhua News data set from Gigaword Fifth Edition and sample 100,000 documents with lengths ranging from 100 to 500 tokens. Each document has on average 234 tokens, so there are 23 million tokens in total. We use a tool to annotate all the entity mentions and use the annotation in training. We take 80% for training and 10% each for validation and test. We ignore entities that have only one mention, and for mentions that have multiple tokens, we take the token that is most frequent across all the mentions of that entity. After preprocessing, tokens that are entity mentions make up about 10% of all tokens. We use a vocabulary size of 50,000 in the model.
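The delexicalization used for the dialogue transcripts amounts to plain string replacement against the table; a minimal sketch (the function name and table representation are ours):

```python
def delexicalize(transcript, table):
    """Replace table strings in a transcript with row-indexed special tokens,
    mirroring the preprocessing described above (a sketch).
    table: list of dicts with keys like 'name', 'address', 'post code', 'phone'."""
    for j, row in enumerate(table):
        for attr, tag in [("name", "_NAME"), ("address", "_ADDR"),
                          ("post code", "_POSTCODE"), ("phone", "_PHONE")]:
            value = row.get(attr, "")
            if value:
                transcript = transcript.replace(value, "%s_%d" % (tag, j))
    return transcript

# Example: "the nirala is on 7 Milton Road Chesterton" -> "the _NAME_1 is on _ADDR_1"
```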
"}, {"section_index": "5", "section_name": "4.1 MODEL TRAINING AND EVALUATION", "section_text": "We train all models with simple stochastic gradient descent with gradient clipping. We use a one-layer LSTM for all RNN components. Hyper-parameters are selected using grid search on the validation set. We use dropout after the input embedding and the LSTM output. The learning rate is selected from [0.1, 0.2, 0.5, 1], the maximum gradient norm from [1, 2, 5, 10], and the dropout ratio from [0.2, 0.3, 0.5]. The batch size and LSTM dimension differ slightly across tasks so as to make the models fit into memory, as does the number of training epochs; we drop the learning rate after a given number of epochs. We report per-word perplexity for all tasks; specifically, we report the perplexity of all words, of words that can be generated from a reference, and of non-reference words. For recipe generation, we also generate the recipe with a beam size of 10 and evaluate the generated recipe with BLEU.

Table 4: Dialogue perplexity results. (All means all tokens, table means tokens from the table, table oov denotes table tokens that do not appear in the training set, word means non-table tokens.) "sentence attn" denotes that we use the attention mechanism over tokens from the past turn. Table pointer and table latent differ in that for table pointer we provide a supervised signal on when to generate a table token, while for table latent it is a latent decision.

model | all | table
seq2seq | 1.35±0.01 | 4.98±0.38
table attn | 1.37±0.01 | 5.09±0.64
table pointer | 1.33±0.01 | 3.99±0.36
table latent | 1.36±0.01 | 4.99±0.20
+ sentence attn:
seq2seq | 1.28±0.01 | 3.31±0.21
table attn | 1.28±0.01 | 3.17±0.21
table pointer | 1.27±0.01 | 2.99±0.19
table latent | 1.28±0.01 | 3.26±0.25

Table 5: Recipe results, evaluated in perplexity and BLEU score. "ing" denotes tokens from the recipe that appear in the ingredients.

model | all (val) | ing (val) | word (val) | BLEU (val) | all (test) | ing (test) | word (test) | BLEU (test)
seq2seq | 5.60 | 11.26 | 5.00 | 14.07 | 5.52 | 11.26 | 4.91 | 14.39
attn | 5.25 | 6.86 | 5.03 | 14.84 | 5.19 | 6.92 | 4.95 | 15.15
pointer | 5.15 | 5.86 | 5.04 | 15.06 | 5.11 | 6.04 | 4.98 | 15.29
latent | 5.02 | 5.10 | 5.01 | 14.87 | 4.97 | 5.19 | 4.94 | 15.41

Table 6: Coreference based LM. "pointer + init" means we initialize the model with the LM weights.

model | all (val) | entity (val) | word (val) | all (test) | entity (test) | word (test)
lm | 33.08 | 44.52 | 32.04 | 33.08 | 43.86 | 32.10
pointer | 32.57 | 32.07 | 32.62 | 32.62 | 32.07 | 32.69
pointer + init | 30.43 | 28.56 | 30.63 | 30.42 | 28.56 | 30.66

"}, {"section_index": "6", "section_name": "5 RELATED WORK", "section_text": "Recently, there has been great progress in modeling language with neural networks, including language modeling (Mikolov et al., 2010; Jozefowicz et al., 2016), machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), and question answering (Hermann et al., 2015). Based on the success of seq2seq models, neural networks have been applied to modeling chit-chat dialogue (Li et al., 2016; Vinyals & Le, 2015; Sordoni et al., 2015; Serban et al., 2016) and task oriented dialogues (Wen et al., 2016; Bordes & Weston, 2016; Williams & Zweig, 2016). Most of the chit-chat neural dialogue models simply apply seq2seq models. For task oriented dialogues, most embed the seq2seq model in a traditional dialogue system in which the table query part is not differentiable, while our model queries the database directly. Recipe generation was proposed in Kiddon et al. (2016); their model extends previous work on attention models (Allamanis et al., 2016) to checklists, whereas our work models explicit references to those checklists. Context dependent language models (Mikolov et al., 2010; Ji et al., 2015) have been proposed to capture long term dependencies of text, and there is also a large body of work on coreference resolution (e.g., Wiseman et al., 2016). To the best of our knowledge, we are the first to combine coreference with language modeling. Much effort has been invested in embedding a copying mechanism in neural models (Gulcehre et al., 2016; Gu et al., 2016; Ling et al., 2016).
In general, a gating mechanism is employed to combine the softmax over observed words with a pointer network (Vinyals et al., 2015). These gates can be trained either by marginalizing over both outcomes or by using heuristics (e.g., copy low frequency words). Our models are similar to the models proposed in Ahn et al. (2016) and Merity et al. (2016), where the generation of each word can be conditioned on a particular entry in a knowledge list and on previous words. In our work, we describe a model with broader applications, allowing us to condition on databases, lists and previous mentions.

The results for dialogue, recipe generation and the coref language model are shown in Tables 4, 5 and 6, respectively. We can see from Table 4 that models conditioned on the table perform better at predicting table tokens in general, and table pointer has the lowest perplexity for tokens in the table. Since table tokens appear rarely in the dialogues, the overall perplexities do not differ much and the non-table-token perplexities are similar. With the attention mechanism over the table, the perplexity of table tokens improves over the basic seq2seq model, but not as much as with directly pointing to cells in the table. As expected, using sentence attention improves significantly over models without sentence attention. Surprisingly, table latent performs much worse than table pointer. We also measure the perplexity of table tokens that appear only in the test set. For models other than table pointer, because these tokens never appear in the training set, the perplexity is quite high, while table pointer can predict them much more accurately. The recipe results in Table 5 generally follow the findings from the dialogue task; however, the latent model performs better than the pointer model, since tokens in the recipe that match the ingredients do not necessarily come from the ingredients. Imposing a supervised signal gives wrong information to the model and hence makes the results worse; with a latent decision, the model learns when to copy and when to generate from the vocabulary. The coref LM results are shown in Table 6. We find that the coref based LM performs much better on entity perplexity, but is slightly worse on non-entity words. We found this to be an optimization problem: the model perhaps gets stuck in a local optimum. We therefore initialize the pointer model with the weights learned by the plain LM, after which the pointer model performs better than the LM on both entity and non-entity word perplexity.

We introduce reference-aware language models which explicitly model the decision of where each token is generated from. Our model can also learn this decision by treating it as a latent variable. We demonstrate on three tasks (table based dialogue modeling, recipe generation, and coref based language modeling) that our model performs better than attention based models, which do not incorporate this decision explicitly. There are several directions to explore further based on our framework. The current evaluation is based on perplexity and BLEU; in task oriented dialogues, we could also use human evaluation to see whether the model answers users' queries accurately. It would also be interesting to use reinforcement learning to learn the decisions at each step.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. A neural knowledge language model. CoRR, abs/1608.00318, 2016.

Antoine Bordes and Jason Weston. Learning end-to-end goal-oriented dialog.
arXiv preprint arXiv:1605.07683, 2016.

Matthew Henderson, Blaise Thomson, and Jason Williams. Dialog state tracking challenge 2 & 3, 2014.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.

Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. Document context language models. arXiv preprint arXiv:1511.03962, 2015.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. In Proc. ACL, 2016.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. In Proc. NAACL, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proc. NIPS, 2015.

Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016.

Jason D Williams and Geoffrey Zweig. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016.

Sam Wiseman, Alexander M Rush, and Stuart M Shieber. Learning global features for coreference resolution. arXiv preprint arXiv:1604.03035, 2016.

Steve Young, Milica Gasic, Blaise Thomson, and Jason D Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179, 2013.

"}, {"section_index": "8", "section_name": "A VISUALIZATION", "section_text": "M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type. How may I help you?
U: and expensive restaurant that serves tuscan food
M: Sorry there is no tuscan restaurant in the expensive price range
U: french food
M: cote is a great restaurant serving french food and it is in the expensive price range
U: may i have the phone number
M: The phone number of cote is 01223 311053.
(a) Dialogue script

(b) Attention heat map: cote is a great restaurant serving french food and it is in the expensive price range.

(c) Attention heat map: The phone number of cote is 01223 311053.

[The heat maps show the table attention over the restaurant database (columns NAME, PRICE RANGE, FOOD, AREA, ADDRESS, POST CODE, PHONE; rows including charlie chan, chiquito restaurant bar, city stop, clowns cafe, cocum, cote, curry garden, curry king and curry prince), with the attention mass concentrated on the row for cote.]

M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type.
How may I help you?
U: cheap restaurant
M: What kind of food would you like?
U: in the west part of town
M: la margherita is a nice restaurant in the west of town in the cheap price range
U: address
M: Sure, la margherita is on 15 Magdalene Street City Centre
U: thank you
M: la margherita is a nice restaurant in the west of town in the cheap price range
U: good bye

(a) Dialogue script

(b) Attention heat map: la margherita is a nice restaurant in the west of town in the cheap price range.

(c) Attention heat map: Sure, la margherita is on 15 Magdalene Street City Centre.

[The heat maps show the table attention over the database rows (india house, j restaurant, jinling noodle bar, kohinoor, kymmoy, la margherita, la mimosa, la raza, la tasca, lan hong house), with the attention concentrated on the la margherita row.]

Figure 6: Recipe heat map example 1. The ingredient tokens appear on the left while the recipe tokens appear on the top. The first row is p(z_v | s_v).

[Panels (a)-(c) plot the copy attention from recipe tokens ("Heat olive oil in a large skillet over medium heat; cook and stir shallots until transparent, about 5 minutes; add spinach, sprinkle with salt and pepper, ...") onto the ingredient tokens ("1 tablespoon olive oil; 1 shallot, diced; 1 (10 ounce) bag baby spinach leaves; kosher salt and freshly ground pepper to taste").]

Figure 7: Recipe heat map example 2.
"}]
HyxQzBceg
[{"section_index": "0", "section_name": "DEEP VARIATIONAL INFORMATION BOTTLENECK", "section_text": "Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy\nfalemi, iansf, jvdillon, kpmurphy}@google.com\nent a variational approximation to the information bottleneck of /Tishby\n. This variational approach allows us to parameterize the informa-\ntion bottleneck model using a neural network and leverage the reparameterization\ntrick for efficient training. We call this method \u201cDeep Variational Information\nBottleneck\u201d, or Deep VIB. We show that models trained with the VIB objective\noutperform those that are trained with other forms of regularization, in terms of\ngeneralization performance and robustness to adversarial attack."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Given the data processing inequality, and the invariance of the mutual information to reparameteriza-\ntions, if this was our only objective we could always ensure a maximally informative representation\nby taking the identity encoding of our data (Z = X), but this is not a useful representation of out\ndata. Instead we would like to find the best representation we can obtain subject to a constraint on\nits complexity. A natural and useful constraint to apply is on the mutual information between out\nencoding and the original data, [(X, Z) < I, where I, is the information constraint. This suggests\nthe objective:\nmax I(Z,Y; 6) s.t. [(X,Z;0) < Ie.\nThe IB principle is appealing, since it defines what we mean by a good representation, in terms of the\nfundamental tradeoff between having a concise representation and one with good predictive power\n(Tishby & Zaslavsky} |2015a). The main drawback of the IB principle is that computing mutual\n\ninformation is, in general, computationally challenging. There are two notable exceptions: the first"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We adopt an information theoretic view of deep networks. We regard the internal representation of\nsome intermediate layer as a stochastic encoding Z of the input source X, defined by a parametric\nencoder p(z|x; 6)[ | Our goal is to learn an encoding that is maximally informative about our target\nY, measured by the mutual information between our encoding and the target J(Z, Y: 8), where\nI(Z,Y;0) = da dj z,y|9) lo, _ Pl 9/8)\n| y D(z, y| oe Cae TaTt\nRrp(@) =1(Z,Y;0) \u2014 BI(Z, X;@).\nHere our goal is to learn an encoding Z that is maximally expressive about Y while being maximally\ncompressive about X, where 6 > 0 controls the tradeoff|*} This approach is known as the informa-\ntion bottleneck (IB), and was first proposed in| (1999). Intuitively, the first term in Rrp\nencourages Z to be predictive of Y; the second term encourages Z to \u201cforget\u201d X. Essentially it\nforces Z to act like a minimal sufficient statistic of X for predicting Y.\n\u2018In this work, X,Y, Z are random variables, x,y,z and x,y,z are instances of random variables, and\nF(-;@) and f(-;@) are functionals or functions parameterized by 0.\n\n> Note that in the present discussion, Y is the ground truth label which is independent of our parameters so\nP(y|9) = ply).\n\n5 Note that, in our notation, large 6 results in a highly compressed representation. 
In some works, the IB\nprinciple i is formulated as the minimization of I(Z, X) \u2014 BI(Z, Y), in which case large \u00a7 corresponds to high\nis when X, Y and Z are all discrete, as in|Tishby et al.|(1999); this can be used to cluster discret\ndata, such as words. The second case is when X, Y and Z are all jointly Gaussian (Chechik et al.\n\n2005). However, these assumptions both severely constrain the class of learnable models.\nIn this paper, we propose to use variational inference to construct a lower bound on the IB objective\nin Equation|3} We call the resulting method VIB (variational information bottleneck). By using the\nreparameterization trick 2014), we can use Monte Carlo sampling to get an\nunbiased estimate of the gradient, and hence we can optimize the objective using stochastic gradient\ndescent. This allows us to use deep neural networks to parameterize our distributions, and thus to\nhandle high-dimensional, continuous data, such as images, avoiding the previous restrictions to the\ndiscrete or Gaussian cases.\nWe also show, by a series of experiments, that stochastic neural networks, fit using our VIB method.\nare robust to overfitting, since VIB finds a representation Z which ignores as many details of the\ninput X as possible. In addition, they are more robust to adversarial inputs than deterministic models\nwhich are fit using (penalized) maximum likelihood estimation. Intuitively this is because each input\nimage gets mapped to a distribution rather than a unique Z, so it is more difficult to pass small.\nidiosyncratic perturbations through the latent bottleneck.\nThe idea of using information theoretic objectives for deep neural networks was pointed out ir\nTishby & Zaslavsky (2015b). However, they did not include any experimental results, since thei\napproach for optimizing the IB objective relied on the iterative Blahut Arimoto algorithm, which is\ninfeasible to apply to deep neural networks.\nVariational inference is a natural way to approximate the problem. Variational bounds on mutual\n\ninformation have previously been explored in though not in conjunction with the\ninformation bottleneck objective. ) also explore variational bounds on\nmutual information, and apply them to deep neural networks, but in the context of reinforcement\nlearning. We recently discovered|Chalk et al.|(2016), who independently developed the same varia-\ntional lower bound on the IB objective as us. However, they apply it to sparse coding problems, and\nuse the kernel trick to achieve nonlinear mappings, whereas we apply it to deep neural networks,\nwhich are computationally more efficient. In addition, we are able to handle large datasets by using\nstochastic gradient descent, whereas they use batch variational EM.\nN\n\nJor = 3 > [H(p(ylyn).Plulen)) ~ BH (lu)\n\nn=1\nfea\n\nwhere H(p,q) = \u2014 7, p(y) log q(y) is the cross entropy, H(p) = H(p,p) is the entropy,\nP(y\\Yn) = dy, (y) is a one-hot encoding of the label y,,, and N is the number of training exam-\nples. (Note that setting 6 = 0 corresponds to the usual maximum likelihood estimate.) In (Pereyra\nthey show that CP performs better than the simpler technique of label smoothing, in\nwhich we replace the zeros in the one-hot encoding of the labels by \u00ab > 0, and then renormalize\nso that the distribution still sums to one. 
We will compare our VIB method to both the confidence penalty method and label smoothing in Section 4.1.

In the unsupervised learning literature, our work is closely related to the work in Kingma & Welling (2014) on variational autoencoders. In fact, their method is a special case of an unsupervised version of the VIB, but with the β parameter fixed at 1.0, as we explain in Appendix B. The VAE objective, but with different values of β, was also explored in Higgins et al. (2016), but from a different perspective.

The method of Wang et al. (2016b) proposes a latent variable generative model of both x and y; their variational lower bound is closely related to ours, with the following differences. First, we do not have a likelihood term for x, since we are in the discriminative setting. Second, they fix β = 1, since they do not consider compression.

Finally, the variational fair autoencoder of Louizos et al. (2016) shares with our paper the idea of ignoring parts of the input. However, in their approach, the user must specify which aspects of the input (the so-called "sensitive" parts) to ignore, whereas in our method, we can discover irrelevant parts of the input automatically.

"}, {"section_index": "3", "section_name": "3. METHOD", "section_text": "Following standard practice in the IB literature, we assume that the joint distribution p(X, Y, Z) factors as follows:

p(X, Y, Z) = p(Z|X, Y) p(Y|X) p(X) = p(Z|X) p(Y|X) p(X),

i.e., we assume p(Z|X, Y) = p(Z|X), corresponding to the Markov chain Y ↔ X ↔ Z. This restriction means that our representation Z cannot depend directly on the labels Y. (This opens the door to unsupervised representation learning, which we discuss in Appendix B.) Besides the structure in the joint data distribution p(X, Y), the only content at this point is our model for the stochastic encoder p(Z|X); all other distributions are fully determined by these and the Markov chain constraint.

Recall that the IB objective has the form I(Z, Y) − β I(Z, X). We will examine each of these expressions in turn. Let us start with I(Z, Y). Writing it out in full, this becomes

I(Z, Y) = ∫ dy dz p(y, z) log [ p(y, z) / ( p(y) p(z) ) ] = ∫ dy dz p(y, z) log [ p(y|z) / p(y) ],

where the conditional p(y|z) is fully defined by our encoder and the Markov chain:

p(y|z) = ∫ dx p(x, y|z) = ∫ dx p(y|x) p(x|z) = ∫ dx p(y|x) p(z|x) p(x) / p(z).

Since this is intractable in our case, let q(y|z) be a variational approximation to p(y|z). This is our decoder, which we will take to be another neural network with its own set of parameters. Using the
Using the\nfact that the Kullback Leibler divergence is always positive, we have\nFocusing on the first term in Equation [11] we can rewrite p(y, z) as p(y,z) = fdxp(a,y,z) =\nf dx p(x)p(y|x)p(z|2) (leveraging our Markov assumption), which gives us a new lower bound on\nthe first term of our objective:\nThis only requires samples from both our joint data distribution as well as samples from our stochas-\ntic encoder, while it requires we have access to a tractable variational approximation in g(y|z).\np(z|x)\n\n1(2,X) < [ dedep(a)p(2\\2) log ra)\nN\nan Wo [ [ ezrlelen log q(Ynl2) \u2014 B p(2|an) Jog ME) .\nSuppose we use an encoder of the form p(z|x) = N(z|f(x), f2(a)), where fe is an MLP which\noutputs both the A\u2019-dimensional mean pu of z as well as the K x K covariance matrix 4. Then we\ncan use the reparameterization trick to write p(z|x)dz = p(e)de, where\nz = f(z, \u20ac) is a deterministic function of z and the Gaussian random variable \u00ab. This formulatior\n\nhas the important advantage that the noise term is independent of the parameters of the model, so it\nis easy to take gradients.\n\nY\nAs in|/Kingma & Welling| (2014), this formulation allows us to directly backpropagate through a\n\nsingle sample of our stochastic code and ensure that our gradient is an unbiased estimate of the true\nexpected gradient|*]"}, {"section_index": "4", "section_name": "4.1 BEHAVIOR ON MNIST", "section_text": "We start with experiments on unmodified MNIST (i.e. no data augmentation). In order to pick a\nmodel with some \u201cheadroom\u201d to improve, we decided to use the same architecture as in the (Pereyra\npaper, namely an MLP with fully connected layers of the form 784 - 1024 -\n\n- 10, and ReLu activations. (Since we are not exploiting spatial information, this correpsonds to\nthe \u201cpermutation invariant\u201d version of MNIST.) The performance of this baseline is 1.38% error.\n\n(Pereyra et al.||2016) were able to improve this to 1.17% using their regularization technique. We\n\nwere able to improve this to 1.13% using our technique, as we explain below.\nIn our method, the stochastic encoder has the form p(z|z) = N(z|f# (x), f\u00a5(x)), where fe is ai\nMLP of the form 784 \u2014 1024 \u2014 1024 \u2014 2K, where K is the size of the bottleneck. The first A\noutputs from f. encode ju, the remaining K outputs encode o (after a softplus transform).\nIn general, while it is fully defined, computing the marginal distribution of Z, p(z) =\nf dx p(z|x)p(x), might be difficult. So let r(z) be a variational approximation to this marginal.\nSince KL[p(Z),r(Z)| > 0 => f dzp(z)logp(z) > f dz p(z) logr(z), we have the following\nupper bound:\n[(Z,Y) \u2014 BI(Z, X) > fe dy dz p(x)p(y|x)p(z|x) log q(y|z)\np(z|z)\n\nr(z)\n\n=L.\n\n=6 f ded pa) p(z|x) log\nAssuming our choice of p(z|z) and r(z) allows computation of an analytic Kullback-Leibler di-\nvergence, we can put everything together to get the following objective function, which we try to\nninimize:\nJig = 1 Bol log q(Yn|f(2n,\u20ac))] + BKL [p(Z|2n),7(Z)] -\n\nNas\nIn this section, we present various experimental results, comparing the behavior of standard deter-\nministic networks to stochastic neural networks trained by optimizing the VIB objective.\n* Even if our choice of encoding distribution and variational prior do not admit an analytic KL, we could\n\nsimilarly reparameterize through a sample of the divergence (Kingma & Welling}, 2014}|Blundell et al. 
In this section, we present various experimental results, comparing the behavior of standard deterministic networks to stochastic neural networks trained by optimizing the VIB objective.

"}, {"section_index": "4", "section_name": "4.1 BEHAVIOR ON MNIST", "section_text": "We start with experiments on unmodified MNIST (i.e., no data augmentation). In order to pick a model with some "headroom" to improve, we decided to use the same architecture as in Pereyra et al. (2016), namely an MLP with fully connected layers of the form 784 - 1024 - 1024 - 10, and ReLU activations. (Since we are not exploiting spatial information, this corresponds to the "permutation invariant" version of MNIST.) The performance of this baseline is 1.38% error. Pereyra et al. (2016) were able to improve this to 1.17% using their regularization technique. We were able to improve this to 1.13% using our technique, as we explain below.

In our method, the stochastic encoder has the form p(z|x) = N(z | f_e^μ(x), f_e^Σ(x)), where f_e is an MLP of the form 784 - 1024 - 1024 - 2K, where K is the size of the bottleneck. The first K outputs from f_e encode μ, and the remaining K outputs encode σ (after a softplus transform).

The decoder is a simple logistic regression model of the form q(y|z) = S(y | f_d(z)), where S(a)_c = exp(a_c) / Σ_{c'} exp(a_{c'}) is the softmax function, and f_d(z) = Wz + b maps the K-dimensional latent code to the logits of the C = 10 classes. (In later sections, we consider more complex decoders, but here we wanted to show the benefits of VIB in a simple setting.)

Finally, we treat r(z) as a fixed K-dimensional spherical Gaussian, r(z) = N(z | 0, I).

We compare our method to the baseline MLP. We also consider the following deterministic limit of our model, when β = 0. In this case, we obtain the following objective function:

J_IB0 = −(1/N) Σ_{n=1}^N E_{z ∼ N(f_e^μ(x_n), f_e^Σ(x_n))} [ log S(y_n | f_d(z)) ].

When β → 0, we observe that the VIB optimization process tends to make f_e^Σ(x) → 0, so the network becomes nearly deterministic. In our experiments we also train an explicitly deterministic model that has the same form as the stochastic model, except that we just use z = f_e^μ(x) as the hidden encoding, and drop the Gaussian layer.

"}, {"section_index": "5", "section_name": "4.1.1 HIGHER DIMENSIONAL EMBEDDING", "section_text": "To demonstrate that our VIB method can achieve competitive classification results, we compared against a deterministic MLP trained with various forms of regularization. We use a K = 256 dimensional bottleneck and a diagonal Gaussian for p(z|x). The networks were trained using TensorFlow for 200 epochs using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0001. Full hyperparameter details can be found in the appendix.

The results are shown in Table 1. We see that we can slightly outperform other forms of regularization that have been proposed in the literature while using the same network for each. Of course, the performance varies depending on β. These results are not state of the art, nor is the main focus of our work to suggest that VIB is the best regularization method by itself, which would require much more experimentation. However, using the same architecture for each experiment and comparing to VIB as the only source of regularization suggests VIB works as a decent regularizer in and of itself. Figure 1(a) plots the train and test error vs β, averaged over 5 trials (with error bars), for the case where we use a single Monte Carlo sample of z when predicting, and also for the case where we average over 12 posterior samples (i.e., we use p(y|x) = (1/S) Σ_{s=1}^S q(y|z^s) for z^s ∼ p(z|x), where S = 12). In our own investigations, a dozen samples seemed to be sufficient to capture any additional benefit the stochastic evaluations had to offer in this experiment.[5]

[5] A dozen samples wasn't chosen for any particular reason, except the old adage that a dozen samples are sufficient, as mirrored in David MacKay's book (MacKay, 2003). They proved sufficient in this case.

Table 1: Test set misclassification rate on permutation-invariant MNIST using K = 256. We compare our method (VIB) to an equivalent deterministic model using various forms of regularization. The discrepancy between our results for confidence penalty and label smoothing and the numbers reported in Pereyra et al. (2016) is due to slightly different hyperparameters.

Model | error
Baseline | 1.38%
Dropout | 1.34%
Dropout (Pereyra et al., 2016) | 1.40%
Confidence Penalty | 1.36%
Confidence Penalty (Pereyra et al., 2016) | 1.17%
Label Smoothing | 1.40%
Label Smoothing (Pereyra et al., 2016) | 1.23%
VIB (β = 10^-3) | 1.13%
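The 12-sample "avg eval" predictive distribution referenced in Table 1 and Figure 1 can be sketched as follows (assuming the VIB sketch above; names are ours):

```python
import torch
import torch.nn.functional as F

def predict_avg(model, x, num_samples=12):
    """Monte Carlo averaged prediction, p(y|x) ~= (1/S) sum_s q(y|z^s), z^s ~ p(z|x)."""
    stats = model.encoder(x)
    mu, sigma = stats[:, :model.K], F.softplus(stats[:, model.K:])
    probs = 0.0
    for _ in range(num_samples):
        z = mu + sigma * torch.randn_like(sigma)
        probs = probs + F.softmax(model.decoder(z), dim=-1)
    return probs / num_samples
```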
We see several interesting properties in Figure 1(a). First, we notice that the error rate shoots up once β rises above the critical value of β ≈ 10^-2. This corresponds to a setting where the mutual information between X and Z is less than log2(10) bits, so the model can no longer represent the fact that there are 10 different classes. Second, we notice that, for small values of β, the test error is higher than the training error, which indicates that we are overfitting. This is because the network learns to be more deterministic, forcing σ → 0, thus reducing the benefits of regularization. Third, we notice that for intermediate values of β, Monte Carlo averaging helps. Interestingly, the region with the best performance roughly corresponds to where the added benefit from stochastic averaging goes away, suggesting an avenue by which one could try to optimize β using purely statistics on the training set without a validation set. We have not extensively studied this possibility yet.

In Figure 1(c), we plot the IB curve, i.e., we plot I(Z, Y) vs I(Z, X) as we vary β. As we allow more information from the input through to the bottleneck (by lowering β), we increase the mutual information between our embedding and the label on the training set, but not necessarily on the test set, as is evident from the plot.

In Figure 1(d) we plot the second term in our objective, the upper bound on the mutual information between the images X and our stochastic encoding Z, which in our case is simply the relative entropy between our encoding and the fixed isotropic unit Gaussian prior. Notice that the y-axis is logarithmic. This demonstrates that our best results (when β is between 10^-3 and 10^-2) occur where the mutual information between the stochastic encoding and the images is on the order of 10 to 100 bits.

Figure 1: Results of VIB model on MNIST. (a) Error rate vs β for K = 256 on train and test sets. "1 shot eval" means a single posterior sample of z; "avg eval" means 12 Monte Carlo samples. The spike in the error rate at β ≈ 10^-2 corresponds to a model that is too highly regularized. Plotted values are the average over 5 independent training runs at each β; error bars show the standard deviation in the results. (b) Same as (a), but for K = 2. Performance is much worse, since we pass through a very narrow bottleneck. (c) I(Z, Y) vs I(Z, X) as we vary β for K = 256. We see that increasing I(Z, X) helps training set performance, but can result in overfitting. (d) I(Z, X) vs β for K = 256. We see that for a good value of β, such as 10^-2, we only need to store about 10 bits of information about the input.

"}, {"section_index": "6", "section_name": "4.1.2 TWO DIMENSIONAL EMBEDDING", "section_text": "To better understand the behavior of our method, we refit our model to MNIST using a K = 2 dimensional bottleneck, but using a full covariance Gaussian. (The neural net predicts the mean and the Cholesky decomposition of the covariance matrix.)
Figure 1(b) shows that, not surprisingly, the classification performance is worse (note the differently scaled axes), but the overall trends are the same as in the K = 256 dimensional case. The IB curve (not shown) also has a similar shape to before, except now the gap between training and testing is even larger.

Figure 2 provides a visualization of what the network is doing. We plot the posteriors p(z|x) as 2d Gaussian ellipses (representing the 95% confidence region) for 1000 images from the test set. Colors correspond to the true class labels. In the background of each plot is the entropy of the variational classifier q(y|z) evaluated at that point.

(a) β = 10^-3, err_mc = 3.18%, err_1 = 3.24%. (b) β = 10^-1, err_mc = 3.44%, err_1 = 4.32%. (c) β = 10^0, err_mc = 33.82%, err_1 = 62.81%.

Figure 2: Visualizing embeddings of 1000 test images in two dimensions. We plot the 95% confidence interval of the Gaussian embedding p(z|x) = N(μ, Σ) as an ellipse. The images are colored according to their true class label. The background greyscale image denotes the entropy of the variational classifier evaluated at each two dimensional location. As β becomes larger, we forget more about the input and the embeddings start to overlap to such a degree that the classes become indistinguishable. We also report the test error using a single sample, err_1, and using 12 Monte Carlo samples, err_mc. For "good" values of β, a single sample suffices.

We see several interesting properties. First, as β increases (so we pass less information through), the embedding covariances increase in relation to the distance between samples, and the classes start to overlap. Second, once β passes a critical value, the encoding "collapses", and essentially all the class information is lost. Third, there is a fair amount of uncertainty in the class predictions (q(y|z)) in the areas between the class embeddings. Fourth, for intermediate values of β (say 10^-1 in Figure 2(b)), predictive performance is still good, even though there is a lot of uncertainty about where any individual image will map to in comparison to other images in the same class. This means it would be difficult for an outside agent to infer which particular instance the model is representing, a property which we will explore more in the following sections.

Szegedy et al. (2013) was the first work to show that deep neural networks (and other kinds of classifiers) can be easily "fooled" into making mistakes by changing their inputs by imperceptibly small amounts. In this section, we will show how training with the VIB objective makes models significantly more robust to such adversarial examples.

Since the initial work by Szegedy et al. (2013) and Goodfellow et al. (2014), many different adversaries have been proposed. Most attacks fall into three broad categories: optimization-based attacks (Szegedy et al., 2013; Carlini & Wagner, 2016; Moosavi-Dezfooli et al.,
2016; Papernot et al., 2015; Robinson & Graham, 2015; Sabour et al., 2016), which directly run an optimizer such as L-BFGS or ADAM (Kingma & Ba, 2015) on the image pixels to find a minimal perturbation that changes the model's classification; single-step gradient-based attacks (Kurakin et al., 2016; Huang et al., 2015), which choose a gradient direction of the image pixels at some loss and then take a single step in that direction; and iterative gradient-based attacks (Kurakin et al., 2016).[6]

Many adversaries can be formalized as either untargeted or targeted variants. An untargeted adversary can be defined as A(X, M) → X', where A(·) is the adversarial function, X is the input image, X' is the adversarial example, and M is the target model. A is considered successful if M(X) ≠ M(X'). Recent work has also shown how to create a "universal" adversarial perturbation δ that can be added to any image X in order to make M(X + δ) ≠ M(X) for a particular target model.

A targeted adversary can be defined as A(X, M, l) → X', where l is an additional target label, and A is only considered successful if M(X') = l.[7] Targeted attacks usually require larger magnitude perturbations, since the adversary cannot just "nudge" the input across the nearest decision boundary, but instead must force it into a desired decision region.

In this work, we focus on the Fast Gradient Sign (FGS) method proposed in Goodfellow et al. (2014) and the L2 optimization method proposed in Carlini & Wagner (2016). FGS is a standard baseline attack that takes a single step in the gradient direction to generate the adversarial example. As originally described, FGS generates untargeted adversarial examples. On MNIST, Goodfellow et al. (2014) reported that FGS could generate adversarial examples that fooled a maxout network approximately 90% of the time with ε = 0.25, where ε is the magnitude of the perturbation at each pixel. The L2 optimization method has been shown to generate adversarial examples with smaller perturbations than any other method published to date, which were capable of fooling the target network 100% of the time. We consider both targeted attacks and untargeted attacks for the L2 optimization method.[8]
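For reference, FGS as described above is a one-step perturbation; a minimal sketch (the helper name is ours, and model is any differentiable classifier returning logits):

```python
import torch

def fgs_attack(model, x, y, epsilon):
    """Fast Gradient Sign attack (Goodfellow et al., 2014): one signed gradient
    step on the input pixels, untargeted (i.e., it increases the loss on label y)."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in a valid range
```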
[6] There are also other adversaries that don't fall as cleanly into those categories, such as "fooling images" from Nguyen et al. (2015), which remove the human perceptual constraint, generating regular geometric patterns or noise patterns that networks confidently classify as natural images; and the idea of generating adversaries by stochastic search for images near the decision boundary of multiple networks from Baluja et al.

[7] Sabour et al. (2016) propose a variant of the targeted attack, A(X_S, M, X_T, k) → X'_S, where X_S is the source image, X_T is a target image, and k is a target layer in the model M. A produces X'_S by minimizing the difference in activations of M at layer k between X_T and X'_S. The end result of this attack for a classification network is still that M(X'_S) yields a target label implicitly specified by X_T in a successful attack.

[8] Carlini & Wagner (2016) shared their code with us, which allowed us to perform the attack with exactly the same parameters they used for their paper, including the maximum number of iterations and maximum C value (see their paper for details).

"}, {"section_index": "7", "section_name": "4.2.2 ADVERSARIAL ROBUSTNESS", "section_text": "There are multiple definitions of adversarial robustness in the literature. The most basic, which we shall use, is accuracy on adversarially perturbed versions of the test set, called adversarial examples.

It is also important to have a measure of the magnitude of the adversarial perturbation. Since adversaries are defined relative to human perception, the ideal measure would explicitly correspond to how easily a human observer would notice the perturbation. In lieu of such a measure, it is common to compute the size of the perturbation using the L0, L1, L2, and L∞ norms (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2016; Sabour et al., 2016). In particular, the L0 norm measures the number of perturbed pixels, the L2 norm measures the Euclidean distance between X and X', and the L∞ norm measures the largest single change to any pixel.

We used the same model architectures as in Section 4.1, using a K = 256 bottleneck. The architectures included a deterministic (base) model trained by MLE; a deterministic model trained with dropout (the dropout rate was chosen on the validation set); and a stochastic model trained with VIB for various values of β.

For the VIB models, we use 12 posterior samples of Z to compute the class label distribution p(y|x). This helps ensure that the adversaries can get a consistent gradient when constructing the perturbation, and that they can get a consistent evaluation when checking if the perturbation was successful
Each point in the plot corresponds to 3 separate executions of three different models trained with the same value of β. All models tested achieve over 98.4% accuracy on the unperturbed MNIST test set, so there is no appreciable measurement distortion due to underlying model accuracy.

Figure 5: Classification accuracy of VIB classifiers, divided by accuracy of baseline classifiers, on FGS-generated adversarial examples as a function of β. Higher is better, and the baseline is always at 1.0. For the FGS adversarial examples, when β = 0 (not shown), the VIB model's performance is almost identical to when β = 10⁻⁸. (a) FGS accuracy normalized by the base deterministic model performance. The base deterministic model's accuracy on the adversarial examples ranges from about 1% when ε = 0.5 to about 5% when ε = 0.35. (b) Same as (a), but with the dropout model as the baseline. The dropout model is more robust than the base model, but less robust than VIB, particularly for stronger adversaries (i.e., larger values of ε). The dropout model's accuracy on the adversarial examples ranges from about 5% when ε = 0.5 to about 16% when ε = 0.35. As in the other results, relative performance is more dramatic as β increases, which seems to indicate that the VIB models are learning to ignore more of the perturbations caused by the FGS method, even though they were not trained on any adversarial examples.

Figure 6 plots the accuracy on L2 optimization adversarial examples of the first 1000 images from the MNIST test set as a function of β. The same sets of three models per β were tested three times, as with the FGS adversarial examples.

We generated both untargeted and targeted adversarial examples for Figure 6. For targeting, we generate a random target label different from the source label in order to avoid biasing the results with unevenly explored source/target pairs. We see that for a reasonably broad range of β values, the VIB models have significantly better accuracy on the adversarial examples than the deterministic models, which have an accuracy of 0% (the L2 optimization attack is very effective on traditional model architectures).

Figure 6: Classification accuracy (from 0 to 1) on L2 adversarial examples (of all classes) as a function of β. The blue line is for targeted attacks, and the green line is for untargeted attacks (which are easier to resist). In this case, the smallest nonzero β tested has performance indistinguishable from β = 0. The deterministic model and dropout model both have a classification accuracy of 0% in both the targeted and untargeted attack scenarios, indicated by the horizontal red dashed line at the bottom of the plot. This is the same accuracy on adversarial examples from this adversary reported in Carlini & Wagner (2016) on a convolutional network trained on MNIST.

Figure 6 also reveals a surprising level of adversarial robustness even when β → 0. This can be explained by the theoretical framework of Fawzi et al. (2016). Their work proves that quadratic classifiers (e.g., x^T A x with symmetric A) have a greater capacity for adversarial robustness than linear classifiers. As we show in Appendix C, our Gaussian/softmax encoder/decoder is approximately quadratic for all β < ∞.
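To make the attack concrete, here is a minimal NumPy sketch of the untargeted FGS step evaluated above. The `loss_grad` callable is a hypothetical helper (any framework's autodiff can supply it), and the [-1, 1] pixel range matches the preprocessing described in Appendix A.

```python
import numpy as np

def fgs_attack(x, y, loss_grad, epsilon=0.25):
    """Untargeted Fast Gradient Sign attack (Goodfellow et al., 2015).

    x         -- input image(s), pixel values assumed to lie in [-1, 1]
    y         -- true label(s) for x
    loss_grad -- hypothetical helper returning d(loss)/d(x) for the
                 target model
    epsilon   -- per-pixel (L-infinity) magnitude of the perturbation
    """
    grad = loss_grad(x, y)
    # Take a single step that increases the loss at x.
    x_adv = x + epsilon * np.sign(grad)
    # Keep the adversarial example inside the valid pixel range.
    return np.clip(x_adv, -1.0, 1.0)
```

For the stochastic VIB classifier, `loss_grad` would average gradients over the 12 posterior samples of Z, matching the evaluation protocol described above.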
VIB improved classification accuracy and adversarial robustness for toy datasets like MNIST. We now investigate if VIB offers similar advantages for ImageNet, a more challenging natural image classification task. Recall that ImageNet has approximately 1M images spanning 1K classes. We preprocess images such that they are 299x299 pixels."}, {"section_index": "9", "section_name": "Architecture", "section_text": "We make use of a publicly available, pretrained checkpoint of Inception Resnet V2 trained on ImageNet (Deng et al., 2009). The checkpoint obtains 80.4% classification accuracy on the ImageNet validation set. Using the checkpoint, we transformed the original training set by applying the pretrained network to each image and extracting the representation at the penultimate layer. This new image representation has 1536 dimensions. The higher layers of the network continue to classify this representation with 80.4% accuracy; conditioned on this extraction, the classification problem is unchanged.
Under this transformation, the experiment regime is identical to the permutation invariant MNIST task. We therefore used a similar model architecture.
Inputs are passed through two fully connected layers, each with 1024 units. Next, data is fed to a stochastic encoding layer; this layer is characterized by a spherical Gaussian with 1024 learned means and standard deviations. The output of the stochastic layer is fed to the variational classifier, itself a logistic regression, for simplicity. All other hyperparameters and training choices are identical to those used in MNIST; more details are in Appendix A."}, {"section_index": "10", "section_name": "Classification", "section_text": "We see the same favorable VIB classification performance in ImageNet as in MNIST. By varying β, the estimated mutual information between encoding and image (I(Z, X)) varies as well. At large values of β accuracy suffers, but at intermediate values we obtain improved performance over both a deterministic baseline and a β = 0 regime. In all cases our accuracy is somewhat lower than the original 80.4% accuracy. This may be a consequence of inadequate training time or suboptimal hyperparameters.

Overall the best accuracy we achieved was using β = 0.01. Under this setting we saw an accuracy of 80.12%, nearly the same as the state-of-the-art unmodified network, but with a substantially smaller information footprint, only I(X, Z) ≈ 45 bits. This is a surprisingly small amount of information; β = 0 implies over 10,000 bits yet only reaches an accuracy of 78.87%. The deterministic baseline, which was the same network but without the VIB loss and with a 1024-unit fully connected linear layer instead of the stochastic embedding, similarly only achieved 78.75% accuracy. We stress that regressions from the achievable 80.4% are likely due to suboptimal hyperparameter settings or inadequate training.

Considering a continuum of β and a deterministic baseline, the best classification accuracy was achieved with β = 0.01 ∈ (0, 1). In other words, VIB offered an accuracy benefit yet used a mere ≈ 45 bits of information from each image."}, {"section_index": "11", "section_name": "Adversarial Robustness", "section_text": "We next show that the VIB-trained network improves resistance to adversarial attack.

We focus on the Carlini targeted L2 attack (see Section 4.2.1). We show results for the VIB-trained network and a deterministic baseline (both on top of precomputed features), as well as for the original pretrained Inception ResNet V2 network itself. The VIB network is more robust to the targeted L2 optimization attack in both magnitude of perturbation and frequency of successful attack.

Figure 7 shows some example images which were all misclassified as "soccer balls" by the deterministic models; by contrast, with the VIB model, only 17 out of 30 of the attacks succeeded in being mislabeled as the target label. (The attacks still often cause the VIB model to misclassify the image, but not to the targeted label. This is a form of "partial" robustness, in that an attacker will have a harder time hitting the target class, but can still disrupt correct function of the network.) We find that the VIB model can resist about 43.3% of the attacks, but the deterministic models always fail (i.e., always misclassify into the targeted label).

Figure 7: The results of our ImageNet targeted L2 optimization attack. In all cases we target a new label of 222 ("soccer ball"). Figure (a) shows the 30 images from the first 40 images in the ImageNet validation set that the VIB network classifies correctly. The class label is shown in green on each image. The predicted label and targeted label are shown in red. Figure (b) shows adversarial examples of the same images generated by attacking our VIB network with β = 0.01. While all of the attacks change the classification of the image, in 13 out of 30 examples the attack fails to hit the intended target class ("soccer ball"). Pink crosses denote cases where the attack failed to force the model to misclassify the image as a soccer ball. Figure (c) shows the same result but for our deterministic baseline operating on the whitened precomputed features. The attack always succeeds. Figure (d) is the same but for the original full Inception ResNet V2 network without modification. The attack always succeeds. There are slight variations in the set of adversarial examples shown for each network because we limited the adversarial search to correctly classified images. In the case of the deterministic baseline and original Inception ResNet V2 network, the perturbations are hardly noticeable in the perturbed images, but in many instances, the perturbations for the VIB network can be perceived.

Figure 8 shows the absolute pixel differences between the perturbed and unperturbed images for the examples in Figure 7. We see that the VIB network requires much larger perturbations in order to fool the classifier, as quantified in Table 2.

Figure 8: Shown are the absolute differences between the original and final perturbed images for all three networks. The left block shows the perturbations created while targeting the VIB network. The middle block shows the perturbations needed for the deterministic baseline using precomputed whitened features. The right block shows the perturbations created for the unmodified Inception ResNet V2 network. The contrast has been increased by the same amount in all three columns to emphasize the difference in the magnitude of the perturbations. The VIB network required much larger perturbations to confuse the classifier, and even then did not achieve the targeted class in 13 of those cases.

Metric              Determ   IRv2    VIB(0.01)
Successful target   1.0      1.0     0.567
L2                  6.45     14.43   43.27
L∞                  0.18     0.44    0.92

Table 2: Quantitative results showing how the different Inception Resnet V2-based architectures (described above) respond to targeted L2 adversarial examples. Determ is the deterministic architecture, IRv2 is the unmodified Inception Resnet V2 architecture, and VIB(0.01) is the VIB architecture with β = 0.01. Successful target is the fraction of adversarial examples that caused the architecture to classify as the target class (soccer ball); lower is better. L2 and L∞ are the average L distances between the original images and the adversarial examples; larger values mean the adversary had to make a larger perturbation to change the class.
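For intuition about the optimization-based attack, the following NumPy sketch performs gradient descent on the pixels of a penalized objective. This is only a simplified variant in the spirit of Carlini & Wagner (2016), not their exact algorithm: it omits their change of variables, the binary search over the penalty constant c, and the confidence margin. The `loss_grad_on_target` helper is a hypothetical stand-in for the model's autodiff.

```python
import numpy as np

def targeted_l2_attack(x, target, loss_grad_on_target, c=1.0,
                       lr=0.01, steps=1000):
    """Simplified targeted L2 optimization attack (a sketch).

    Performs gradient descent on the pixels of
        ||x' - x||_2^2 + c * loss(M(x'), target).

    loss_grad_on_target -- hypothetical helper returning the gradient
                           of the classification loss toward `target`
                           with respect to the current pixels x'
    """
    x_adv = x.astype(np.float64).copy()
    for _ in range(steps):
        # Distance penalty pulls x' toward x; the loss term pushes the
        # classifier's output toward the target label.
        grad = 2.0 * (x_adv - x) + c * loss_grad_on_target(x_adv, target)
        x_adv = np.clip(x_adv - lr * grad, -1.0, 1.0)
    return x_adv
```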
There are many possible directions for future work, including: putting the VIB objective at multiple or every layer of a network; testing on real images; using richer parametric marginal approximations, rather than assuming r(z) = N(0, I); exploring the connections to differential privacy (see e.g., Wang et al., 2016a; Cuff & Yu, 2016); and investigating open universe classification problems (see e.g., Bendale & Boult, 2015). In addition, we would like to explore applications to sequence prediction, where X denotes the past of the sequence and Y the future, while Z is the current representation of the network. This form of the information bottleneck is known as predictive information (Bialek et al., 2001; Palmer et al., 2015)."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Shumeet Baluja, Michele Covell, and Rahul Sukthankar. The virtues of peer pressure: A simple method for discovering high-value mistakes. In Intl. Conf. Computer Analysis of Images and Patterns, 2015.

Abhijit Bendale and Terrance Boult. Towards open world recognition. In CVPR, 2015.

Ryan P. Browne and Paul D. McNicholas. Multivariate sharp quadratic bounds via Σ-strong convexity and the Fenchel connection. Electronic Journal of Statistics, 9, 2015.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. arXiv, 2016.

Matthew Chalk, Olivier Marre, and Gasper Tkacik. Relevant sparse codes with variational information bottleneck. In NIPS, 2016.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pp. 248-255. IEEE, 2009.

Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. In NIPS, 2016.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249-256, 2010.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.

Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvari. Learning with a strong adversary. CoRR, abs/1511.03034, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In ICLR Workshop, 2017.

Shakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsically motivated reinforcement learning. In NIPS, pp. 2125-2133, 2015.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. arXiv, 2016.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In CVPR, 2016.

Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015.

Stephanie E. Palmer, Olivier Marre, Michael J. Berry, and William Bialek. Predictive information in a sensory population. PNAS, 112(22):6908-6913, 2015.

Boris T. Polyak and Anatoli B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.

Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J. Fleet. Adversarial manipulation of deep representations. In ICLR, 2016.

Noam Slonim, Gurinder Singh Atwal, Gašper Tkačik, and William Bialek. Information-based clustering. PNAS, 102(51):18297-18302, 2005.

Weina Wang, Lei Ying, and Junshan Zhang. On the relation between identifiability, differential privacy and mutual-information privacy. IEEE Trans. Inf. Theory, 62:5018-5029, 2016a.

Weiran Wang, Honglak Lee, and Karen Livescu. Deep variational canonical correlation analysis. arXiv [cs.LG], 11 October 2016b. URL https://arxiv.org/abs/1610.03454."}, {"section_index": "13", "section_name": "A HYPERPARAMETERS AND ARCHITECTURE DETAILS FOR EXPERIMENTS", "section_text": "All of the networks for this paper were trained using TensorFlow (Abadi et al., 2016). All weights were initialized using the default TensorFlow Xavier initialization scheme (Glorot & Bengio, 2010), using the averaging fan scaling factor on uniform noise. All biases were initialized to zero. The Adam optimizer (Kingma & Ba, 2015) was used with an initial learning rate of 10⁻⁴ (β₁ = 0.5, β₂ = 0.999) and exponential decay, decaying the learning rate by a factor of 0.97 every 2 epochs. The networks were all trained for 200 epochs total. For the MNIST experiments, a batch size of 100 was used, and the full 60,000-image training and validation set was used for training, with the 10,000 test images for test results. The input images were scaled to have values between -1 and 1 before being fed to the network.

All runs maintained an exponentially weighted average of the parameters during the training run; these averaged parameters were used at test time. This is in the style of Polyak averaging (Polyak & Juditsky, 1992), with a decay constant of 0.999. Our estimates of mutual information were measured in bits.
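As a small illustration of the parameter averaging just described, the update below maintains the exponentially weighted copy of the weights that is used at test time. The function name is ours; the decay of 0.999 matches the text.

```python
def ema_update(avg_params, params, decay=0.999):
    """One step of the Polyak-style averaging described above:
    avg <- decay * avg + (1 - decay) * current.
    Both arguments are lists of NumPy arrays of matching shapes;
    the returned list is the averaged copy used at test time."""
    return [decay * a + (1.0 - decay) * p
            for a, p in zip(avg_params, params)]
```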
For the VIB experiments in all sections, no other form of regularization was used.

For the 1024-dimensional ImageNet embeddings, a sigma bias of 0.57 was used to keep the initial standard deviations near 1, and a batch size of 200 was used.

For the 256-dimensional Gaussian embeddings used in the MNIST experiments, a linear layer of size 512 was used to create the 256 mean values and standard deviations for the embedding. The standard deviations were made to be positive by a softplus transformation with a bias of -5.0, so that they would initially be small:

$$\sigma = \log\big(1 + \exp(x - 5.0)\big).$$

For the 2-dimensional Gaussian embeddings, a linear layer was used with 2 + 4 = 6 outputs, the first two of which were used for the means. The other 4 were reshaped to a 2 × 2 matrix; the diagonal was transformed according to a softplus with a bias of -5.0, and the off-diagonal components were multiplied by 10⁻², while the upper triangular element was dropped, to form the Cholesky decomposition of the covariance matrix.
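To tie these pieces together, here is a NumPy sketch of the Gaussian encoding layer with the softplus standard-deviation parameterization above. The weight matrix W and the function names are placeholders (the actual models were built in TensorFlow), and the 12-sample reparameterized evaluation mirrors the MNIST protocol.

```python
import numpy as np

def softplus(v):
    # Numerically stable log(1 + exp(v)).
    return np.logaddexp(0.0, v)

def stochastic_encode(h, W, num_samples=12, sigma_bias=-5.0):
    """Sketch of the spherical Gaussian encoding layer described above.

    h -- activations of the last deterministic layer, shape (batch, d)
    W -- placeholder weights of a linear layer with 2K outputs, shape
         (d, 2K); the first K outputs are the means, the last K give
         the standard deviations via softplus(. + sigma_bias), i.e.
         sigma = log(1 + exp(x - 5.0)) as above, so stds start small.
    Returns num_samples reparameterized samples, shape (S, batch, K).
    """
    out = h @ W
    K = out.shape[1] // 2
    mu, sigma = out[:, :K], softplus(out[:, K:] + sigma_bias)
    eps = np.random.randn(num_samples, *mu.shape)
    return mu[None] + sigma[None] * eps  # z = mu + sigma * eps
```

At test time, p(y|x) is then the average of the classifier's softmax outputs across the returned samples, as in the adversarial evaluations above.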
The variational bound used for classification also extends to unsupervised learning. Here the aim is to take our data X and maximize the mutual information contained in some encoding Z, while restricting how much information we allow our representation to contain about the identity of each data element in our sample (i):

$$\max_{p(z|x)} \; I(Z, X) - \beta\, I(Z, i).$$

We will form a bound much like we did in the main text. For the first term, we form a variational decoder q(x|z) and take a bound:

$$I(Z, X) = \int dx\, dz\; p(x, z) \log \frac{p(x|z)}{p(x)} \;\ge\; \int dx\, p(x) \int dz\; p(z|x) \log q(x|z).$$

Here we have dropped the entropy in our data H(X) because it is out of our control, and we have used the nonnegativity of the Kullback-Leibler divergence to replace our intractable p(x|z) with a variational decoder q(x|z).

Turning our attention to the second term, note that

$$p(z|i) = \int dx\; p(z|x)\, p(x|i) = \int dx\; p(z|x)\, \delta(x - x_i) = p(z|x_i),$$

so that we can bound our second term from above:

$$I(Z, i) = \sum_i p(i) \int dz\; p(z|i) \log \frac{p(z|i)}{p(z)} = \frac{1}{N} \sum_i \int dz\; p(z|x_i) \log \frac{p(z|x_i)}{p(z)} \;\le\; \frac{1}{N} \sum_i \operatorname{KL}\!\big[p(z|x_i)\,\|\,r(z)\big],$$

where we have replaced the intractable marginal p(z) with a variational marginal r(z).

Putting these two bounds together, our unsupervised information bottleneck objective takes the form

$$I(Z, X) - \beta\, I(Z, i) \;\ge\; \int dx\, p(x) \int dz\; p(z|x) \log q(x|z) \;-\; \frac{\beta}{N} \sum_i \operatorname{KL}\!\big[p(z|x_i)\,\|\,r(z)\big].$$

It is interesting that while this objective takes the same mathematical form as that of a Variational Autoencoder, the interpretation of the objective is very different. In the VAE, the model starts life as a generative model with a defined prior p(z) and stochastic decoder p(x|z) as part of the model, and the encoder q(z|x) is created to serve as a variational approximation to the true posterior p(z|x) = p(x|z)p(z)/p(x). In the VIB approach, the model is originally just the stochastic encoder p(z|x), and the decoder q(x|z) is the variational approximation to the true p(x|z) = p(z|x)p(x)/p(z), and r(z) is the variational approximation to the marginal p(z) = ∫ dx p(x)p(z|x). This difference in interpretation makes natural suggestions for novel directions for improvement.

This precise setup, albeit with a different motivation, was recently explored in Higgins et al. (2017), where they demonstrated that by changing the weight of the variational autoencoder's regularization term, they were able to achieve latent representations that were more capable when it came to zero-shot learning and understanding "objectness". In that work, they motivated their choice to change the relative weightings of the terms in the objective by appealing to notions in neuroscience. Here we demonstrate that appealing to the information bottleneck objective gives a principled motivation and could open the door to better understanding the optimal choice of β and more tools for assessing the importance and tradeoff of both terms.

Beyond the connection to existing variational autoencoder techniques, we note that the unsupervised information bottleneck objective suggests new directions to explore, including targeting the exact marginal p(z) in the regularization term, as well as the opportunity to explore tighter bounds on the first I(Z, X) term that may not require explicit variational reconstruction."}, {"section_index": "14", "section_name": "C QUADRATIC BOUNDS FOR STOCHASTIC LOGISTIC REGRESSION DECODER", "section_text": "Consider the special case when the bottleneck Z is a multivariate Normal, i.e., z|x ~ N(μₓ, Σₓ), where Σₓ is a K × K positive definite matrix. The parameters μₓ, Σₓ can be constructed from a deep neural network, e.g.,

$$\mu_x = \eta(x)_{1:K}, \qquad \operatorname{chol}(\Sigma_x) = \operatorname{diag}\!\big(\log(1 + \exp(\eta(x)_{K+1:2K}))\big) + \operatorname{subtril}\!\big(\eta(x)_{2K+1:K(K+3)/2}\big),$$

where η(x) ∈ R^{K(K+3)/2} is the network output of input x.

This setup (which is identical to our experiments) induces a classifier which is bounded by a quadratic function, which is interesting because the theoretical framework of Fawzi et al. (2016) proves that quadratic classifiers have greater capacity for adversarial robustness than linear functions.

We now derive an approximate bound using a second order Taylor series expansion (TSE). The bound can be made proper via Browne & McNicholas (2015). However, using the TSE is sufficient to sketch the derivation.

Jensen's inequality implies that the negative log-likelihood softmax is upper bounded by:

$$-\log \mathbb{E}\big[S(WZ)\,\big|\,\mu_x, \Sigma_x\big] \;\le\; -\mathbb{E}\big[\log S(WZ)\,\big|\,\mu_x, \Sigma_x\big] \;=\; -W\mu_x + \mathbb{E}\big[\operatorname{lse}(WZ)\,\big|\,\mu_x, \Sigma_x\big].$$

The second order Taylor series expansion of lse is given by:

$$\operatorname{lse}(x + \delta) \;\approx\; \operatorname{lse}(x) + \delta^\top S(x) + \tfrac{1}{2}\, \delta^\top \big[\operatorname{diag}(S(x)) - S(x) S(x)^\top\big]\, \delta.$$

Taking the expectation of the TSE at the mean yields:

$$\mathbb{E}_{\delta \sim \mathcal{N}(0,\, W \Sigma_x W^\top)}\big[\operatorname{lse}(W\mu_x + \delta)\big] \;\approx\; \operatorname{lse}(W\mu_x) + \tfrac{1}{2} \operatorname{tr}\!\big(W \Sigma_x W^\top \operatorname{diag}(S(W\mu_x))\big) - \tfrac{1}{2}\, S(W\mu_x)^\top W \Sigma_x W^\top S(W\mu_x).$$

The second moment was calculated by noting

$$\mathbb{E}\big[\delta^\top B \delta\big] = \mathbb{E}\big[\operatorname{tr}(\delta \delta^\top B)\big] = \operatorname{tr}\big(\mathbb{E}[\delta \delta^\top] B\big) = \operatorname{tr}(\Sigma B).$$

Putting this altogether, we conclude

$$\mathbb{E}\big[S(WZ)\,\big|\,\mu_x, \Sigma_x\big] \;\gtrapprox\; S(W\mu_x)\, \exp\!\Big(-\tfrac{1}{2} \operatorname{tr}\!\big(W \Sigma_x W^\top \operatorname{diag}(S(W\mu_x))\big) + \tfrac{1}{2}\, S(W\mu_x)^\top W \Sigma_x W^\top S(W\mu_x)\Big).$$

As indicated, rather than approximate the lse via TSE, we can make a sharp, quadratic upper bound via Browne & McNicholas (2015). However, this merely changes the S(Wμₓ) scaling in the exponential; the result is still log-quadratic."}]
HycUbvcge
[{"section_index": "0", "section_name": "DEEP GENERALIZED CANONICAL CORREL\nANALYSIS", "section_text": "Adrian Benton, Huda Khayrallah, Biman Gujral,\nDrew Reisinger, Sheng Zhang, Raman Arora\nadrian', huda*, bgujrall*, reisinger\u00ae, zsheng2*, arora!\n*@s+hu.edu, \u00b0@cogsci.jhu.edu, t@cs.jhu.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Multiview representation learning refers to settings where one has access to many \u201cviews\u201d of data,\nat train time. Views often correspond to different modalities or independent information about ex-\namples: a scene represented as a series of audio and image frames, a social media user characterized\nby the messages they post and who they friend, or a speech utterance and the configuration of the\nspeaker\u2019s tongue. Multiview techniques learn a representation of data that captures the sources of\nvariation common to all views.\nMultiview representation techniques are attractive for intuitive reasons. A representation that is able\nto explain many views of the data is more likely to capture meaningful variation than a representation\nthat is a good fit for only one of the views. They are also attractive for the theoretical reasons. For\nexample, (2014) show that certain classes of latent variable models, such as\nHidden Markov Models, Gaussian Mixture Models, and Latent Dirichlet Allocation models, can be\noptimally learned with multiview spectral techniques. Representations learned from many views\nwill generalize better than one, since the learned representations are forced to accurately capture\nvariation in all views at the same time (Sridharan & Kakade}{2008) \u2014 each view acts as a regularizer,\nconstraining the possible representations that can be learned. These methods are often based on\ncanonical correlation analysis (CCA), a classical statisical technique proposed by{Hotelling](1936).\nIn spite of encouraging theoretical guarantees, multiview learning techniques cannot freely mode!\nnonlinear relationships between arbitrarily many views. Either they are able to model variation\nacross many views, but can only learn linear mappings to the shared space (Horst||1961), or they\nsimply cannot be applied to data with more than two views using existing techniques based on kerne\n\nCCA (Hardoon et al.|/2004) and deep CCA (Andrew et al.|/2013)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We present Deep Generalized Canonical Correlation Analysis (DGCCA) - a\nmethod for learning nonlinear transformations of arbitrarily many views of data,\nsuch that the resulting transformations are maximally informative of each other.\nWhile methods for nonlinear two-view representation learning (Deep CCA, (\n\ndrew et al. and linear many-view representation learning (Generalized\nCCA (Horst| )) exist, DGCCA is the first CCA-style multiview representation\n\nlearning technique that combines the flexibility of nonlinear (deep) representation\nlearning with the statistical power of incorporating information from many inde-\npendent sources, or views. We present the DGCCA formulation as well as an\nefficient stochastic optimization algorithm for solving it. We learn DGCCA repre-\nsentations on two distinct datasets for three downstream tasks: phonetic transcrip-\ntion from acoustic and articulatory measurements, and recommending hashtags\nand friends on a dataset of Twitter users. 
We find that DGCCA representations\nsoundly beat existing methods at phonetic transcription and hashtag recommenda-\ntion. and jin veneral nerform no worse than standard linear manv-view techniques.\nHere we present Deep Generalized Canonical Correlation Analysis (DGCCA). Unlike previous\ncorrelation-based multiview techniques, DGCCA learns a shared representation from data with ar-\nbitrarily many views and simultaneously learns nonlinear mappings from each view to this shared\nspace. The only (mild) constraint is that these nonlinear mappings from views to shared space must\nbe differentiable. Our main methodological contribution is the vive (Haat of the gradient update for\nthe Generalized Canonical Correlation Analysis (GCCA) objective (Horst}{1961). As a practical\ncontribution, we have also released an implementation of DGCC. Al\")\nWe also evaluate DGCCA-learned representations on two distinct datasets and three downstrean\ntasks: phonetic transcription from aligned speech and articulatory data, and Twitter hashtag anc\nfriend recommendation from six text and network feature views. We find that downstream perfor\nmance of DGCCA representations is ultimately task-dependent. However, we find clear gains 11\nperformance from DGCCA for tasks previously shown to benefit from representation learning ot\nmore than two views, with up to 4% improvement in heldout accuracy for phonetic transcription.\nThe paper is organized as follows. We review prior work in Section [2] In Section [3] we describe\nDGCCA. Empirical results on a synthetic dataset, and three downstream tasks are presented in\nSection 4] In Section|5] we describe the differences between DGCCA and other non-CCA-based\nmultiview learning work and conclude with future directions in Section|6]"}, {"section_index": "3", "section_name": "2 PRIOR WORK", "section_text": "Some of most successful techniques for multiview representation learning are based on canonical\n\ncorrelation analysis (Wang et al.||2015alb) and its extension to the nonlinear and many view settings,\n\nwhich we describe in this section. For other related multiview learning techniques, see Section|5]\n\u2014 T T uy Ligue\n(uj,u3) = argmax corr(u,; X1,u, X2) = ~\u2014argmax\n\nuy, ER41 uw. ER42 ERGY ,u.ER2 1/Uy 441U Us Vo\n* ak : : T\n(uj, uz) = argmax uy Uy2u2\nuu) Siui=ul Se2ue=1\nDeep CCA (DCCA) is an extension of CCA that addresses the first limitatior\n\nvy finding maximally linearly correlated non-linear transformations of two vectors. It does this by\nyassing each of the input views through stacked non-linear representations and performing CCA o1\nhe outputs.\nLet us use f\\(X1) and f2(X2) to represent the network outputs. The weights, W, and W, of these\nnetworks are trained through standard backpropagation to maximize the CCA objective:\n(uj, ug, Wy, Wy) = argmax corr(u, f1(X1), Ug fo(X2))\n\"See https://bitbucket.org/adrianbenton/dgcca-py3) for implementation of DGCC\n\nalong with data from the svnthetic experiments.\nCanonical correlation analysis (CCA) ) is a statistical method that finds maximally\ncorrelated linear projections of two random vectors and is a fundamental multiview learning tech-\nnique. 
Given two input views, X₁ ∈ R^{d₁} and X₂ ∈ R^{d₂}, with covariance matrices Σ₁₁ and Σ₂₂, respectively, and cross-covariance matrix Σ₁₂, CCA finds directions that maximize the correlation between them:

$$(u_1^*, u_2^*) \;=\; \operatorname*{argmax}_{u_1 \in \mathbb{R}^{d_1},\, u_2 \in \mathbb{R}^{d_2}} \operatorname{corr}\big(u_1^\top X_1,\, u_2^\top X_2\big) \;=\; \operatorname*{argmax}_{u_1,\, u_2} \frac{u_1^\top \Sigma_{12}\, u_2}{\sqrt{u_1^\top \Sigma_{11} u_1}\, \sqrt{u_2^\top \Sigma_{22} u_2}}.$$

Since this objective is invariant to the scaling of u₁ and u₂, it is equivalent to the constrained problem

$$(u_1^*, u_2^*) \;=\; \operatorname*{argmax}_{u_1^\top \Sigma_{11} u_1 \,=\, u_2^\top \Sigma_{22} u_2 \,=\, 1} u_1^\top \Sigma_{12}\, u_2. \qquad (1)$$

This technique has two limitations that have led to significant extensions: First, it is limited to learning representations that are linear transformations of the data in each view, and second, it can only leverage two input views.

Deep CCA (DCCA) (Andrew et al., 2013) is an extension of CCA that addresses the first limitation by finding maximally linearly correlated non-linear transformations of two vectors. It does this by passing each of the input views through stacked non-linear representations and performing CCA on the outputs.

Let us use f₁(X₁) and f₂(X₂) to represent the network outputs. The weights, W₁ and W₂, of these networks are trained through standard backpropagation to maximize the CCA objective:

$$(u_1^*, u_2^*, W_1^*, W_2^*) \;=\; \operatorname*{argmax} \operatorname{corr}\big(u_1^\top f_1(X_1),\, u_2^\top f_2(X_2)\big).$$

¹See https://bitbucket.org/adrianbenton/dgcca-py3 for an implementation of DGCCA along with data from the synthetic experiments.

Another extension of CCA, which addresses the limitation on the number of views, is Generalized CCA (GCCA) (Horst, 1961). It corresponds to solving the optimization problem in Equation (2) of finding a shared representation G of J different views, where N is the number of data points, d_j is the dimensionality of the jth view, r is the dimensionality of the learned representation, and X_j ∈ R^{d_j × N} is the data matrix for the jth view:

$$\operatorname*{minimize}_{U_j \in \mathbb{R}^{d_j \times r},\; G \in \mathbb{R}^{r \times N}} \;\sum_{j=1}^{J} \big\|G - U_j^\top X_j\big\|_F^2 \qquad \text{subject to} \quad GG^\top = I_r. \qquad (2)$$
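For reference, here is a small NumPy sketch of the linear two-view case in Equation (1), via the standard whitening-plus-SVD reduction. The ridge term is a numerical-stability assumption of ours, not part of the formulation above.

```python
import numpy as np

def linear_cca(X1, X2, r=1, reg=1e-4):
    """Top-r canonical directions for Equation (1).
    X1: (d1, N), X2: (d2, N); reg is a small ridge keeping the
    covariance inverses well conditioned (an implementation choice)."""
    X1 = X1 - X1.mean(axis=1, keepdims=True)
    X2 = X2 - X2.mean(axis=1, keepdims=True)
    N = X1.shape[1]
    S11 = X1 @ X1.T / N + reg * np.eye(X1.shape[0])
    S22 = X2 @ X2.T / N + reg * np.eye(X2.shape[0])
    S12 = X1 @ X2.T / N
    # Whiten each view; the singular vectors of the whitened
    # cross-covariance give the canonical directions.
    W1 = np.linalg.inv(np.linalg.cholesky(S11))
    W2 = np.linalg.inv(np.linalg.cholesky(S22))
    A, corrs, Bt = np.linalg.svd(W1 @ S12 @ W2.T)
    U1 = W1.T @ A[:, :r]     # satisfies u1' S11 u1 = 1
    U2 = W2.T @ Bt.T[:, :r]  # satisfies u2' S22 u2 = 1
    return U1, U2, corrs[:r]
```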
"}, {"section_index": "4", "section_name": "3. DEEP GENERALIZED CANONICAL CORRELATION ANALYSIS (DGCCA)", "section_text": "In this section, we present deep GCCA (DGCCA): a multiview representation learning technique that benefits from the expressive power of deep neural networks and can also leverage statistical strength from more than two views in data, unlike Deep CCA which is limited to only two views. More fundamentally, deep CCA and deep GCCA have very different objectives and optimization problems, and it is not immediately clear how to extend deep CCA to more than two views.

DGCCA learns a nonlinear map for each view in order to maximize the correlation between the learnt representations across views. In training, DGCCA passes the input vectors in each view through multiple layers of nonlinear transformations and backpropagates the gradient of the GCCA objective with respect to network parameters to tune each view's network, as illustrated in Figure 1. The objective is to train networks that reduce the GCCA reconstruction error among their outputs. At test time, new data can be projected by feeding them through the learned network for each view.

Figure 1: A schematic of DGCCA with deep networks for J views.

Solving GCCA requires finding an eigendecomposition of an N × N matrix, which scales quadratically with sample size and leads to memory constraints. Unlike CCA and DCCA, which only learn projections or transformations on each of the views, GCCA also learns a view-independent representation G that best reconstructs all of the view-specific representations simultaneously. The key limitation of GCCA is that it can only learn linear transformations of each view.

We now formally define the DGCCA problem. We consider J views in our data, and let X_j ∈ R^{d_j × N} denote the jth input matrix. The network for the jth view consists of K_j layers. Assume, for simplicity, that each layer in the jth view network has c_j units with a final (output) layer of size o_j. The output of the kth layer for the jth view is h_k^j = s(W_k^j h_{k-1}^j), where s : R → R is a nonlinear activation function and W_k^j ∈ R^{c_k × c_{k-1}} is the weight matrix for the kth layer of the jth view network. We denote the output of the final layer as f_j(X_j). DGCCA can then be written as the following optimization problem:

$$\operatorname*{minimize}_{U_j \in \mathbb{R}^{o_j \times r},\; G \in \mathbb{R}^{r \times N}} \;\sum_{j=1}^{J} \big\|G - U_j^\top f_j(X_j)\big\|_F^2 \qquad \text{subject to} \quad GG^\top = I_r, \qquad (3)$$

where G ∈ R^{r × N} is the shared representation we are interested in learning.

Optimization: We solve the DGCCA optimization problem using stochastic gradient descent (SGD) with mini-batches. In particular, we estimate the gradient of the DGCCA objective in Problem 3 on a mini-batch of samples that is mapped through the network and use back-propagation to update the weight matrices, the W_k^j's. However, note that the DGCCA optimization problem is a constrained optimization problem. It is not immediately clear how to perform projected gradient descent with back-propagation. Instead, we characterize the objective function of the GCCA problem at an optimum, and compute its gradient with respect to the inputs to GCCA, i.e., with respect to the network outputs. These gradients are then back-propagated through the network to update the W_k^j's.

Although the relationship between DGCCA and GCCA is analogous to the relationship between DCCA and CCA, derivation of the GCCA objective gradient with respect to the network output layers is non-trivial. The main difficulty stems from the fact that there is no natural extension of the correlation objective to more than two random variables. Instead, we consider correlations between every pair of views, stack them in a J × J matrix and maximize a certain matrix norm for that matrix. For GCCA, this suggests an optimization problem that maximizes the sum of correlations between a shared representation and each view. Since the objective as well as the constraints of the generalized CCA problem are very different from those of the CCA problem, it is not immediately obvious how to extend Deep CCA to Deep GCCA.

Next, we show a sketch of the gradient derivation; the full derivation is given in Appendix A. It is straightforward to show that the solution to the GCCA problem is given by solving an eigenvalue problem. In particular, define C_jj = f_j(X_j) f_j(X_j)^⊤ ∈ R^{o_j × o_j} to be the scaled empirical covariance matrix of the jth network output, and P_j = f_j(X_j)^⊤ C_jj^{-1} f_j(X_j) ∈ R^{N × N} the corresponding projection matrix that whitens the data; note that P_j is symmetric and idempotent. We define M = Σ_{j=1}^J P_j. Since each P_j is positive semi-definite, so is M. Then, it is easy to check that the rows of G are the top r (orthonormal) eigenvectors of M, and U_j = C_jj^{-1} f_j(X_j) G^⊤. Thus, at the minimum of the objective, we can rewrite the reconstruction error as follows:

$$\sum_{j=1}^{J} \big\|G - U_j^\top f_j(X_j)\big\|_F^2 \;=\; \sum_{j=1}^{J} \big\|G - G f_j(X_j)^\top C_{jj}^{-1} f_j(X_j)\big\|_F^2 \;=\; rJ - \operatorname{Tr}\big(GMG^\top\big).$$

Minimizing the GCCA objective (w.r.t. the weights of the neural networks) means maximizing Tr(GMG^⊤), which is the sum of eigenvalues L = Σ_{i=1}^r λ_i(M). Taking the derivative of L with respect to each output layer f_j(X_j), we have:

$$\frac{\partial L}{\partial f_j(X_j)} \;=\; 2\, U_j G - 2\, U_j U_j^\top f_j(X_j).$$

Thus, the gradient is the difference between the r-dimensional auxiliary representation G embedded into the subspace spanned by the columns of U_j (the first term) and the projection of the actual data in f_j(X_j) onto the said subspace (the second term). Intuitively, if the auxiliary representation G is far away from the view-specific representation U_j^⊤ f_j(X_j), then the network weights should receive a large update. Computing the gradient descent update has time complexity O(JNrd), where d = max(d₁, d₂, ..., d_J) is the largest dimensionality of the input views.
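A minimal NumPy sketch of this procedure, assuming mean-centered network outputs: `gcca_solve` implements the eigendecomposition solution described above, and `dgcca_output_grads` the per-view gradients that are then backpropagated through each network. The small ridge on C_jj is our addition for numerical stability.

```python
import numpy as np

def gcca_solve(Ys, r, ridge=1e-8):
    """Solve the GCCA problem in (3) for fixed network outputs.

    Ys -- list of view matrices Y_j = f_j(X_j), each (o_j, N), assumed
          mean-centered. Rows of G are the top-r eigenvectors of
          M = sum_j Y_j' C_jj^{-1} Y_j, and U_j = C_jj^{-1} Y_j G'.
    """
    N = Ys[0].shape[1]
    M = np.zeros((N, N))
    for Y in Ys:
        Cjj = Y @ Y.T + ridge * np.eye(Y.shape[0])
        M += Y.T @ np.linalg.solve(Cjj, Y)      # P_j, symmetric psd
    evals, evecs = np.linalg.eigh(M)            # ascending eigenvalues
    G = evecs[:, -r:][:, ::-1].T                # top-r eigvecs as rows
    Us = [np.linalg.solve(Y @ Y.T + ridge * np.eye(Y.shape[0]), Y @ G.T)
          for Y in Ys]
    return G, Us

def dgcca_output_grads(Ys, G, Us):
    """dL/dY_j = 2 U_j G - 2 U_j U_j' Y_j for each view, as derived."""
    return [2.0 * U @ G - 2.0 * U @ (U.T @ Y) for U, Y in zip(Us, Ys)]
```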
In this section, we apply DGCCA to a small synthetic dataset to show how it preserves the generative structure of data sampled from a multiview mixture model. The data we use for this experiment are plotted in Figure 2. Points that share the same color across different views are sampled from the same mixture component.

Figure 2: Synthetic data used in the Section 4.1 experiments.

Importantly, in each view, there is no linear transformation of the data that separates the two mixture components, in the sense that the generative structure of the data could not be exploited by a linear model. This point is reinforced by Figure 3(a), which shows the two-dimensional representation G learned by applying (linear) GCCA to the data in Figure 2. The learned representation completely loses the structure of the data.

Figure 3: The matrix G learned from applying (a) linear GCCA or (b) DGCCA to the data in Figure 2.

We can contrast the failure of GCCA to preserve structure with the result of applying DGCCA; in this case, the input neural networks had three hidden layers with ten units each, with weights randomly initialized. We plot the representation G learned by DGCCA in Figure 3(b). In this representation, the mixture components are easily separated by a linear classifier; in fact, the structure is largely preserved even after projection onto the first coordinate of G.

It is also illustrative to consider the view-specific representations learned by DGCCA, that is, to consider the outputs of the neural networks that were trained to maximize the GCCA objective. We plot the representations in Figure 4. For each view, we have learned a nonlinear mapping that does remarkably well at making the mixture components linearly separable. Recall that absolutely no direct supervision was given about which mixture component each point was generated from. The only training signals available to the networks were the reconstruction errors between the network outputs and the learned representation G.

Figure 4: Outputs of the trained input neural networks in Section 4.1 applied to the data in Figure 2.
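For readers who want to reproduce a qualitatively similar setup, the sketch below generates one plausible multiview mixture of this kind. The exact generative process behind Figure 2 is not specified in the text, so everything here (ring-plus-blob components, per-view random rotations) is an illustrative assumption.

```python
import numpy as np

def sample_multiview_mixture(n_per_component=200, n_views=3, seed=0):
    """Hypothetical generator for data like Figure 2: every point's
    mixture component is shared across views, but each view embeds the
    components differently, and neither component is linearly
    separable within any single view."""
    rng = np.random.RandomState(seed)
    n = n_per_component
    labels = np.repeat([0, 1], n)
    views = []
    for _ in range(n_views):
        theta = rng.uniform(0.0, 2.0 * np.pi, n)
        # Component 0: a noisy ring; component 1: a blob inside it.
        ring = np.c_[np.cos(theta), np.sin(theta)] + 0.1 * rng.randn(n, 2)
        blob = 0.3 * rng.randn(n, 2)
        X = np.vstack([ring, blob])
        Q, _ = np.linalg.qr(rng.randn(2, 2))  # view-specific rotation
        views.append((X @ Q).T)               # shape (2, N), as in Sec. 3
    return views, labels
```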
In this section, we discuss experiments on the University of Wisconsin X-ray Microbeam Database (XRMB) (Westbury, 1994). XRMB contains acoustic and articulatory recordings as well as phonemic labels. We present phoneme classification results on the acoustic vectors projected using DCCA, GCCA, and DGCCA. We set acoustic and articulatory data as the two views and phoneme labels as the third view for GCCA and DGCCA. For classification, we run K-nearest-neighbor classification on the projected result.

We use the same train/tune/test split of the data as in prior work (2014). To limit experiment runtime, we use a subset of speakers for our experiments. We run a set of cross-speaker experiments using the male speaker JW11 for training and two splits of JW24 for tuning and testing. We also perform parameter tuning for the third view with 5-fold cross validation using a single speaker, JW11. For both experiments, we use acoustic and articulatory measurements as the two views in DCCA. Following the pre-processing in Andrew et al. (2013), we get 273- and 112-dimensional feature vectors for the first and second view, respectively. Each speaker has ~50,000 frames. For the third view in GCCA and DGCCA, we use 39-dimensional one-hot vectors corresponding to the labels for each frame, following prior work (2014).

We use a fixed network size and regularization for the first two views, each containing three hidden layers with sigmoid activation functions. Hidden layers for the acoustic view were all width 1024, and layers in the articulatory view all had width 512 units. L2 penalty constants of 0.0001 and 0.01 were used to train the acoustic and articulatory view networks, respectively. The output layer dimension of each network is set to 30 for DCCA and DGCCA. For the 5-fold speaker-dependent experiments, we performed a grid search for the network sizes in {128, 256, 512, 1024} and covariance matrix regularization in {10⁻², 10⁻⁴, 10⁻⁶, 10⁻⁸} for the third view in each fold. We fix the hyperparameters for these experiments, optimizing the networks with minibatch stochastic gradient descent with a step size of 0.005, batch size of 2000, and no learning decay or momentum. The third view neural network had an L2 penalty of 0.0005."}, {"section_index": "5", "section_name": "4.2.3 RESULTS", "section_text": "As we show in Table 1, DGCCA improves upon both the linear multiview GCCA and the non-linear 2-view DCCA for both the cross-speaker and speaker-dependent cross-validated tasks.

In addition to accuracy, we examine the reconstruction error, i.e., the objective in Equation (3), obtained from GCCA and DGCCA. (For 2-view experiments, correlation is a common metric to compare performance. Since that metric is unavailable in a multiview setting, reconstruction error is the analogue.) This sharp improvement in reconstruction error shows that a non-linear algorithm can better model the data.

In this experimental setup, DCCA under-performs the baseline of simply running KNN on the original acoustic view. Prior work considered the output of DCCA stacked on to the central frame of the original acoustic view (39 dimensions). This poor performance, in the absence of original features, indicates that it was not able to find a more informative projection than original acoustic features based on correlation with the articulatory view within the first 30 dimensions.

Table 1: KNN phoneme classification performance.

                 CROSS-SPEAKER                   SPEAKER-DEPENDENT
METHOD   DEV ACC   TEST ACC   REC ERROR   DEV ACC   TEST ACC   REC ERROR
MFCC     48.89     49.28      --          66.27     66.22      --
DCCA     45.40     46.06      --          65.88     65.81      --
GCCA     49.59     50.18      40.67       69.52     69.78      40.39
DGCCA    53.78     54.22      35.89       72.62     72.33      20.52

To highlight the improvements of DGCCA over GCCA, Figure 5 presents a subset of the confusion matrices on speaker-dependent test data. In particular, we observe large improvements in the classification of D, F, K, SH, V and Y. GCCA outperforms DGCCA for UH and DH. These matrices also highlight the common misclassifications that DGCCA improves upon. For instance, DGCCA rectifies the frequent misclassification of V as P, R and B by GCCA.
In addition, commonly incorrect classification of phonemes such as S and T is corrected by DGCCA, which enables better performance on other voiceless consonants such as F, K and SH. Vowels are classified with almost equal accuracy by both methods.

Figure 5: The confusion matrices for speaker-dependent GCCA and DGCCA.

Linear multiview techniques are effective at recommending hashtags and friends for Twitter users (Benton et al., 2016). In this experiment, six views of a Twitter user were constructed by applying principal component analysis (PCA) to the bag-of-words representations of (1) tweets posted by the ego user, (2) other mentioned users, (3) their friends, and (4) their followers, as well as one-hot encodings of the local (5) friend and (6) follower networks. We learn and evaluate DGCCA models on identical training, development, and test sets as Benton et al. (2016), and evaluate the DGCCA representations on macro precision at 1000 (P@1000) and recall at 1000 (R@1000) for the hashtag and friend recommendation tasks described there.

We trained 40 different DGCCA model architectures, each with identical architectures across views, where the width of the hidden and output layers, c₁ and c₂, for each view are drawn uniformly from [10, 1000], and the auxiliary representation width r is drawn uniformly from [10, c₂]. (We chose to restrict ourselves to a single hidden layer with non-linear activation and identical architectures for each view, so as to avoid a fishing expedition: if DGCCA is appropriate for learning Twitter user representations, then a good architecture should require little exploration.) All networks used ReLUs as activation functions, and were optimized with Adam (Kingma & Ba, 2015) for 200 epochs. (From preliminary experiments, we found that Adam pushed down reconstruction error more quickly than SGD with momentum, and that ReLUs were easier to optimize than sigmoid activations.) Networks were trained on 90% of the 102,328 Twitter users, with the remaining 10% of users used as a tuning set to estimate heldout reconstruction error for model selection. We report development and test results for the best performing model on the downstream task development set. The learning rate was set to 10⁻⁴ with L1 and L2 regularization constants of 0.01 and 0.001 for all weights. (This setting of regularization constants led to low reconstruction error in preliminary experiments.)

Table 2 displays the performance of DGCCA compared to PCA[text+net] (PCA applied to the concatenation of view feature vectors), linear GCCA applied to the four text views, GCCA[text], and to all views, GCCA[text+net], along with a weighted GCCA variant (WGCCA). We learned PCA, GCCA, and WGCCA representations of width r ∈ {10, 20, 50, 100, 200, 300, 400, 500, 750, 1000}, and report the best performing representations on the development set.

Table 2: Dev/test performance at Twitter friend and hashtag recommendation tasks.

                      FRIEND                      HASHTAG
ALGORITHM             P@1000        R@1000        P@1000        R@1000
PCA[TEXT+NET]         0.445/0.439   0.149/0.147   0.011/0.008   0.312/0.290
GCCA[TEXT]            0.244/0.249   0.080/0.081   0.012/0.009   0.351/0.326
GCCA[TEXT+NET]        0.271/0.276   0.088/0.089   0.012/0.010   0.359/0.334
DGCCA[TEXT+NET]       0.297/0.268   0.099/0.090   0.013/0.010   0.385/0.373
WGCCA[TEXT]           0.269/0.279   0.089/0.091   0.012/0.009   0.357/0.325
WGCCA[TEXT+NET]       0.376/0.364   0.123/0.120   0.013/0.009   0.360/0.346

There are several points to note: First is that DGCCA outperforms linear methods at hashtag recommendation by a wide margin in terms of recall.
This is exciting because this task was shown to benefit from incorporating more than just two views from Twitter users. These results suggest that a nonlinear transformation of the input views can yield additional gains in performance. In addition, the WGCCA models sweep over every possible weighting of views with weights in {0, 0.25, 1.0}. WGCCA thus has a distinct advantage in that the model is allowed to discriminatively weight views to maximize downstream performance. The fact that DGCCA is able to outperform WGCCA at hashtag recommendation is encouraging, since WGCCA has much more freedom to discard uninformative views, whereas the DGCCA objective forces networks to minimize reconstruction error equally across all views. As noted in Benton et al. (2016), only the friend network view was useful for learning representations for friend recommendation (corroborated by the performance of PCA applied to the friend network view), so it is unsurprising that DGCCA applied to all views cannot compete with WGCCA representations learned on the single useful friend network view."}, {"section_index": "6", "section_name": "5 OTHER MULTIVIEW LEARNING WORK", "section_text": "There has been strong work outside of CCA-related methods to combine nonlinear representation and learning from multiple views. Xiaowen (2014) elegantly outlines two main approaches these methods take to learn a joint representation from many views: either by 1) explicitly maximizing pairwise similarity/correlation between views or by 2) alternately optimizing a shared, "consensus" representation and view-specific transformations to maximize similarity. Models such as the siamese network proposed by Masci et al. (2014) fall in the former camp, minimizing the squared error between embeddings learned from each view, leading to a quadratic increase in the number of terms in the loss function as the number of views increases. Rajendran et al. (2015) extend Correlational Neural Networks to many views and avoid this quadratic explosion in the loss function by only computing correlation between each view embedding and the embedding of a "pivot" view. Although this model may be appropriate for tasks such as multilingual image captioning, there are many datasets where there is no clear method of choosing a pivot view. The DGCCA objective does not suffer from this quadratic increase w.r.t. the number of views, nor does it require a privileged pivot view, since the shared representation is learned from the per-view representations.

Approaches that estimate a "consensus" representation, such as the multiview spectral clustering approach in Kumar et al. (2011), typically do so by an alternating optimization scheme which depends on a strong initialization to avoid bad local optima. The GCCA objective our work builds on is particularly attractive, since it admits a globally optimal solution for both the view-specific projections U₁, ..., U_J and the shared representation G by singular value decomposition of a single matrix: a sum of the per-view projection matrices. Local optima arise in the DGCCA objective only because we are also learning nonlinear transformations of the input views. Nonlinear multiview methods often avoid learning these nonlinear transformations by assuming that a kernel or graph Laplacian (e.g.
in multiview clustering) is given (Kumar et al., 2011; Sharma et al., 2012)."}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "We present DGCCA, a method for non-linear multiview representation learning from an arbitrary number of views. We show that DGCCA clearly outperforms prior work when using labels as a third view, and can exploit multiple views to learn user representations useful for downstream tasks such as hashtag recommendation for Twitter users. To date, CCA-style multiview learning techniques were either restricted to learning representations from no more than two views, or to strictly linear transformations of the input views. This work overcomes these limitations.

(The performance of WGCCA suffers compared to PCA because whitening the friend network data ignores the fact that the spectrum of the data decays quickly with a long tail; the first few principal components made up a large portion of the variance in the data, but it was also important to compare users based on other components.)"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Abhishek Kumar, Piyush Rai, and Hal Daumé. Co-regularized multi-view spectral clustering. In NIPS, 2011.

Jonathan Masci, Michael M. Bronstein, Alexander M. Bronstein, and Jürgen Schmidhuber. Multimodal similarity-preserving hashing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(4):824-830, 2014.

Kaare Petersen and Michael Pedersen. The matrix cookbook, Nov 2012. Version 20121115.

Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, and Balaraman Ravindran. Bridge correlational neural networks for multilingual multimodal representation learning. arXiv preprint arXiv:1510.03519, 2015.

Abhishek Sharma, Abhishek Kumar, Hal Daumé, and David W. Jacobs. Generalized multiview analysis: A discriminative latent space. In Computer Vision and Pattern Recognition (CVPR), pp. 2160-2167. IEEE, 2012.

Karthik Sridharan and Sham M. Kakade. An information theoretic framework for multi-view learning. In Proceedings of COLT, 2008.

Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. Unsupervised learning of acoustic features via deep canonical correlation analysis. In Proc. of the IEEE Int. Conf. Acoustics, Speech and Sig. Proc. (ICASSP'15), 2015a.

Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In Proc. of the 32nd Int. Conf. Machine Learning (ICML 2015), 2015b.

Weiran Wang, Raman Arora, Karen Livescu, and Nathan Srebro. Stochastic optimization for deep CCA via nonlinear orthogonal iterations. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control and Computing (ALLERTON), 2015c.

John R. Westbury. X-ray microbeam speech production database user's handbook. Waisman Center on Mental Retardation & Human Development, University of Wisconsin, Madison, WI 53705-2280, 1994.

Dong Xiaowen. Multi-View Signal Processing and Learning on Graphs. PhD thesis, École Polytechnique Fédérale de Lausanne, 2014."}, {"section_index": "9", "section_name": "APPENDIX A DERIVING THE GCCA OBJECTIVE GRADIENT", "section_text": "In order to train the neural networks in DGCCA, we need to compute the gradient of the GCCA objective with respect to any one of its input views. This gradient can then be backpropagated through the input networks to derive updates for the network weights.

Let N be the number of data points and J the number of views. Let Y_j ∈ R^{c_j × N} be the data matrix representing the output of the jth neural network, i.e.
$Y_j = f_j(X_j)$, where $c_j$ is the number of neurons in the output layer of the $j$th network. Then GCCA can be written as the following optimization problem, where $r$ is the dimensionality of the learned auxiliary representation:

$$\min_{U_j \in \mathbb{R}^{c_j \times r},\; G \in \mathbb{R}^{r \times N}} \; \sum_{j=1}^{J} \|G - U_j^\top Y_j\|_F^2 \quad \text{subject to} \quad GG^\top = I_r.$$

It can be shown that the solution is found by solving a certain eigenvalue problem. In particular, define $C_{jj} = Y_j Y_j^\top \in \mathbb{R}^{c_j \times c_j}$, $P_j = Y_j^\top C_{jj}^{-1} Y_j$ (note that $P_j$ is symmetric and idempotent), and $M = \sum_{j=1}^{J} P_j$ (since each $P_j$ is psd, so is $M$). Then the rows of $G$ are the top $r$ (orthonormal) eigenvectors of $M$, and $U_j = C_{jj}^{-1} Y_j G^\top$. Thus, at the minimum of the objective, we can rewrite the reconstruction error as follows:

$$\sum_{j=1}^{J} \|G - U_j^\top Y_j\|_F^2 = \sum_{j=1}^{J} \|G - G Y_j^\top C_{jj}^{-1} Y_j\|_F^2 = \sum_{j=1}^{J} \|G (I_N - P_j)\|_F^2 = \sum_{j=1}^{J} \operatorname{Tr}\!\left(G (I_N - P_j) G^\top\right) = rJ - \operatorname{Tr}(G M G^\top).$$

Note that we can write the rank-1 decomposition of $M$ as $\sum_{k=1}^{N} \lambda_k g_k g_k^\top$. Furthermore, since the $k$th row of $G$ is $g_k^\top$, the matrix product $G g_k$ equals the standard basis vector $e_k$ for $k \le r$ (and the zero vector otherwise), so

$$G M G^\top = \sum_{k=1}^{N} \lambda_k (G g_k)(G g_k)^\top = \sum_{k=1}^{r} \lambda_k e_k e_k^\top.$$

But this is just an $r \times r$ diagonal matrix containing the top $r$ eigenvalues of $M$, so we can write the GCCA objective as

$$\operatorname{Tr}(G M G^\top) = \sum_{i=1}^{r} \lambda_i(M).$$

Thus, minimizing the GCCA objective (w.r.t. the weights of the neural nets) means maximizing the sum of eigenvalues $\sum_{i=1}^{r} \lambda_i(M)$, which we will henceforth denote by $L$. By the chain rule,

$$\frac{\partial L}{\partial (Y_j)_{ab}} = \sum_{c,d=1}^{N} \frac{\partial L}{\partial M_{cd}} \frac{\partial M_{cd}}{\partial (Y_j)_{ab}}, \qquad \frac{\partial L}{\partial M_{cd}} = \sum_{k=1}^{r} (g_k)_c (g_k)_d = (G^\top G)_{cd}.$$

Since $M = \sum_{j'=1}^{J} P_{j'}$, and since the only projection matrix that depends on $Y_j$ is $P_j$, $\frac{\partial M_{cd}}{\partial (Y_j)_{ab}} = \frac{\partial (P_j)_{cd}}{\partial (Y_j)_{ab}}$. Since $P_j = Y_j^\top C_{jj}^{-1} Y_j$,

$$(P_j)_{cd} = \sum_{k,\ell=1}^{c_j} (Y_j)_{kc} (C_{jj}^{-1})_{k\ell} (Y_j)_{\ell d}.$$

Thus, by the product rule,

$$\frac{\partial (P_j)_{cd}}{\partial (Y_j)_{ab}} = \delta_{cb} (C_{jj}^{-1} Y_j)_{ad} + \delta_{db} (C_{jj}^{-1} Y_j)_{ac} + \sum_{k,\ell=1}^{c_j} (Y_j)_{kc} (Y_j)_{\ell d} \frac{\partial (C_{jj}^{-1})_{k\ell}}{\partial (Y_j)_{ab}}.$$

The derivative in the last term can also be computed using the chain rule:

$$\frac{\partial (C_{jj}^{-1})_{k\ell}}{\partial (Y_j)_{ab}} = \sum_{m,n=1}^{c_j} \frac{\partial (C_{jj}^{-1})_{k\ell}}{\partial (C_{jj})_{mn}} \frac{\partial (C_{jj})_{mn}}{\partial (Y_j)_{ab}} = -\sum_{m,n=1}^{c_j} (C_{jj}^{-1})_{km} (C_{jj}^{-1})_{n\ell} \left[\delta_{am} (Y_j)_{nb} + \delta_{an} (Y_j)_{mb}\right] = -(C_{jj}^{-1})_{ka} (C_{jj}^{-1} Y_j)_{\ell b} - (C_{jj}^{-1})_{a\ell} (C_{jj}^{-1} Y_j)_{kb}.$$

Substituting this into the expression for $\frac{\partial (P_j)_{cd}}{\partial (Y_j)_{ab}}$ and simplifying matrix products, we find that

$$\frac{\partial (P_j)_{cd}}{\partial (Y_j)_{ab}} = \delta_{cb} (C_{jj}^{-1} Y_j)_{ad} + \delta_{db} (C_{jj}^{-1} Y_j)_{ac} - (C_{jj}^{-1} Y_j)_{ac} (Y_j^\top C_{jj}^{-1} Y_j)_{bd} - (C_{jj}^{-1} Y_j)_{ad} (Y_j^\top C_{jj}^{-1} Y_j)_{bc} = (I_N - P_j)_{cb} (C_{jj}^{-1} Y_j)_{ad} + (I_N - P_j)_{db} (C_{jj}^{-1} Y_j)_{ac}.$$

Finally, substituting this into our expression for $\frac{\partial L}{\partial (Y_j)_{ab}}$, we find that

$$\frac{\partial L}{\partial (Y_j)_{ab}} = \sum_{c,d=1}^{N} (G^\top G)_{cd} \left[(I_N - P_j)_{cb} (C_{jj}^{-1} Y_j)_{ad} + (I_N - P_j)_{db} (C_{jj}^{-1} Y_j)_{ac}\right] = 2 \left[C_{jj}^{-1} Y_j G^\top G (I_N - P_j)\right]_{ab}.$$

But recall that $U_j = C_{jj}^{-1} Y_j G^\top$.
Using this, the gradient simplifies as follows:

$$\frac{\partial L}{\partial Y_j} = 2 U_j G - 2 U_j U_j^\top Y_j.$$

Thus, the gradient is the difference between the $r$-dimensional auxiliary representation $G$ embedded into the subspace spanned by the columns of $U_j$ (the first term) and the projection of the network outputs in $Y_j = f_j(X_j)$ onto said subspace (the second term). Intuitively, if the auxiliary representation $G$ is far away from the view-specific representation $U_j^\top f_j(X_j)$, then the network weights should receive a large update."}, {"section_index": "10", "section_name": "APPENDIX B DGCCA OPTIMIZATION PSEUDOCODE", "section_text": "Algorithm 1 Deep Generalized CCA

Input: multiview data X_1, X_2, ..., X_J; number of iterations T; learning rate eta
Output: O_1, O_2, ..., O_J

Initialize weights W_1, W_2, ..., W_J
for iteration t = 1, 2, ..., T do
  for each view j = 1, 2, ..., J do
    O_j <- forward pass of X_j with weights W_j
    mean-center O_j
  end for
  U_1, ..., U_J, G <- gcca(O_1, ..., O_J)
  for each view j = 1, 2, ..., J do
    dF/dO_j <- U_j U_j^T O_j - U_j G
    grad W_j <- backprop(dF/dO_j, W_j)
    W_j <- W_j - eta * grad W_j
  end for
end for
for each view j = 1, 2, ..., J do
  O_j <- forward pass of X_j with weights W_j
  mean-center O_j
end for
U_1, ..., U_J, G <- gcca(O_1, ..., O_J)
for each view j = 1, ..., J do
  O_j <- U_j^T O_j
end for

Algorithm 1 contains the pseudocode for the DGCCA optimization algorithm. In practice we use stochastic optimization with minibatches, following Wang et al. (2015c)."}, {"section_index": "11", "section_name": "APPENDIX C RECONSTRUCTION ERROR AND DOWNSTREAM PERFORMANCE", "section_text": "[Figure 6: Tuning reconstruction error (tune_err, log scale) against Recall at 1000 for the hashtag prediction task. Each point corresponds to a different setting of hyperparameters.]

CCA methods are typically evaluated intrinsically by the amount of correlation captured, or by reconstruction error. These measures depend on the width of the shared embeddings and view-specific output layers, and do not necessarily predict downstream performance. Although reconstruction error cannot solely be relied on for model selection for a downstream task, we found that it was useful as a signal to weed out very poor models. Figure 6 shows the reconstruction error against hashtag prediction Recall at 1000 for an initial grid search of DGCCA hyperparameters. Models whose tuning reconstruction error falls at the high end of the range in Figure 6 can safely be ignored, while there is some variability in the performance of models achieving lower error.

Since a DGCCA model with high reconstruction error suggests that the views do not agree with each other at all, it makes sense that the shared embedding will likely be noisy, whereas a relatively low reconstruction error suggests that the transformed views have converged to a stable solution."}]
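For concreteness, the gcca(.) step and the gradient above can be written in a few lines of NumPy. This is a minimal sketch under our own naming (the function names and the small ridge term are ours, not the authors'); it forms the N x N matrix M explicitly, which is only feasible for modest N, whereas the actual experiments use minibatched stochastic optimization following Wang et al. (2015c):

    import numpy as np

    def gcca(views, r, eps=1e-8):
        # views: list of mean-centered (c_j x N) outputs O_j of the view networks.
        # Returns per-view projections U_j (c_j x r) and the shared G (r x N).
        N = views[0].shape[1]
        M = np.zeros((N, N))
        inv_covs = []
        for O in views:
            C_inv = np.linalg.inv(O @ O.T + eps * np.eye(O.shape[0]))  # C_jj^{-1}
            inv_covs.append(C_inv)
            M += O.T @ C_inv @ O                                       # M += P_j
        _, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
        G = vecs[:, -r:].T                   # rows of G: top-r eigenvectors of M
        U = [C_inv @ O @ G.T for C_inv, O in zip(inv_covs, views)]
        return U, G

    def gcca_grad(U_j, G, O_j):
        # dL/dO_j = 2 U_j G - 2 U_j U_j^T O_j, to be backpropagated into network j
        # (Algorithm 1 uses the negated, halved form dF/dO_j).
        return 2.0 * (U_j @ G - U_j @ (U_j.T @ O_j))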
[{"section_index": "0", "section_name": "EXPONENTIAL MACHINES", "section_text": "Alexander Novikov!:2\nnovikov@bayesgroup.ru\ni.oseledets@skoltech. ru\n\u2018National Research University Higher School of Economics, Moscow, Russi\n2Institute of Numerical Mathematics, Moscow, Russia\n\n3Moscow Institute of Physics and Technology, Moscow, Russia\n\n4Skolkovo Institute of Science and Technology, Moscow, Russia\nModeling interactions between features improves the performance of machine\nlearning solutions in many domains (e.g. recommender systems or sentiment\nanalysis). In this paper, we introduce Exponential Machines (ExM), a predictor that\nmodels all interactions of every order. The key idea is to represent an exponentially\nlarge tensor of parameters in a factorized format called Tensor Train (TT). The\nTensor Train format regularizes the model and lets you control the number of\nunderlying parameters. To train the model, we develop a stochastic Riemanniar\noptimization procedure, which allows us to fit tensors with 2'\u00a9\u00b0 entries. We show\nthat the model achieves state-of-the-art performance on synthetic data with high\norder interactions and that it works on par with high-order factorization machines\non a recommender system dataset MovieLens 100K."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Machine learning problems with categorical data require modeling interactions between the features\nto solve them. As an example, consider a sentiment analysis problem \u2014 detecting whether a review is\npositive or negative \u2014 and the following dataset: \u2018I liked it\u2019, \u2018I did not like it\u2019, \u2018I\u2019m not sure\u2019. Judging\nby the presence of the word \u2018like\u2019 or the word \u2018not\u2019 alone, it is hard to understand the tone of the\nreview. But the presence of the pair of words \u2018not\u2019 and \u2018like\u2019 strongly indicates a negative opinion.\nIf the dictionary has d words, modeling pairwise interactions requires O(d?) parameters and will\nprobably overfit to the data. Taking into account all interactions (all pairs, triplets, etc. of words)\nrequires impractical 2% parameters.\nIn this paper, we show a scalable way to account for all interactions. Our contributions are\nmikhail.trofimov@phystech.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a predictor that models all 2\u00a2 interactions of d-dimensional data by representing\nthe exponentially large tensor of parameters in a compact multilinear format \u2014 Tenso1\nTrain (TT-format) (Sec. 3p. Factorizing the parameters into the TT-format leads to a better\ngeneralization, a linear with respect to d number of underlying parameters and inference\ntime (Sec.[5). The TT-format lets you control the number of underlying parameters through\nthe TT-rank \u2014 a generalization of the matrix rank to tensors.\n\n. Inouw\nis often\n\nec.[9p.\n\nWe show that the linear model (e.g. 
logistic regression) is a special case of our model with the TT-rank equal to 2 (Sec. 8.3).

- We develop a stochastic Riemannian optimization learning algorithm (Sec. 6.1). In our experiments, it outperformed the stochastic gradient descent baseline (Sec. 8.2) that is often used for models parametrized by a tensor decomposition (see related work, Sec. 9).

- We extend the model to handle interactions between functions of the features, not just between the features themselves (Sec. 7)."}, {"section_index": "3", "section_name": "2 LINEAR MODEL", "section_text": "In this section, we describe a generalization of a class of machine learning algorithms: the linear model. Let us fix a training dataset of pairs $\{(x^{(f)}, y^{(f)})\}_{f=1}^{N}$, where $x^{(f)}$ is a $d$-dimensional feature vector of the $f$-th object and $y^{(f)}$ is the corresponding target variable. Also fix a loss function $\ell(\hat{y}, y): \mathbb{R}^2 \to \mathbb{R}$, which takes as input the predicted value $\hat{y}$ and the ground truth value $y$. We call a model linear if the prediction of the model depends on the features $x$ only via the dot product between the features $x$ and a $d$-dimensional vector of parameters $w$:

$$\hat{y}_{\text{linear}}(x) = \langle x, w \rangle + b, \qquad (1)$$

where $b$ is the bias term. One of the approaches to learn the parameters $w$ and $b$ of the model is to minimize the following loss:

$$\sum_{f=1}^{N} \ell\!\left(\langle x^{(f)}, w \rangle + b,\; y^{(f)}\right) + \frac{\lambda}{2} \|w\|_2^2, \qquad (2)$$

where $\lambda$ is the regularization parameter. For the linear model we can choose any regularization term instead of $L_2$, but later the choice of the regularization term will become important (see Sec. 6.1).

Several machine learning algorithms can be viewed as a special case of the linear model with an appropriate choice of the loss function $\ell(\hat{y}, y)$: least squares regression (squared loss), Support Vector Machine (hinge loss), and logistic regression (logistic loss)."}, {"section_index": "4", "section_name": "3 OUR MODEL", "section_text": "Before introducing our model equation in the general case, consider a 3-dimensional example. The equation includes one term per subset of features (each interaction):

$$\hat{y}(x) = W_{000} + W_{100} x_1 + W_{010} x_2 + W_{001} x_3 + W_{110} x_1 x_2 + W_{101} x_1 x_3 + W_{011} x_2 x_3 + W_{111} x_1 x_2 x_3. \qquad (3)$$

Note that all permutations of features in a term (e.g. $x_1 x_2$ and $x_2 x_1$) correspond to a single term and have exactly one associated weight (e.g. $W_{110}$).

In the general case, we enumerate the subsets of features with a binary vector $(i_1, \ldots, i_d)$, where $i_k = 1$ if the $k$-th feature belongs to the subset. The model equation looks as follows:

$$\hat{y}(x) = \sum_{i_1=0}^{1} \cdots \sum_{i_d=0}^{1} \mathcal{W}_{i_1 \ldots i_d} \prod_{k=1}^{d} x_k^{i_k}. \qquad (4)$$

Here we assume that $0^0 = 1$. The model is parametrized by a $d$-dimensional tensor $\mathcal{W}$, which consists of $2^d$ elements.

Note that there is no need for a separate bias term, since it is already included in the model as the weight tensor element $\mathcal{W}_{0 \ldots 0}$ (see the model equation example (3)).
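For intuition only, the model equation (4) can be evaluated naively by enumerating all $2^d$ interactions; the following toy NumPy sketch (our own code, usable only for very small $d$, and exactly the cost that the TT-format below avoids) does so:

    import itertools
    import numpy as np

    def exm_predict_bruteforce(W, x):
        # W: d-dimensional array with two entries per axis (2 x 2 x ... x 2);
        # x: d-dimensional feature vector. Sums one term per subset of
        # features; the empty product equals 1, which yields the bias term.
        y = 0.0
        for idx in itertools.product((0, 1), repeat=len(x)):
            y += W[idx] * np.prod([x_k for x_k, i_k in zip(x, idx) if i_k])
        return y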
The model equation (4) is linear with respect to the weight tensor $\mathcal{W}$. To emphasize this fact and simplify the notation, we rewrite the model equation (4) as a tensor dot product $\hat{y}(x) = \langle \mathcal{X}, \mathcal{W} \rangle$, where the tensor $\mathcal{X}$ is defined as follows:

$$\mathcal{X}_{i_1 \ldots i_d} = \prod_{k=1}^{d} x_k^{i_k}. \qquad (5)$$

A $d$-dimensional tensor $\mathcal{A}$ is said to be represented in the Tensor Train (TT) format (Oseledets, 2011) if each of its elements can be computed as the following product of $d-2$ matrices and 2 vectors:

$$\mathcal{A}_{i_1 \ldots i_d} = G_1[i_1] \, G_2[i_2] \cdots G_d[i_d], \qquad (6)$$

where for any $k = 2, \ldots, d-1$ and for any value of $i_k$, $G_k[i_k]$ is an $r \times r$ matrix, $G_1[i_1]$ is a $1 \times r$ vector, and $G_d[i_d]$ is an $r \times 1$ vector (see Fig. 1).

[Figure 1: An illustration of the TT-format for a 3 x 4 x 4 x 3 tensor A with the TT-rank equal to 3.]

We refer to the collection of matrices $G_k$ corresponding to the same dimension $k$ (technically, a 3-dimensional array) as the $k$-th TT-core, where $k = 1, \ldots, d$. The size $r$ of the slices $G_k[i_k]$ controls the trade-off between the representational power of the TT-format and the computational efficiency of working with the tensor. We call $r$ the TT-rank of the tensor $\mathcal{A}$.

An attractive property of the TT-format is the ability to perform algebraic operations on tensors without materializing them, i.e. by working with the TT-cores instead of the tensors themselves. The TT-format supports computing the norm of a tensor and the dot product between tensors; the element-wise sum and element-wise product of two tensors (the result is a tensor in the TT-format with increased TT-rank); and some other operations (Oseledets, 2011)."}, {"section_index": "5", "section_name": "5 INFERENCE", "section_text": "In this section, we return to the model proposed in Sec. 3 and show how to compute the model equation (4) in linear time. To avoid the exponential complexity, we represent the weight tensor $\mathcal{W}$ and the data tensor $\mathcal{X}$ (5) in the TT-format. The TT-ranks of these tensors determine the efficiency of the scheme. During learning, we initialize and optimize the tensor $\mathcal{W}$ in the TT-format and explicitly control its TT-rank. The TT-rank of the tensor $\mathcal{X}$ always equals 1. Indeed, the following TT-cores give the exact representation of the tensor $\mathcal{X}$:

$$G_k[i_k] = x_k^{i_k} \in \mathbb{R}^{1 \times 1}, \qquad k = 1, \ldots, d.$$

Theorem 1. The model response $\hat{y}(x)$ can be computed in $O(r^2 d)$, where $r$ is the TT-rank of the weight tensor $\mathcal{W}$.

We refer the reader to Appendix A, where we propose an inference algorithm with $O(r^2 d)$ complexity and thus prove Theorem 1.
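The constructive proof in Appendix A boils down to a loop over the TT-cores. Here is a minimal NumPy sketch under our own array layout (core $k$ stored as an array of shape (2, r_left, r_right), with r_left = 1 for the first core and r_right = 1 for the last); the released implementation uses the TT-Toolbox instead:

    import numpy as np

    def exm_predict_tt(cores, x):
        # O(r^2 d) inference: y(x) = A_1 A_2 ... A_d, where
        # A_k = G_k[0] + x_k * G_k[1], since the data tensor has TT-rank 1.
        v = None
        for G, x_k in zip(cores, x):
            A = G[0] + x_k * G[1]
            v = A if v is None else v @ A   # keeps a (1 x r) running vector
        return float(v[0, 0])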
The loss function we minimize during learning is

$$L(\mathcal{W}) = \sum_{f=1}^{N} \ell\!\left(\langle \mathcal{X}^{(f)}, \mathcal{W} \rangle,\; y^{(f)}\right) + \frac{\lambda}{2} \|\mathcal{W}\|_F^2, \qquad \|\mathcal{W}\|_F^2 = \sum_{i_1=0}^{1} \cdots \sum_{i_d=0}^{1} \mathcal{W}_{i_1 \ldots i_d}^2.$$

The TT-rank of the weight tensor $\mathcal{W}$ is a hyper-parameter of our method and it controls the efficiency vs. flexibility trade-off. A small TT-rank regularizes the model and yields fast learning and inference, but restricts the possible values of the tensor $\mathcal{W}$. A large TT-rank allows any value of the tensor $\mathcal{W}$ and effectively leaves us with the full polynomial model without any advantages of the TT-format.

Learning the parameters of the proposed model corresponds to minimizing the loss under the TT-rank constraint:

$$\min_{\mathcal{W}} L(\mathcal{W}) \quad \text{subject to} \quad \text{TT-rank}(\mathcal{W}) = r. \qquad (7)$$

We consider two approaches to solving problem (7). In the baseline approach, we optimize the objective $L(\mathcal{W})$ with stochastic gradient descent applied to the underlying parameters of the TT-format of the tensor $\mathcal{W}$. A simple alternative to the baseline is to perform gradient descent with respect to the tensor $\mathcal{W}$ itself, that is, to subtract the gradient from the current estimate of $\mathcal{W}$ on each iteration. The TT-format indeed allows one to subtract tensors, but this operation increases the TT-rank on each iteration, making this approach impractical.

To improve upon the baseline and avoid the TT-rank growth, we exploit the geometry of the set of tensors that satisfy the TT-rank constraint (7) to build a Riemannian optimization procedure (Sec. 6.1). We experimentally show the advantage of this approach over the baseline in Sec. 8.2.

The set of all $d$-dimensional tensors with fixed TT-rank $r$,

$$\mathcal{M}_r = \left\{ \mathcal{W} \in \mathbb{R}^{2 \times \cdots \times 2} : \text{TT-rank}(\mathcal{W}) = r \right\},$$

forms a Riemannian manifold (Holtz et al., 2012). This observation allows us to use Riemannian optimization to solve problem (7). Riemannian gradient descent consists of the following steps, which are repeated until convergence (see Fig. 2 for an illustration):

1. Project the gradient $\frac{\partial L}{\partial \mathcal{W}}$ onto the tangent space of $\mathcal{M}_r$ taken at the point $\mathcal{W}$. We denote the tangent space as $T_{\mathcal{W}} \mathcal{M}_r$ and the projection as $\mathcal{G} = P_{T_{\mathcal{W}} \mathcal{M}_r}\!\left(\frac{\partial L}{\partial \mathcal{W}}\right)$.
2. Follow along $\mathcal{G}$ with some step $\alpha$ (this operation increases the TT-rank).
3. Retract the new point $\mathcal{W} - \alpha \mathcal{G}$ back to the manifold $\mathcal{M}_r$, that is, decrease its TT-rank to $r$.

[Figure 2: An illustration of one step of the Riemannian gradient descent. The step-size alpha is assumed to be 1 for clarity of the figure.]

We now describe how to implement each of the steps outlined above. Lubich et al. (2015) proposed an algorithm to project a TT-tensor $\mathcal{Z}$ on the tangent space of $\mathcal{M}_r$ at a point $\mathcal{W}$, which consists of two steps: preprocess $\mathcal{W}$ in $O(d r^3)$ and project $\mathcal{Z}$ in $O(d r^2 \, \text{TT-rank}(\mathcal{Z})^2)$. Lubich et al. (2015) also showed that the TT-rank of the projection is bounded by a constant that is independent of the TT-rank of the tensor $\mathcal{Z}$:

$$\text{TT-rank}\!\left(P_{T_{\mathcal{W}} \mathcal{M}_r}(\mathcal{Z})\right) \le 2\, \text{TT-rank}(\mathcal{W}) = 2r.$$

The gradient of the loss is

$$\frac{\partial L}{\partial \mathcal{W}} = \sum_{f=1}^{N} \frac{\partial \ell}{\partial \hat{y}} \mathcal{X}^{(f)} + \lambda \mathcal{W},$$

and its projection onto the tangent space is

$$P_{T_{\mathcal{W}} \mathcal{M}_r}\!\left(\frac{\partial L}{\partial \mathcal{W}}\right) = \sum_{f=1}^{N} \frac{\partial \ell}{\partial \hat{y}} P_{T_{\mathcal{W}} \mathcal{M}_r}\!\left(\mathcal{X}^{(f)}\right) + \lambda \mathcal{W}.$$

Since the resulting expression is a weighted sum of projections of individual data tensors $\mathcal{X}^{(f)}$, we can project them in parallel. Since the TT-rank of each of them equals 1 (see Sec. 5), all $N$ projections cost $O(d r^2 (r + N))$ in total. The TT-rank of the projected gradient is less than or equal to $2r$ regardless of the dataset size $N$.

Note that here we used the particular choice of the regularization term. For terms other than $L_2$ (e.g. $L_1$), the gradient may have arbitrarily large TT-rank.

Since we aim for big datasets, we use a stochastic version of the Riemannian gradient descent: on each iteration we sample a random mini-batch of objects from the dataset, compute the stochastic gradient for this mini-batch, make a step along the projection of the stochastic gradient, and retract back to the manifold (Alg. 1).

An iteration of the stochastic Riemannian gradient descent with a mini-batch of size $M$ consists of inference $O(d r^2 M)$, projection $O(d r^2 (r + M))$, and retraction $O(d r^3)$, which yields $O(d r^2 (r + M))$ total computational complexity.

We found that a random initialization of the TT-tensor $\mathcal{W}$ sometimes freezes the convergence of the optimization method (Sec. 8.3). We propose to initialize the optimization from the solution of the corresponding linear model (1). The following theorem shows how to initialize the weight tensor $\mathcal{W}$ from a linear model.

Theorem 2. For any $d$-dimensional vector $w$ and a bias term $b$ there exists a tensor $\mathcal{W}$ of TT-rank 2 such that for any $d$-dimensional vector $x$ and the corresponding object-tensor $\mathcal{X}$ the dot products $\langle x, w \rangle$ and $\langle \mathcal{X}, \mathcal{W} \rangle$ coincide.

In this section, we extend the proposed model to handle polynomials of any functions of the features; as an example, consider interactions between the logarithms of the features. In the general case, to model interactions between $n_g$ functions $g_1, \ldots, g_{n_g}$ of the features, we redefine the object-tensor as follows:

$$\mathcal{X}_{i_1 \ldots i_d} = \prod_{k=1}^{d} c(x_k, i_k), \qquad c(x_k, i_k) = \begin{cases} 1, & \text{if } i_k = 0, \\ g_1(x_k), & \text{if } i_k = 1, \\ \;\vdots & \\ g_{n_g}(x_k), & \text{if } i_k = n_g. \end{cases}$$

The weight tensor $\mathcal{W}$ and the object-tensor $\mathcal{X}$ now consist of $(n_g + 1)^d$ elements. After this change to the object-tensor $\mathcal{X}$, the learning and inference algorithms stay unchanged compared to the original model (4).

Categorical features. Our basic model handles categorical features $x_k \in \{1, \ldots, K\}$ by converting them into one-hot vectors $x_{k,1}, \ldots, x_{k,K}$. The downside of this approach is that it wastes the model capacity on modeling non-existing interactions between the one-hot vector elements $x_{k,1}, \ldots, x_{k,K}$, which correspond to the same categorical feature. Instead, we propose to use one TT-core per categorical feature and use the model extension technique with the following function:

$$c(x_k, i_k) = \begin{cases} 1, & \text{if } i_k = x_k \text{ or } i_k = 0, \\ 0, & \text{otherwise}. \end{cases}$$

This allows us to cut the number of parameters per categorical feature from $2 K r^2$ to $(K + 1) r^2$ without losing any representational power.
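As a sketch of this extension (our own helper names, compatible with the exm_predict_tt layout above), the rank-1 cores of the redefined object-tensor simply store the $n_g + 1$ values $c(x_k, i_k)$ per feature; the weight cores must then also carry $n_g + 1$ slices, and inference contracts $A_k = \sum_i c(x_k, i) G_k[i]$ exactly as before:

    import numpy as np

    def object_tensor_cores(x, funcs):
        # funcs: list of n_g feature functions g_1, ..., g_{n_g}.
        # Slice i_k of core k holds c(x_k, i_k): 1 for i_k = 0, g_i(x_k) for i_k = i.
        cores = []
        for x_k in x:
            G = np.empty((len(funcs) + 1, 1, 1))
            G[0, 0, 0] = 1.0                  # i_k = 0: feature not in the subset
            for i, g in enumerate(funcs, start=1):
                G[i, 0, 0] = g(x_k)
            cores.append(G)
        return cores

    # e.g. interactions of the polynomial feature maps (x, x^2):
    # cores = object_tensor_cores(x, [lambda t: t, lambda t: t ** 2])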
"}, {"section_index": "6", "section_name": "8 EXPERIMENTS", "section_text": "We release a Python implementation of the proposed algorithm and the code to reproduce the experiments. For the operations related to the TT-format, we used the TT-Toolbox."}, {"section_index": "7", "section_name": "8.1 DATASETS", "section_text": "The datasets used in the experiments (see details in Appendix C):

1. UCI (Lichman, 2013) Car dataset: a classification problem with 1728 objects and 21 binary features (after one-hot encoding). We randomly split the data into 1382 training and 346 test objects and binarized the labels for simplicity.
2. UCI HIV dataset: a binary classification problem with 1625 objects and 160 features, which we randomly split into 1300 training and 325 test objects.
3. Synthetic data: we generated 100 000 train and 100 000 test objects with 30 features and set the ground truth target variable to a 6-degree polynomial of the features.
4. MovieLens 100K: a recommender system dataset with 943 users and 1682 movies (Harper & Konstan, 2015). We followed Blondel et al. (2016a) in preparing 2703 one-hot features and in turning the problem into binary classification."}, {"section_index": "8", "section_name": "8.2 RIEMANNIAN OPTIMIZATION", "section_text": "In this experiment, we compared two approaches to training the model: Riemannian optimization (Sec. 6.1) vs. the baseline (Sec. 6). In this and later experiments, we tuned the learning rate of both the Riemannian and SGD optimizers with respect to the training loss after 100 iterations by grid search with a logarithmic grid.

On the Car and HIV datasets we turned off the regularization ($\lambda = 0$) and used rank $r = 4$. We report that on the Car dataset Riemannian optimization (learning rate $\alpha = 40$) converges faster and achieves a better final loss than the baseline (learning rate $\alpha = 0.03$), both in terms of the training and test losses (Fig. 3a, 5a). On the HIV dataset Riemannian optimization (learning rate $\alpha = 800$) converges to its final training loss value about 20 times faster than the baseline (learning rate $\alpha = 0.001$, see Fig. 3b), but the model overfits to the data (Fig. 5b).

The results on the synthetic dataset with high-order interactions confirm the superiority of the Riemannian approach over SGD: we failed to train the model at all with SGD (Fig. 6).

On the MovieLens 100K dataset, we used only SGD-type algorithms, because using the one-hot feature encoding is much slower than using the categorical version (see Sec. 7), and we have yet to implement support for categorical features in the Riemannian optimizer. On the bright side, prototyping the categorical version of ExM in TensorFlow allowed us to use a GPU accelerator."}, {"section_index": "9", "section_name": "8.3 INITIALIZATION", "section_text": "In this experiment, we compared random initialization with initialization from the solution of the corresponding linear problem (Sec. 6). We explored two ways to randomly initialize a TT-tensor: 1) filling its TT-cores with independent Gaussian noise; 2) initializing $\mathcal{W}$ to represent a linear model with random coefficients (sampled from a standard Gaussian). We report that on the Car dataset type-1 random initialization slowed the convergence compared to initialization from the linear model solution (Fig. 3a), while on the HIV dataset the convergence was completely frozen (Fig. 3b).

Two possible reasons for this effect are: a) the vanishing and exploding gradients problem (Bengio et al., 1994) that arises when dealing with a product of a large number of factors (160 in the case of the HIV dataset); b) by initializing the model in such a way that high-order terms dominate, we may force the gradient-based optimization to focus on high-order terms, while it may be more stable to start with low-order terms instead.
Type-2 initialization (a random linear model) indeed worked on par with the best linear initialization on the Car, HIV, and synthetic datasets (Fig. 3b, 6).

[Figure 3: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-4 Exponential Machines, training loss (logistic): (a) binarized Car dataset; (b) HIV dataset. Numbers in the legend stand for the batch size. The methods marked with 'rand init' in the legend (square and triangle markers) were initialized from a random TT-tensor from two different distributions (see Sec. 8.3); all other methods were initialized from the solution of ordinary linear logistic regression. Type-2 random initialization is omitted from the Car dataset for the clarity of the figure.]

Method          | Test AUC | Training time (s) | Inference time (s)
Log. reg.       | 0.50     | 0.4               | 0.0
RF              | 0.55     | 21.4              | 6.5
Neural Network  | 0.50     | 47.2              | 0.1
SVM RBF         | 0.50     | 2262.6            | 5380
SVM poly. 2     | 0.50     | 1152.6            | 4260
SVM poly. 6     | 0.56     | 4090.9            | 3774
2-nd order FM   | 0.50     | 638.2             | 0.5
6-th order FM   | 0.57     | 549               | 3
6-th order FM   | 0.86     | 6039              | 3
6-th order FM   | 0.96     | 38918             | 3
ExM rank 3      | 0.79     | 65                | 0.2
ExM rank 8      | 0.85     | 1831              | 1.3
ExM rank 16     | 0.96     | 48879             | 3.8

Table 1: A comparison between models on synthetic data with high-order interactions (Sec. 8.4).
We report the inference time on 100 000 test objects in the last column."}, {"section_index": "10", "section_name": "8.4 COMPARISON TO OTHER APPROACHES", "section_text": "On the synthetic dataset with high-order interactions we compared Exponential Machines (the proposed method) with the scikit-learn implementation (Pedregosa et al., 2011) of logistic regression, random forest, and kernel SVM; the FastFM implementation (Bayer, 2015) of 2-nd order Factorization Machines; our implementation of high-order Factorization Machines; and a feed-forward neural network implemented in TensorFlow (Abadi et al., 2015). We used 6-th order FM with the Adam optimizer (Kingma & Ba, 2014), for which we chose the best rank (20) and learning rate (0.003) based on the training loss after the first 50 iterations. We tried several feed-forward neural networks with ReLU activations and up to 4 fully-connected layers with 128 hidden units. We compared the models based on the Area Under the Curve (AUC) metric, since it is applicable to all methods and is robust to unbalanced labels (Tbl. 1).

On the MovieLens 100K dataset we used the categorical features representation described in Sec. 7. Our model obtained 0.784 test AUC with the TT-rank equal to 10 in 273 seconds on a Tesla K40 GPU (the inference time is 0.3 seconds per 78800 test objects); our implementation of 3-rd order FM obtained 0.782; logistic regression obtained 0.782; and Blondel et al. (2016a) reported 0.786 with 3-rd order FM on the same data.

[Figure 4: The influence of the TT-rank on the test AUC for the MovieLens 100K dataset.]

9 RELATED WORK

Kernel SVM is a flexible non-linear predictor and, in particular, it can model interactions when used with the polynomial kernel (Boser et al., 1992). As a downside, it scales at least quadratically with the dataset size (Bordes et al., 2005) and overfits on highly sparse data.

With this in mind, Rendle (2010) developed the Factorization Machine (FM), a general predictor that models pairwise interactions. To overcome the problems of polynomial SVM, FM restricts the rank of the weight matrix, which leads to a linear number of parameters and generalizes better on sparse data. FM running time is linear with respect to the number of nonzero elements in the data, which allows scaling to billions of training entries on sparse problems.

A number of works used full-batch or stochastic Riemannian optimization for data processing tasks (Meyer et al., 2011; Tan et al., 2014; Xu & Ke, 2016; Zhang et al., 2016). The last work (Zhang et al., 2016) is especially interesting in the context of our method, since it improves the convergence rate of stochastic Riemannian gradient descent and is directly applicable to our learning procedure.

In a concurrent work, Stoudenmire & Schwab (2016) proposed a model that is similar to ours but relies on a trigonometric basis, in contrast to the polynomials $(1, x)$ used in Exponential Machines (see Sec. 7 for an explanation of how to change the basis). They also proposed a different learning procedure inspired by the DMRG algorithm (Schollwoeck, 2011), which allows one to automatically choose the ranks of the model, but is hard to adapt to the stochastic regime.
One of the possible ways to combine the strengths of the DMRG and Riemannian approaches is to do a full DMRG sweep once in a few epochs of the stochastic Riemannian gradient descent to adjust the ranks.

Other relevant works include the model that approximates the decision function with a multidimensional Fourier series whose coefficients lie in the TT-format (Wahls et al., 2014); and models that are similar to FM but include squares and other powers of the features: Tensor Machines (Yang & Gittens, 2015) and Polynomial Networks (Livni et al., 2014). Tensor Machines also enjoy a theoretical generalization bound. In another relevant work, Blondel et al. (2016b) boosted the efficiency of FM and Polynomial Networks by casting their training as a low-rank tensor estimation problem, thus making it multi-convex and allowing for efficient use of Alternating Least Squares types of algorithms. Note that Exponential Machines are inherently multi-convex."}, {"section_index": "11", "section_name": "10 DISCUSSION", "section_text": "We presented a predictor that models all interactions of every order. To regularize the model and to make the learning and inference feasible, we represented the exponentially large tensor of parameters in the Tensor Train format. To train the model, we used Riemannian optimization in the stochastic regime and report that it outperforms a popular baseline based on stochastic gradient descent. However, the Riemannian learning algorithm does not support sparse data, so for datasets with hundreds of thousands of features we are forced to fall back on the baseline learning method. We found that the training process is sensitive to initialization and proposed an initialization strategy based on the solution of the corresponding linear problem. The solutions developed in this paper for stochastic Riemannian optimization may suit other machine learning models parametrized by tensors in the TT-format.

The TT-rank is one of the main hyperparameters of the proposed model. Two possible strategies can be used to choose it: grid search or DMRG-like algorithms (see Sec. 9). In our experiments we opted for the former and observed that the model is fairly robust to the choice of the TT-rank (see Fig. 4), but a too small TT-rank can hurt the accuracy (see Tbl. 1).

For high-order interactions FM uses the CP-format (Caroll & Chang, 1970; Harshman, 1970) to represent the tensor of parameters. The choice of the tensor factorization is the main difference between the high-order FM and Exponential Machines. The TT-format comes with two advantages over the CP-format: first, the TT-format allows for Riemannian optimization; second, the problem of finding the best TT-rank $r$ approximation to a given tensor always has a solution and can be solved in polynomial time. We found Riemannian optimization superior to the SGD baseline (Sec. 8.2) that is used in several other models parametrized by a tensor factorization (Rendle, 2010; Lebedev et al., 2014; Novikov et al., 2015).
Note that the CP-format also allows for Riemannian optimization for 2-order tensors (and therefore for 2-order FM)."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

I. Bayer. Fastfm: a library for factorization machines. arXiv preprint arXiv:1505.00641, 2015.

M. Blondel, A. Fujino, N. Ueda, and M. Ishihata. Higher-order factorization machines. 2016a.

A. Bordes, S. Ertekin, J. Weston, and L. Bottou. Fast kernel classifiers with online and active learning. Journal of Machine Learning Research, 6:1579-1619, 2005.

B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory, pp. 144-152, 1992.

J. D. Caroll and J. J. Chang. Analysis of individual differences in multidimensional scaling via n-way generalization of Eckart-Young decomposition. Psychometrika, 35:283-319, 1970.

F. M. Harper and A. J. Konstan. The movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 2015.

R. A. Harshman. Foundations of the PARAFAC procedure: models and conditions for an explanatory multimodal factor analysis. UCLA Working Papers in Phonetics, 16:1-84, 1970.

S. Holtz, T. Rohwedder, and R. Schneider. On manifolds of tensors of fixed TT-rank. Numerische Mathematik, pp. 701-731, 2012.

D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. In International Conference on Learning Representations (ICLR), 2014.

M. Lichman. UCI machine learning repository, 2013.

R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems 27 (NIPS), 2014.

C. Lubich, I. V. Oseledets, and B. Vandereycken. Time integration of tensor trains. SIAM Journal on Numerical Analysis, pp. 917-941, 2015.

G. Meyer, S. Bonnabel, and R. Sepulchre. Regression on fixed-rank positive semidefinite matrices: a Riemannian approach. The Journal of Machine Learning Research, pp. 593-625, 2011.

A. Novikov, D. Podoprikhin, A. Osokin, and D. Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems 28 (NIPS). 2015.

I. V. Oseledets. Tensor-Train decomposition. SIAM J. Scientific Computing, 33(5):2295-2317, 2011.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

M. Tan, I. W. Tsang, L. Wang, B. Vandereycken, and S. J. Pan.
Riemannian pursuit for big matrix recovery. 2014.

S. Rendle. Factorization machines. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pp. 995-1000, 2010.

U. Schollwoeck. The density-matrix renormalization group in the age of matrix product states. Annals of Physics, 326(1):96-192, 2011.

E. Stoudenmire and D. J. Schwab. Supervised learning with tensor networks. In Advances in Neural Information Processing Systems 29 (NIPS). 2016.

S. Wahls, V. Koivunen, H. V. Poor, and M. Verhaegen. Learning multidimensional fourier series with tensor trains. In Signal and Information Processing (GlobalSIP), 2014 IEEE Global Conference on, pp. 394-398. IEEE, 2014.

Z. Xu and Y. Ke. Stochastic variance reduced riemannian eigensolver. arXiv preprint arXiv:1605.08233, 2016.

J. Yang and A. Gittens. Tensor machines for learning target-specific polynomial features. arXiv preprint arXiv:1504.01697, 2015."}, {"section_index": "13", "section_name": "A PROOF OF THEOREM 1", "section_text": "Theorem 1 states that the inference complexity of the proposed algorithm is $O(r^2 d)$, where $r$ is the TT-rank of the weight tensor $\mathcal{W}$. In this section, we propose an algorithm that achieves the stated complexity and thus prove the theorem.

Proof. Let us rewrite the definition of the model response (4) assuming that the weight tensor $\mathcal{W}$ is represented in the TT-format (6):

$$\hat{y}(x) = \sum_{i_1=0}^{1} \cdots \sum_{i_d=0}^{1} \left( \prod_{k=1}^{d} x_k^{i_k} \right) G_1[i_1] \ldots G_d[i_d] = \left( \sum_{i_1=0}^{1} x_1^{i_1} G_1[i_1] \right) \cdots \left( \sum_{i_d=0}^{1} x_d^{i_d} G_d[i_d] \right) = A_1 A_2 \ldots A_d,$$

where the matrices $A_k$ are defined as follows:

$$A_k = \sum_{i_k=0}^{1} x_k^{i_k} G_k[i_k] = G_k[0] + x_k G_k[1].$$

The final value $\hat{y}(x)$ can be computed from the matrices $A_k$ via $d-1$ matrix-by-vector multiplications and 1 vector-by-vector multiplication, which yields $O(r^2 d)$ complexity.

Note that the proof is constructive and corresponds to an implementation of the inference algorithm.

[Figure 5: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-4 Exponential Machines, test loss (logistic): (a) binarized Car dataset; (b) HIV dataset. Numbers in the legend stand for the batch size. The methods marked with 'rand init' in the legend (square and triangle markers) were initialized from a random TT-tensor from two different distributions; all other methods were initialized from the solution of ordinary linear logistic regression. See details in Sec. 8.2 and 8.3.]"}, {"section_index": "14", "section_name": "B PROOF OF THEOREM 2", "section_text": "Theorem 2 states that it is possible to initialize the weight tensor $\mathcal{W}$ of the proposed model from the weights $w$ of the linear model.

Theorem. For any $d$-dimensional vector $w$ and a bias term $b$ there exists a tensor $\mathcal{W}$ of TT-rank 2 such that for any $d$-dimensional vector $x$ and the corresponding object-tensor $\mathcal{X}$ the dot products $\langle x, w \rangle$ and $\langle \mathcal{X}, \mathcal{W} \rangle$ coincide.

To prove the theorem, in the rest of this section we show that the tensor $\mathcal{W}$ from Theorem 2 is representable in the TT-format with the following TT-cores:

$$G_1[0] = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad G_1[1] = \begin{bmatrix} 0 & w_1 \end{bmatrix},$$

$$G_k[0] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad G_k[1] = \begin{bmatrix} 0 & w_k \\ 0 & 0 \end{bmatrix}, \qquad \forall\, 2 \le k \le d-1,$$

$$G_d[0] = \begin{bmatrix} b \\ 1 \end{bmatrix}, \qquad G_d[1] = \begin{bmatrix} w_d \\ 0 \end{bmatrix},$$

and thus the TT-rank of the tensor $\mathcal{W}$ equals 2.

Lemma 1. For any $p < d$,

$$G_1[i_1] \ldots G_p[i_p] = \begin{cases} \begin{bmatrix} 1 & 0 \end{bmatrix}, & \text{if } \sum_{k=1}^{p} i_k = 0, \\ \begin{bmatrix} 0 & w_j \end{bmatrix}, & \text{if } \sum_{k=1}^{p} i_k = 1 \text{ and } i_j = 1, \\ \begin{bmatrix} 0 & 0 \end{bmatrix}, & \text{if } \sum_{k=1}^{p} i_k \ge 2. \end{cases}$$

Proof. We prove the lemma by induction.
Indeed, for $p = 1$ the statement of the lemma becomes

$$G_1[i_1] = \begin{cases} \begin{bmatrix} 1 & 0 \end{bmatrix}, & \text{if } i_1 = 0, \\ \begin{bmatrix} 0 & w_1 \end{bmatrix}, & \text{if } i_1 = 1, \end{cases}$$

which holds by definition of the first TT-core $G_1[i_1]$.

Now assume that the statement holds for $p - 1$. If $i_p = 0$, then $G_p[0]$ is the identity matrix and the product $G_1[i_1] \ldots G_p[i_p]$ stays unchanged. If $i_p = 1$, then there are 3 options:

$$G_1[i_1] \ldots G_p[i_p] = \begin{bmatrix} 1 & 0 \end{bmatrix} G_p[1] = \begin{bmatrix} 0 & w_p \end{bmatrix}, \quad \text{if } \textstyle\sum_{k=1}^{p-1} i_k = 0,$$

$$G_1[i_1] \ldots G_p[i_p] = \begin{bmatrix} 0 & w_j \end{bmatrix} G_p[1] = \begin{bmatrix} 0 & 0 \end{bmatrix}, \quad \text{if } \textstyle\sum_{k=1}^{p-1} i_k = 1 \text{ and } i_j = 1,$$

$$G_1[i_1] \ldots G_p[i_p] = \begin{bmatrix} 0 & 0 \end{bmatrix} G_p[1] = \begin{bmatrix} 0 & 0 \end{bmatrix}, \quad \text{if } \textstyle\sum_{k=1}^{p-1} i_k \ge 2,$$

which completes the induction. Finally, multiplying by the last core gives

$$\mathcal{W}_{i_1 \ldots i_d} = G_1[i_1] \ldots G_{d-1}[i_{d-1}] G_d[i_d] = \begin{cases} b, & \text{if } \sum_{k=1}^{d} i_k = 0, \\ w_j, & \text{if } \sum_{k=1}^{d} i_k = 1 \text{ and } i_j = 1, \\ 0, & \text{otherwise}. \end{cases}$$

The TT-rank of the obtained tensor equals 2 since its TT-cores are of size $2 \times 2$.

[Figure 6: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-3 Exponential Machines on the synthetic dataset with high-order interactions, training and test loss (logistic). The first number in each legend entry stands for the batch size. The method marked with 'rand init' in the legend (triangle markers) was initialized from a random linear model; all other methods were initialized from the solution of ordinary linear logistic regression. See details in Sec. 8.2 and 8.3.]
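The construction above is easy to check numerically. Here is a NumPy version of the rank-2 cores from Theorem 2 (our array layout, matching the exm_predict_tt sketch from Sec. 5; it assumes $d \ge 2$):

    import numpy as np

    def linear_model_cores(w, b):
        # Rank-2 TT-cores such that <X, W> = <x, w> + b for every x.
        d = len(w)
        cores = [np.array([[[1.0, 0.0]],              # G_1[0]
                           [[0.0, w[0]]]])]           # G_1[1]
        for k in range(1, d - 1):
            cores.append(np.array([np.eye(2),                     # G_k[0] = I
                                   [[0.0, w[k]], [0.0, 0.0]]]))   # G_k[1]
        cores.append(np.array([[[b], [1.0]],          # G_d[0]
                               [[w[-1]], [0.0]]]))    # G_d[1]
        return cores

    # sanity check:
    # w, b, x = np.random.randn(5), 0.3, np.random.randn(5)
    # assert np.isclose(exm_predict_tt(linear_model_cores(w, b), x), x @ w + b)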
"}, {"section_index": "15", "section_name": "C DATASETS", "section_text": "Details of the datasets used in the experiments:

1. UCI (Lichman, 2013) Car dataset: a classification problem with 1728 objects and 21 binary features (after one-hot encoding). We randomly split the data into 1382 training and 346 test objects. For simplicity, we binarized the labels: we picked the first class ('unacc') and made a one-versus-rest binary classification problem from the original Car dataset.

2. UCI (Lichman, 2013) HIV dataset: a binary classification problem with 1625 objects and 160 features, which we randomly split into 1300 training and 325 test objects.

3. Synthetic data: we generated 100 000 train and 100 000 test objects with 30 features. Each entry of the data matrix $X$ was independently sampled from $\{-1, +1\}$ with equal probabilities 0.5. We also uniformly sampled 20 subsets of features (interactions) of order 6: $j_1^z, \ldots, j_6^z \sim U\{1, \ldots, 30\}$. We set the ground truth target variable to a deterministic function of the input, $y(x) = \sum_{z=1}^{20} \varepsilon_z \prod_{h=1}^{6} x_{j_h^z}$, and sampled the weights of the interactions from the uniform distribution: $\varepsilon_1, \ldots, \varepsilon_{20} \sim U(-1, 1)$.

4. MovieLens 100K: a recommender system dataset with 943 users and 1682 movies (Harper & Konstan, 2015). We followed Blondel et al. (2016a) in preparing the features and in turning the problem into binary classification. For users, we converted age (rounded to decades), living area (the first digit of the zipcode), gender and occupation into a binary indicator vector using one-hot encoding. For movies, we used the release year (rounded to decades) and genres, also one-hot encoded. This process yielded 49 + 29 = 78 additional one-hot features for each user-movie pair (943 + 1682 + 78 features in total). Original ratings were binarized using 5 as a threshold. This results in 21200 positive samples, half of which were used for training (with an equal amount of sampled negative examples) and the rest were used for testing."}]
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "One of the goals of Artificial Intelligence (AI) is to build autonomous agents that can learn and\nadapt to new environments. Reinforcement Learning (RL) is a key technique for achieving such\nadaptability. The goal of RL algorithms is to learn an optimal policy for choosing actions that\nmaximize some notion of long term performance. Transferring knowledge gained from tasks solved\nearlier to solve a new target task can help, either in terms of speeding up the learning process o1\nin terms of achieving a better solution, among other performance measures. When applied to RL.\ntransfer could be accomplished in many ways (see {Taylor & Stone] (2009} [2011 for a very good\nsurvey of the field). One could use the value function from the source task as an initial estimate in\n\nthe target task to cut down exploration [Sorg & Singh|(2009)]. Alternatively one could use policies\n\nfrom the source task(s) in the target task. This can take one of two forms - (i) the derived policies\n\ncan be used as initial exploratory trajectories [Atkeson & Schaal (1997); Niekum et al. (2013)] in\n\nthe target task and (ii) the derived policy could be used to define macro-actions which may then be\nused by the agent in solving the target task [Mannor et al.](2004);/Brunskill & Li|(2014)]."}, {"section_index": "1", "section_name": "ATTEND, ADAPT AND TRANSFER:\n\nATTENTIVE DEEP ARCHITECTURE FOR ADAPTIVE\nTRANSFER FROM MULTIPLE SOURCES IN THE SAME\nDOMAIN", "section_text": "Aravind S. Lakshminarayanan *\nAravind 5S. Lakshminarayanan ~\nIndian Institute of Technology Madra:\naravindsrinivas@gmail.com\nprasanna.p@cs.mcgill.c\u00e9"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The key contribution in the architecture is a deep attention network, that decides which solutions tc\nattend to, for a given input state. The network learns solutions as a function of current state thereby\naiding the agent in adopting different solutions for different parts of the state space in the target task.\nTo this end, we propose A2T: Attend, Adapt and Transfer, an Attentive Deep Architecture for Adap\ntive Transfer, that avoids negative transfer while performing selective transfer from multiple sourc\ntasks in the same domain. In addition to the tennis example, A2T is a fairly generic framework tha\ncan be used to selectively transfer different skills available from different experts as appropriate t\nthe situation. For instance, a household robot can appropriately use skills from different expert\nfor different household chores. This would require the skill to transfer manipulation skills acros\nobjects, tasks and robotic actuators. With a well developed attention mechanism, the most appropri\nate and helpful combination of object-skill-controller can be identified for aiding the learning on |\nrelated new task. Further, A2T is generic enough to effect transfer of either action policies or action\nvalue functions, as the case may be. We also adapt different algorithms in reinforcement learnin;\nas appropriate for the different settings and empirically demonstrate that the A2T is effective fo\ntransfer learning for each setting."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "As mentioned earlier, transfer learning approaches could deal with transferring policies or value\nfunctions. For example,|Banerjee & Stone! (2007) describe a method for transferring value functions\nby constructing a Game tree. 
Similarly, Sorg & Singh (2009) use the value function from a source task as the initial estimate of the value function in the target task.

Another method to achieve transfer is to reuse policies derived in the source task(s) in the target task. Probabilistic Policy Reuse, as discussed in Fernandez & Veloso (2006), maintains a library of policies and selects a policy based on a similarity metric, or a random policy, or a max-policy from the knowledge obtained. This is different from the proposed approach in that the proposed approach
can transfer policies at the granularity of individual states, which is not possible in policy reuse, rendering it unable to learn a customized policy at that granularity. Atkeson & Schaal (1997) and Niekum et al. (2013) evaluated the idea of having the transferred policy from the source tasks as exploratory policies instead of having a random exploration policy. This provides better exploration behavior provided the tasks are similar. Talvitie & Singh (2007) try to find the most promising policy from a set of candidate policies that are generated using different action mappings to a single solved task. In contrast, we make use of one or more source tasks to selectively transfer policies at the granularity of states. Apart from policy transfer and value transfer as discussed above, Ferguson & Mahadevan (2006) discuss representation transfer using Proto Value Functions.

The ideas of negative and selective transfer have been discussed earlier in the literature. For example, Lazaric & Restelli (2011) address the issue of negative transfer in transferring samples for a related task in a multi-task setting. Konidaris et al. (2012) discuss the idea of exploiting shared common features across related tasks. They learn a shaping function that can be used in later tasks.

The two recent works that are most relevant to the proposed architecture are Parisotto et al. (2015) and Rusu et al. (2016). Parisotto et al. (2015) explore transfer learning in RL across Atari games by trying to learn a multi-task network over the available source tasks and directly fine-tuning the learned multi-task network on the target task. However, fine-tuning as a transfer paradigm cannot address the issue of negative transfer, which they do observe in many of their experiments. Rusu et al. (2016) try to address the negative transfer issue by proposing a sequential learning mechanism where the filters of the network being learned for an ongoing task depend, through lateral connections, on the lower-level filters of the networks already learned for previous tasks. The idea is to ensure that dependencies that characterize similarity across tasks can be learned through these lateral connections. Even though they do observe better transfer results than direct fine-tuning, they are still not able to avoid negative transfer in some of their experiments."}, {"section_index": "4", "section_name": "3 PROPOSED ARCHITECTURE", "section_text": "Let there be $N$ source tasks and let $K_1, K_2, \ldots,$
$K_N$ be the solutions of these source tasks $1, \ldots, N$ respectively. Let $K_T$ be the solution that we learn in the target task $T$. Source tasks refer to tasks that we have already learnt to perform and target task refers to the task that we are interested in learning now. These solutions could be, for example, policies or state-action values. Here the source tasks should be in the same domain as the target task, having the same state and action spaces. We propose a setting where $K_T$ is learned as a function of $K_1, \ldots, K_N, K_B$, where $K_B$ is the solution of a base network which starts learning from scratch while acting on the target task. In this work, we use a convex combination of the solutions to obtain $K_T$:

$$K_T(s) = w_{N+1,s} K_B(s) + \sum_{i=1}^{N} w_{i,s} K_i(s), \qquad (1)$$

$$\sum_{i=1}^{N+1} w_{i,s} = 1, \qquad w_{i,s} \in [0, 1], \qquad (2)$$

where $w_{i,s}$ is the weight given to the $i$th solution at state $s$.

The agent uses $K_T$ to act in the target task. Figure 1a shows the proposed architecture. While the source task solutions $K_1, \ldots, K_N$ remain fixed, the base network solutions are learnt and hence $K_B$ can change over time. There is a central network which learns the weights ($w_{i,s}$, $i \in \{1, 2, \ldots, N+1\}$), given the input state $s$. We refer to this network as the attention network. The $[0, 1]$ weights determine the attention each solution gets, allowing the agent to selectively accept or reject the different solutions, depending on the input state. We adopt a soft-attention mechanism whereby more than one weight can be non-zero (Bahdanau et al., 2014), as opposed to a hard-attention mechanism (Mnih et al., 2014) where we are forced to have only one non-zero weight:

$$(e_{1,s}, e_{2,s}, \ldots, e_{N+1,s}) = f(s; \theta_a), \qquad w_{i,s} = \frac{\exp(e_{i,s})}{\sum_{j=1}^{N+1} \exp(e_{j,s})}, \qquad i \in \{1, 2, \ldots, N+1\}. \qquad (3)$$

[Figure 1: (a) A2T architecture. The dotted arrows represent the path of back propagation. (b) Actor-Critic using A2T.]

Here, $f(s; \theta_a)$ is a deep neural network (the attention network), which could consist of convolution layers and fully connected layers depending on the representation of the input. It is parametrized by $\theta_a$ and takes as input a state $s$ and outputs a vector of length $N+1$, which gives the attention scores for the $N+1$ solutions at state $s$. Eq. (3) normalizes these scores to get the weights that satisfy Eq. (2).
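A minimal Python sketch of this attention-weighted combination may help make Eqs. (1)-(3) concrete. The function names here are ours, not the authors'; each solution is any callable mapping a state to a vector (action probabilities or Q-values):

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))
        return e / e.sum()

    def a2t_combine(state, attention_net, source_solutions, base_net):
        # attention_net(state): vector of N + 1 scores e_{i,s} (eq. 3).
        # Returns the convex combination K_T(s) of eq. (1).
        w = softmax(attention_net(state))                 # weights w_{i,s}
        ks = [k(state) for k in source_solutions]         # K_1(s), ..., K_N(s)
        ks.append(base_net(state))                        # K_B(s)
        return sum(w_i * k_s for w_i, k_s in zip(w, ks))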
For parts of the state space in the target task where the source task solutions cause negative transfer or are not relevant, the attention network learns to give a high weight to the base network solution (which can be learnt and improved), thus avoiding negative transfer. Depending on the feedback obtained from the environment upon following $K_T$, the attention network's parameters $\theta_a$ are updated to improve performance.

As mentioned earlier, the source task solutions $K_1, \ldots, K_N$ remain fixed. Updating these source tasks' parameters would cause a significant amount of unlearning in the source task solutions and result in weaker transfer, which we observed empirically. Keeping them fixed also enables the use of source task solutions as long as we have their outputs alone, irrespective of how and where they come from.

Even though the agent follows $K_T$, we update the parameters of the base network that produces $K_B$ as if the action taken by the agent was based only on $K_B$. Due to this special way of updating $K_B$, apart from the experience obtained through the unique and individual contribution of $K_B$ to $K_T$ in parts of the state space where the source task solutions are not relevant, $K_B$ also uses the valuable experience obtained by using $K_T$, which uses the solutions of the source tasks as well.

This also means that, if there is a source task whose solution $K_i$ is useful for the target task in some parts of its state space, then $K_B$ tries to replicate $K_i$ in those parts of the state space. In practice, the source task solutions, though useful, might need to be modified to suit the target task perfectly. The base network takes care of the modifications required to make the useful source task solutions perfect for the target task, and the special way of training the base network assists the architecture in achieving this faster. Note that the agent can follow/use $K_i$ through $K_T$ even when $K_B$ has not yet replicated it in the corresponding parts of the state space. This allows for good performance of the agent in the earlier stages of training itself, when a useful source task is available and identified.

Since the attention is soft, our model has the flexibility to combine multiple solutions. The use of deep neural networks allows the model to work even for large, complex RL problems. The deep attention network allows the agent to learn complex selection functions without worrying about representation issues a priori. To summarise, for a given state, A2T learns to attend to specific solutions and adapts this attention over different states, hence attaining useful transfer. A2T is general and can be used for the transfer of solutions such as policies and values."}, {"section_index": "5", "section_name": "3.1 POLICY TRANSFER", "section_text": "The solutions that we transfer here are the source task policies, taking advantage of which we learn a policy for the target task. Thus, we have $K_1, \ldots, K_N, K_B, K_T \leftarrow \pi_1, \ldots, \pi_N, \pi_B, \pi_T$. Here $\pi$ represents a stochastic policy, a probability distribution over all the actions. The agent acts in the target task by sampling actions from the probability distribution $\pi_T$. The target task policy $\pi_T$ is obtained as described in Eq. (1) and Eq. (2). The attention network that produces the weights for the different solutions is trained by the feedback obtained after taking actions following $\pi_T$. The base network that produces $\pi_B$ is trained as if the sampled action came from $\pi_B$ (though it originally came from $\pi_T$), the implications of which were discussed in the previous section. When the attention network's weight for the policy $\pi_B$ is high, the mixture policy $\pi_T$ is dominated by $\pi_B$, and the base network learning is nearly on-policy. In the other cases, $\pi_B$ undergoes off-policy learning. But if we look closely, even in the latter case, since $\pi_B$ moves towards $\pi_T$, it tries to be nearly on-policy all the time. Empirically, we observe that $\pi_B$ converges. This architecture for policy transfer can be used alongside any algorithm that has an explicit representation of the policy. Here we describe two instantiations of A2T for policy transfer, one for direct policy search using the REINFORCE algorithm and another in the Actor-Critic setup."}, {"section_index": "6", "section_name": "3.1.1 POLICY TRANSFER IN REINFORCE ALGORITHMS USING A2T:", "section_text": "REINFORCE algorithms [Williams (1992)] can be used for direct policy search by making weight adjustments in a direction that lies along the gradient of the expected reinforcement. The full architecture is the same as the one shown in Fig. 1a with $K \leftarrow \pi$. We do direct policy search, and the parameters are updated using REINFORCE. Let the attention network be parametrized by $\theta_a$ and the base network, which outputs $\pi_B$, be parametrized by $\theta_b$. The updates are given by:

$$\theta_a \leftarrow \theta_a + \alpha_{\theta_a}(r - b) \sum_{t=1}^{M} \frac{\partial \log \pi_T(s_t, a_t)}{\partial \theta_a} \quad (5)$$

$$\theta_b \leftarrow \theta_b + \alpha_{\theta_b}(r - b) \sum_{t=1}^{M} \frac{\partial \log \pi_B(s_t, a_t)}{\partial \theta_b} \quad (6)$$

where $\alpha_{\theta_a}, \alpha_{\theta_b}$ are non-negative factors, $r$ is the return obtained in the episode, $b$ is some baseline, and $M$ is the length of the episode. $a_t$ is the action sampled by the agent at state $s_t$ following $\pi_T$. Note that while $\pi_T(s_t, a_t)$ is used in the update of the attention network, $\pi_B(s_t, a_t)$ is used in the update of the base network.
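As a rough illustration of Eqs. (5)-(6) with automatic differentiation, consider the sketch below. Here pi_T and pi_B are assumed to be callables returning action distributions for a state (with pi_T computed as in Eq. (1)), and the optimizer objects and episode format are our own assumptions.

```python
import torch

def a2t_reinforce_update(episode, pi_T, pi_B, attention_opt, base_opt, r, b=0.0):
    """One episodic update following Eqs. (5)-(6).

    episode: list of (state, action) pairs for t = 1..M, with actions sampled
    from the mixture policy pi_T; r is the episodic return, b a baseline.
    """
    # Eq. (5): ascend (r - b) * sum_t grad log pi_T(s_t, a_t) w.r.t. theta_a.
    attention_opt.zero_grad()
    loss_a = -(r - b) * sum(torch.log(pi_T(s)[a]) for s, a in episode)
    loss_a.backward()
    attention_opt.step()

    # Eq. (6): update theta_b as if every a_t had been sampled from pi_B.
    base_opt.zero_grad()
    loss_b = -(r - b) * sum(torch.log(pi_B(s)[a]) for s, a in episode)
    loss_b.backward()
    base_opt.step()
```

Note that loss_a also deposits gradients on the base parameters (pi_T depends on pi_B), but base_opt.zero_grad() clears them before the Eq. (6) step, so only the intended gradient is applied to $\theta_b$.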
3.1.2 POLICY TRANSFER IN ACTOR-CRITIC USING A2T:

Actor-Critic methods [Konda & Tsitsiklis (2000)] are Temporal Difference (TD) methods that have two separate components, viz., an actor and a critic. The actor proposes a policy, whereas the critic estimates the value function to critique the actor's policy. The updates to the actor happen through the TD-error, which is the one-step estimation error that helps in reinforcing an agent's behaviour.

We use A2T for the actor part of the Actor-Critic. The architecture is shown in Fig. 1b. The actor, A2T, is aware of all the previously learnt tasks and tries to use those solution policies for its benefit. The critic evaluates the action selection from $\pi_T$ on the basis of the performance on the target task. With the same notations as REINFORCE for $s_t, a_t, \theta_a, \theta_b, \alpha_{\theta_a}, \alpha_{\theta_b}, \pi_B, \pi_T$, let action $a_t$ dictated by $\pi_T$ lead the agent to the next state $s_{t+1}$ with a reward $r_{t+1}$, let $V(s_t)$ represent the value of state $s_t$, and let $\gamma$ be the discount factor. Then, the update equations for the actor are as below:

$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \quad (7)$$

$$\theta_a \leftarrow \theta_a + \alpha_{\theta_a} \delta_t \frac{\partial \log \pi_T(s_t, a_t)}{\partial \theta_a} \quad (8)$$

$$\theta_b \leftarrow \theta_b + \alpha_{\theta_b} \delta_t \frac{\partial \log \pi_B(s_t, a_t)}{\partial \theta_b} \quad (9)$$

Here, $\delta_t$ is the TD error. The state-value function $V$ of the critic is learnt using TD learning.
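A per-transition sketch of Eqs. (7)-(9) under the same assumptions as before; a critic V with its own optimizer is also assumed.

```python
import torch

def a2t_actor_critic_step(s, a, r_next, s_next, V, pi_T, pi_B,
                          attention_opt, base_opt, critic_opt, gamma=0.99):
    """One transition update following Eqs. (7)-(9); a was sampled from pi_T."""
    with torch.no_grad():                            # Eq. (7): TD error
        delta = r_next + gamma * V(s_next) - V(s)

    attention_opt.zero_grad()                        # Eq. (8)
    (-delta * torch.log(pi_T(s)[a])).backward()
    attention_opt.step()

    base_opt.zero_grad()                             # Eq. (9): as if a ~ pi_B
    (-delta * torch.log(pi_B(s)[a])).backward()
    base_opt.step()

    critic_opt.zero_grad()                           # TD learning for V
    td = r_next + gamma * V(s_next).detach() - V(s)
    (td ** 2).backward()
    critic_opt.step()
```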
3.2 VALUE TRANSFER

In this case, the solutions being transferred are the source tasks' action-value functions, which we will call Q functions. Thus, $K_1, \ldots, K_N, K_B, K_T \leftarrow Q_1, \ldots, Q_N, Q_B, Q_T$. Let $A$ be the discrete action space for the tasks, with $Q_i(s) = \{Q_i(s, a_j)\ \forall\ a_j \in A\}$. The agent acts by using $Q_T$ in the target task, which is obtained as described in Eq. (1) and Eq. (2). The attention network and the base network of A2T are updated as described in the architecture.

The state-action value function $Q$ is used to guide the agent in selecting the optimal action $a$ at a state $s$, where $Q(s, a)$ is a measure of the long-term return obtained by taking action $a$ at state $s$. One way to learn optimal policies for an agent is to estimate the optimal $Q(s, a)$ for the task. Q-learning [Watkins & Dayan (1992)] is an off-policy Temporal Difference (TD) learning algorithm that does so. The Q-values are updated iteratively through the Bellman optimality equation [Puterman (1994)] with the rewards obtained from the task:

$$Q(s, a) \leftarrow \mathbb{E}[r(s, a, s') + \gamma \max_{a'} Q(s', a')]$$

In high-dimensional state spaces, it is infeasible to update the Q-value for all possible state-action pairs. One way to address this issue is to approximate $Q(s, a)$ through a parametrized function approximator $Q(s, a; \theta)$, thereby generalizing over states and actions by operating on higher-level features [Sutton & Barto (1998)]. The DQN [Mnih et al. (2015)] approximates the Q-value function with a deep neural network so as to be able to predict $Q(s, a)$ over all actions $a$, for all states $s$. The loss function used for learning a Deep Q Network is:

$$L(\theta) = \mathbb{E}_{s,a,r,s'}[(y^{DQN} - Q(s, a; \theta))^2], \qquad y^{DQN} = r + \gamma \max_{a'} Q(s', a'; \theta^-)$$

Here, $L$ represents the expected TD error corresponding to the current parameter estimate $\theta$. $\theta^-$ represents the parameters of a separate target network, while $\theta$ represents the parameters of the online network. The usage of a target network is to improve the stability of the learning updates. The gradient descent step is:

$$\nabla_\theta L(\theta) = \mathbb{E}_{s,a,r,s'}[(y^{DQN} - Q(s, a; \theta)) \nabla_\theta Q(s, a)]$$

To avoid correlated updates from learning on the same transitions that the current network simulates, an experience replay [Lin (1993)] $D$ (of fixed maximum capacity) is used, where the experiences are pooled in a FIFO fashion.

We use DQN to learn our experts $Q_i,\ i \in \{1, 2, \ldots, N\}$, on the source tasks. Q-learning is used to ensure $Q_T(s)$ is driven to a good estimate of the Q function for the target task. Taking advantage of the off-policy nature of Q-learning, both $Q_B$ and $Q_T$ can be learned from the experiences gathered by an $\epsilon$-greedy behavioral policy based on $Q_T$. Let the attention network that outputs $w$ be parametrised by $\theta_a$ and the base network outputting $Q_B$ be parametrised by $\theta_b$. Let $\theta_a^-$ and $\theta_b^-$ represent the parameters of the respective target networks. Note that the usage of "target" here is to signify the parameters ($\theta_a^-, \theta_b^-$) used to calculate the target value in the Q-learning update, and is different from its usage in the context of the target task. The update equations are:

$$y^{A2T} = r + \gamma \max_{a'} Q_T(s', a'; \theta_a^-, \theta_b^-) \quad (10)$$

$$L^{A2T}(\theta_a, \theta_b) = \mathbb{E}_{s,a,r,s'}[(y^{A2T} - Q_T(s, a; \theta_a, \theta_b))^2] \quad (11)$$

$$L^{A2T}(\theta_b) = \mathbb{E}_{s,a,r,s'}[(y^{A2T} - Q_B(s, a; \theta_b))^2] \quad (12)$$

$$\nabla_{\theta_a} L^{A2T} = \mathbb{E}[(y^{A2T} - Q_T(s, a)) \nabla_{\theta_a} Q_T(s, a)], \qquad \nabla_{\theta_b} L^{A2T} = \mathbb{E}[(y^{A2T} - Q_B(s, a)) \nabla_{\theta_b} Q_B(s, a)]$$

$\theta_a$ and $\theta_b$ are updated with the above gradients using RMSProp. Note that the Q-learning updates for both the attention network (Eq. (11)) and the base network (Eq. (12)) use the target value generated by $Q_T$.
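A minibatch sketch of the value transfer updates (Eqs. (10)-(12)); the tensor shapes, the optimizer objects, and the function names are assumptions, with q_T internally applying the attention-weighted combination of the fixed experts and q_B.

```python
import torch
import torch.nn.functional as F

def a2t_q_update(batch, q_T, q_T_target, q_B, attention_opt, base_opt, gamma=0.99):
    """One Q-learning step on a replay minibatch (s, a, r, s2)."""
    s, a, r, s2 = batch
    with torch.no_grad():                              # Eq. (10): y_A2T
        y = r + gamma * q_T_target(s2).max(dim=1).values

    # Eq. (11): attention parameters regress Q_T(s, a) toward y_A2T.
    attention_opt.zero_grad()
    F.mse_loss(q_T(s).gather(1, a.unsqueeze(1)).squeeze(1), y).backward()
    attention_opt.step()

    # Eq. (12): base parameters regress Q_B(s, a) toward the same target.
    base_opt.zero_grad()
    F.mse_loss(q_B(s).gather(1, a.unsqueeze(1)).squeeze(1), y).backward()
    base_opt.step()
```

Both regressions share the single target $y^{A2T}$ generated by $Q_T$, which is what lets the base network benefit from experience gathered with the experts' help.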
We use target networks for both $Q_B$ and $Q_T$ to stabilize the updates and reduce the non-stationarity, as in DQN training. The parameters of the target networks are periodically updated to those of the online networks."}, {"section_index": "7", "section_name": "4 EXPERIMENTS AND DISCUSSION", "section_text": "We evaluate the performance of our architecture A2T on policy transfer using two simulated worlds, viz., chain world and puddle world, as described below. The main goal of these experiments is to test the consistency of the results with the algorithm's motivation. Chain world: Figure 2a shows the chain world, where the goal of the agent is to go from one point in the chain (the starting state) to another point (the goal state) in the least number of steps. At each state the agent can choose to either move one position to the left or to the right. After reaching the goal state, the agent gets a reward that is inversely proportional to the number of steps taken to reach the goal.

Puddle worlds: Figures 2b and 2c show the discrete version of the standard puddle world that is widely used in the Reinforcement Learning literature. In this world, the goal of the agent is to go from a specified start position to the goal position, maximising its return. At each state the agent can choose one of four actions: move one position to the north, south, east or west. With 0.9 probability the agent moves in the chosen direction, and with 0.1 probability it moves in a random direction irrespective of its choice of action. On reaching the goal state, the agent gets a reward of +10. On reaching other parts of the grid, the agent gets different penalties as mentioned in the legends of the figures.

Figure 2: Different worlds for policy transfer experiments: (a) Chain World, (b) Puddle World 1, (c) Puddle World 2.

We evaluate the performance of our architecture on value transfer using the Arcade Learning Environment (ALE) platform [Bellemare et al. (2012)]. Atari 2600: ALE provides a simulator for Atari 2600 games. This is one of the most commonly used benchmark tasks for deep reinforcement learning algorithms [Mnih et al. (2015); Parisotto et al. (2015); Rusu et al. (2016)]. We perform our adaptive transfer learning experiments on the Atari 2600 game Pong.

In this section, we consider the case when multiple partially favorable source tasks are available, such that each of them can assist the learning process for different parts of the state space of the target task. The objective here is to first show the effectiveness of the attention network in learning to focus only on the source task relevant to the state the agent encounters while trying to complete the target task, and then to evaluate the full architecture with an additional randomly initialised base network.

This is illustrated for the Policy Transfer setting using the chain world shown in Fig. 2a. Consider that the target task LT is to start in A or B with uniform probability and reach C in the least number of steps. Now, consider that two learned source tasks, viz., L1 and L2, are available. L1 is the source task where the agent has learned to reach the left end (A) starting from the right end (B). In contrast, L2 is the source task where the agent has learned to reach the right end (B) starting from the left end (A). Intuitively, it is clear that the target task should benefit from the policies learnt for tasks L1 and L2. We learn to solve the task LT using REINFORCE given the policies learned for L1 and L2. Figure 3a (i) shows the weights given by the attention network to the two source task policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore L1 and L2 for the left and right halves, respectively, of the state space of the target task. Next, we add the base network and evaluate the full architecture on this task. Figure 3a (ii) shows the weights given by the attention network to the different source policies for different parts of the state space at the end of learning. We again observe that the attention network has learned to ignore L1 and L2 for the left and right halves, respectively, of the state space of the target task. As the base network replicates $\pi_T$ over time, it has a high weight throughout the state space of the target task.

Figure 3: (a) Attention weights over the chain world states (i) without and (ii) with the base network. (b) Average number of steps to goal versus episode number on Puddle World 2, for learning from scratch and for A2T with the base network and L1, L2, or both.

We also evaluate our architecture in the relatively more complex puddle world shown in Figure 2c. In this case, L1 is the task of moving from S1 to G1, and L2 is the task of moving from S2 to G1. In the target task LT, the agent has to learn to move to G1 starting from either S1 or S2, chosen with uniform probability. We learn the task LT using the Actor-Critic method, where the following are available: (i) the learned policy for L1, (ii) the learned policy for L2, and (iii) a randomly initialized policy network (the base network). Figure 3b shows the performance results. We observe that Actor-Critic using A2T is able to use the policies learned for L1 and L2 and performs better than a network learning from scratch without any knowledge of the source tasks.

We do a similar evaluation of the attention network, followed by our full architecture, for value transfer as well. We create partially useful source tasks through a modification of the Atari 2600 game Pong. We take inspiration from a real-world scenario in the sport of Tennis, where one could imagine two different right-handed (or left-handed) players: the first is an expert player on the forehand but weak on the backhand, while the second is an expert player on the backhand but weak on the forehand. For someone who is learning to play tennis with the same style (right/left) as the experts, it is easy to follow the forehand expert whenever the ball comes to the forehand and the backhand expert whenever it comes to the backhand.

We try to simulate this scenario in Pong. The trick is to blur the part of the screen where we want to force the agent to be weak at returning the ball. The blurring we use is simply to black out all pixels in the specific region required.
To make sure the blurring doesn't contrast with the background, we modify Pong to be played with a black background (pixel value 0) instead of the existing gray (pixel value 87). We construct two partially helpful source task experts L1 and L2. L1 is constructed by training a DQN on Pong with the upper quadrant (on the agent's side) blurred, while L2 is constructed by training a DQN with the lower quadrant (on the agent's side) blurred. This essentially results in the ball being invisible when it is in the upper quadrant for L1 and in the lower quadrant for L2. We therefore expect L1 to be useful in guiding the agent to return balls in the lower quadrant, and L2 in the upper quadrant. The goal of the attention network is to learn suitable filters and parameters so that it will focus on the correct source task for a specific situation in the game. The source task experts L1 and L2 scored an average of 9.2 and 8 respectively on Pong game play with a black background. With an attention network to suitably weigh the value functions of L1 and L2, an average performance of 17.2 was recorded just after a single epoch (250,000 frames) of training. (The score in Pong is in the range of [-21, 21].) This clearly shows that the attention mechanism has learned to take advantage of the experts adaptively. Fig. 4 shows a visualisation of the attention weights for the same.

Figure 4: Visualisation of the attention weights in the Selective Transfer with Attention Network experiment: green and blue bars signify the attention probabilities for Expert-1 (L1) and Expert-2 (L2) respectively. We see that in the first two snapshots, the ball is in the lower quadrant and, as expected, the attention is high on Expert-1, while in the third and fourth snapshots, as the ball bounces back into the upper quadrant, the attention increases on Expert-2.

We then evaluate our full architecture (A2T) in this setting, i.e., with the addition of a DQN learning from scratch (the base network) to the above setting. The architecture can take advantage of the knowledge of the source task experts selectively early on during the training, while using the expertise of the base network wherever required, to perform well on the target task.
Figure 5 summarizes the results, where it is clear that learning with both the partially useful experts is better than learning with only one of them, which in turn is better than learning from scratch without any additional knowledge.

Figure 5: Selective Value Transfer (average score versus training epoch, for learning from scratch and for A2T with the base network and L1, L2, or both).

We next consider the case when a source task is available such that its solution $K_1$ (policy or value) can hamper the learning process of the new target task. We refer to such a source task as an unfavorable source task. In such a scenario, the attention network shown in Figure 1a should learn to assign a very low weight to (i.e., ignore) $K_1$. We also consider a modification of this setting by adding another source task whose solution $K_2$ is favorable to the target task. In such a scenario, the attention network should learn to assign a high weight to (i.e., attend to) $K_2$ while ignoring $K_1$.

We now define an experiment using the puddle world from Figure 2b for policy transfer. The target task in our experiment is to maximize the return in reaching the goal state G1 starting from any one of the states S1, S2, S3, S4. We artificially construct an unfavorable source task by first learning to solve the above task and then negating the weights of the topmost layer of the actor network. We then add a favorable task to the above setting: we artificially construct a favorable source task simply by learning to solve the target task and using the learned actor network. Figure 6 shows the results.

Figure 6: Avoiding negative transfer and transferring policy from a favorable task (lower is better): average number of steps to goal versus episode number for learning from scratch, direct transfer with the unfavorable task, A2T with the base network and the unfavorable task, and A2T with the base network and both the favorable and unfavorable tasks.

The target task for the value transfer experiment is to reach expert-level performance on Pong. We construct two kinds of unfavorable source tasks for this experiment. Inverse-Pong: a DQN on Pong trained with negated reward functions, that is, with $R'(s, a) = -R(s, a)$, where $R(s, a)$ is the reward provided by the ALE emulator for choosing action $a$ at state $s$. Freeway: an expert DQN on another Atari 2600 game, Freeway, which has the same range of optimal value functions and the same action space as Pong.
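The Inverse-Pong expert can be produced by simply flipping the sign of the emulator reward during training. Below is a minimal sketch using an old-style Gym wrapper; the wrapper name and the four-tuple step API are assumptions about the training setup, not a description of our exact code.

```python
import gym

class NegatedReward(gym.Wrapper):
    """Implements R'(s, a) = -R(s, a) for training the Inverse-Pong expert."""
    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, -reward, done, info

# env = NegatedReward(gym.make("PongNoFrameskip-v4"))  # then train a DQN as usual
```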
We empirically verified that the Freeway expert DQN leads to negative transfer when directly initialized and fine-tuned on Pong, which makes it a good proxy for a negative source task expert even though the target task Pong has a different state space. We artificially construct a favorable source task by learning a DQN to achieve expertise on the target task (Pong) and using the learned network. Figure 7a compares the performance of the various scenarios when the unfavorable source task is Inverse-Pong, while Figure 7b offers a similar comparison with the negative expert being Freeway.

Figure 7: Avoiding negative transfer and transferring value from a favorable task (higher is better): average score versus training epoch for learning from scratch, direct transfer from the unfavorable task, A2T with the base network and the unfavorable task, and A2T with the base network and both the favorable and unfavorable tasks; in (a) the unfavorable task is Inverse-Pong, in (b) it is Freeway.

Specific training and architecture details are mentioned in the Appendix. The plots are averaged over two runs with different random seeds.

From all the above results, we can clearly see that A2T is not hampered by the unfavorable source task: it learns to ignore that task and performs competitively with a randomly initialized network learning on the target task without any expert available. Secondly, in the presence of an additional source task that is favorable, A2T learns to transfer useful knowledge from that task while ignoring the unfavorable one, thereby reaching expertise on the target task much faster than in the other scenarios.

We present the evolution of the attention weights for the experiment described in Section 4.2, where we focus on the efficacy of the A2T framework in providing an agent the ability to avoid negative transfer and transfer from a favorable source task (a perfect expert). Figure 8 depicts the evolution of the attention weights (normalised to the range [0, 1]) during the training of the A2T framework. The corresponding experiment is the case where the target task is to solve Pong, while there are two source task experts, one being a perfect Pong-playing trained DQN (to serve as the positive expert) and the other being the Inverse-Pong DQN trained with negated reward functions (to serve as the negative expert). Additionally, there is also the base network, which learns from scratch using the experience gathered by the attentively combined behavioral policy from the expert networks, the base network, and itself.

We train the framework for 30 epochs, and the plot illustrates the attention weights every second epoch. We clearly see from Figure 8 that there is no weird co-adaptation during training, and the attention on the negative expert is uniformly low throughout. Initially, the framework needs to collect some level of experience to figure out that the positive expert is optimal (or close to optimal). Till then, the attention is mostly on the base network, which is learning from scratch.
The attention then shifts to the positive expert, which in turn provides more rewarding episodes and transition tuples to learn from. Finally, the attention drifts slowly back from the positive expert to the base network, after which the attention is roughly random in choosing between the execution of the positive expert and the base network. This is because the base network has by then acquired sufficient expertise, like the positive expert, which happens to be optimal for the target task. This visualization clearly shows that A2T ignores the negative expert throughout and uses the positive expert appropriately to learn from the experience gathered and to acquire sufficient expertise on the target task.

Figure 8: Evolution of attention weights with one positive and one negative expert: attention on the positive expert, the negative expert, and the base network, shown every second epoch from 1 to 30, with a color bar from 0.0 to 0.9.

4.4 WHEN A PERFECT EXPERT IS NOT AVAILABLE AMONG THE SOURCE TASKS

In our experiments in the previous subsection dealing with the prevention of negative transfer and the use of a favorable source task, we considered the positive expert to be a perfect (close to optimal) expert on the same task we treat as the target task. This raises the question of relying on the presence of a perfect expert as the positive expert. If we have such a situation, the obvious solution is to execute each of the experts on the target task and vote for them with probabilities proportional to the average performance of each. The A2T framework, however, is not intended to just do source task selection. We illustrate this with an additional baseline experiment, where the positive source task is an imperfect expert on the target task. In such a case, just having a weighted-average voting among the available source task networks based on their individual average rewards is upper bounded by the performance of the best available positive expert, which happens to be an imperfect expert on the target task. Rather, the base network has to acquire new skills not present in the source tasks. We choose a partially trained network on Pong that scores an average of 8 (max: 21). The learning curves in Figure 9 clearly show that the A2T framework with a partial Pong expert and a negative expert performs better than (i) learning from scratch and (ii) A2T with only one negative expert, and worse than A2T with one perfect positive expert and one negative expert. This is expected, because a partial expert cannot provide as much expert knowledge as a perfect expert, but it still provides some useful knowledge that speeds up the process of solving the target task. An important conclusion from this experiment is that the A2T framework is capable of discovering new skills not available among any of the experts when such skills are required for optimally solving the target task. To maintain consistency, we perform the same number of runs for averaging scores, experimented with both learning rates, and picked the better performing one (0.00025).

Figure 9: Partial Positive Expert Experiment (average score versus training epoch)."}, {"section_index": "8", "section_name": "5 CONCLUSION AND FUTURE WORK", "section_text": "In this paper we present a very general deep neural network architecture, A2T, for transfer learning that avoids negative transfer while enabling selective transfer from multiple source tasks in the same domain. We show simple ways of using A2T for policy transfer and value transfer. We empirically evaluate its performance with different algorithms, using simulated worlds and games, and show that it indeed achieves its stated goals. Apart from transferring task solutions, A2T can also be used for transferring other useful knowledge such as a model of the world.

While in this work we focused on transfer between tasks that share the same state and action spaces and are in the same domain, the use of deep networks opens up the possibility of going beyond this setting.
For example, a deep neural network can be used to learn common representations [Parisotto et al. (2015)] for multiple tasks, thereby enabling transfer between related tasks that could possibly have different state-action spaces. A hierarchical attention over the lower-level filters across the source task networks, while learning the filters for the target task network, is another natural extension for transfer across tasks with different state-action spaces. The setup from Progressive Neural Networks [Rusu et al. (2016)] could be borrowed for the filter transfer, while the A2T setup can be retained for the policy/value transfer. Exploring this setting for continuous control tasks, so as to transfer from modular controllers as well as avoid negative transfer, is also a potential direction for future research.

The nature of the tasks considered in our experiments is naturally connected to Hierarchical Reinforcement Learning and Continual Learning. For instance, the blurring experiments inspired by Tennis, based on experts for specific skills like forehand and backhand, could be considered as learning from sub-goals (program modules) like forehand and backhand to solve a more complex and broader task like Tennis by invoking the relevant sub-goals (program modules). This structure could be very useful for building a household robot for general-purpose navigation and manipulation, whereby specific skills such as the manipulation of different objects, navigating across different source-destination points, etc., could be invoked when necessary. The attention network in the A2T framework is essentially a soft meta-controller and hence presents itself as a powerful differentiable tool for Continual and Meta Learning. Meta-controllers have typically been designed with a discrete decision structure over high-level subgoals. This paper presents an alternate differentiable meta-controller with a soft-attention scheme. We believe this aspect can be exploited for differentiable meta-learning architectures for hierarchical reinforcement learning. Overall, we believe that A2T is a novel way to approach different problems like Transfer Learning, Meta-Learning and Hierarchical Reinforcement Learning, and further refinements on top of this design can be a good direction to explore."}, {"section_index": "9", "section_name": "ACKNOWLEDGEMENTS", "section_text": "Thanks to the anonymous reviewers of ICLR 2017 who have provided thoughtful remarks and helped us revise the paper. We would also like to thank Sherjil Ozair, John Schulman, Yoshua Bengio, Sarath Chandar, Caglar Gulcehre and Charu Chauhan for useful feedback about the work."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Bikramjit Banerjee and Peter Stone. General game learning using knowledge transfer. In The 20th International Joint Conference on Artificial Intelligence, 2007.

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling.
The arcade learning environment: An evaluation platform for general agents. arXiv preprint arXiv:1207.4708, 2012.

Kimberly Ferguson and Sridhar Mahadevan. Proto-transfer learning in markov decision processes using spectral methods. Computer Science Department Faculty Publication Series, pp. 151, 2006.

Fernando Fernandez and Manuela Veloso. Probabilistic policy reuse in a reinforcement learning agent. In Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, pp. 720-727. ACM, 2006.

Vijay Konda and John Tsitsiklis. Actor-critic algorithms. In SIAM Journal on Control and Optimization, pp. 1008-1014. MIT Press, 2000.

George Konidaris, Ilya Scheidwasser, and Andrew G Barto. Transfer in reinforcement learning via shared features. The Journal of Machine Learning Research, 13(1):1333-1371, 2012.

Alessandro Lazaric and Marcello Restelli. Transfer from multiple mdps. In Advances in Neural Information Processing Systems, pp. 1746-1754, 2011.

Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Emilio Parisotto, Jimmy Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. CoRR, abs/1511.06342, 2015.

Martin L Puterman. Markov decision processes: Discrete stochastic dynamic programming. 1994.

Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. CoRR, abs/1606.04671, 2016.

Jonathan Sorg and Satinder Singh. Transfer via soft homomorphisms. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 741-748. International Foundation for Autonomous Agents and Multiagent Systems, 2009.

Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981.

Matthew E Taylor and Peter Stone. An introduction to intertask transfer for reinforcement learning. AI Magazine, 32(1):15, 2011.

Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3):279-292, 1992.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.

Matthew E Taylor and Peter Stone.
Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633-1685, 2009."}, {"section_index": "11", "section_name": "APPENDIX A: DETAILS OF THE NETWORK ARCHITECTURE IN VALUE TRANSFER EXPERIMENTS", "section_text": "For the source task expert DQNs, we use the same architecture as [Mnih et al. (2015)], where the input is 84 x 84 x 4, with 32 convolution filters of dimensions 8 x 8 and stride 4 x 4, followed by 64 convolution filters with dimensions 4 x 4 and stride 2 x 2, again followed by 64 convolution filters of size 3 x 3 and stride 1 x 1. This is then followed by a fully connected layer of 512 units and finally by a fully connected output layer with as many units as the number of actions in Pong (Freeway), which is 3. We use the ReLU nonlinearity in all the hidden layers.

With respect to the A2T framework architecture, we have experimented with two possible architectures:

- The base and attention networks following the NIPS architecture of Mnih et al. (2013), except that the output layer is a softmax for the attention network.
- The base and attention networks following the Nature architecture of Mnih et al. (2015), with a softmax output layer for the attention network.

Specifically, the NIPS architecture of Mnih et al. (2013) takes in a batch of 84 x 84 x 4 inputs, followed by 16 convolution filters of dimensions 8 x 8 with stride 4 x 4, 32 convolution filters with dimensions 4 x 4 and stride 2 x 2, a fully connected hidden layer of 256 units, and then the output layer. For the Selective Transfer with Blurring experiments described in Section 4.1, we use the second option above. For the other experiments in Section 4.2 and the additional experiments in the Appendix, we use the first option. The attention network has N + 1 outputs, where N is the number of source tasks.
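A sketch of how the Nature-style expert and attention networks described above could be written; the helper names are ours, and only the layer shapes follow the text.

```python
import torch.nn as nn

def conv_trunk():
    """Nature-DQN trunk for 84 x 84 x 4 inputs; output is 7 x 7 x 64 = 3136 features."""
    return nn.Sequential(
        nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        nn.Flatten())

def q_network(num_actions=3):
    """Expert / base network: Q-values for the 3 Pong (Freeway) actions."""
    return nn.Sequential(conv_trunk(),
                         nn.Linear(3136, 512), nn.ReLU(),
                         nn.Linear(512, num_actions))

def attention_network(num_sources):
    """Same trunk, but N + 1 softmax outputs for the attention weights."""
    return nn.Sequential(conv_trunk(),
                         nn.Linear(3136, 512), nn.ReLU(),
                         nn.Linear(512, num_sources + 1),
                         nn.Softmax(dim=-1))
```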
For all our experiments in Value Transfer, we used RMSProp, as in [Mnih et al. (2015)], for updating the gradient. For Policy Transfer, since the tasks were simple, stochastic gradient descent was sufficient to provide stable updates. We also use reward clipping, target networks and experience replay for our value transfer experiments in exactly the same way (with all hyperparameters retained) as [Mnih et al. (2015)]. A training epoch is 250,000 frames, and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report the average score over the completed episodes for each testing epoch. The average scores obtained this way are averaged over 2 runs with different random seeds. In the testing epochs, we use ε = 0.05 in the ε-greedy policy.

In all our experiments, we trained the architecture using the learning rates 0.0025 and 0.0005. In general, the lower learning rate provided more stable (lower variance) training curves. While comparing across algorithms, we picked the better performing learning rate of the two (0.0025 and 0.0005) for each training curve."}, {"section_index": "12", "section_name": "APPENDIX C: BLURRING EXPERIMENTS ON PONG", "section_text": "The experts are trained with blurring (hiding the ball) and a black background, as illustrated in APPENDIX E. Therefore, to compare the learning with that of a random network without any additional knowledge, we ran the baseline DQN on Pong with a black background too. Having a black background provides a rich contrast between the white ball and the black background, thereby making training easier and faster, which is why the performance curves in that setting are different from the other two settings reported for the Inverse-Pong and Freeway negative transfer experiments, where no blacking is done and Pong is played with a gray background. The blurring mechanism in Pong is illustrated in APPENDIX E."}, {"section_index": "13", "section_name": "APPENDIX D: BLURRING EXPERIMENTS ON BREAKOUT", "section_text": "Similar to our blurring experiment on Pong, we additionally ran another experiment on the Atari 2600 game Breakout to validate the efficiency of our attention mechanism. We consider a setup with two experts L1 and L2 along with our attention network. The experts L1 and L2 were trained by blurring the lower left and lower right quadrants of the Breakout screen, respectively. We don't have to make the background black like in the case of Pong, because the background is already black in Breakout, and direct blurring is sufficient for hiding the ball in the respective regions without introducing any contrasts. We blur only the lower part so as to make it easy for the agent to at least anticipate the ball based on the movement at the top. We empirically observed that blurring the top half (as well) makes it hard to learn any meaningful partially useful experts L1 and L2.

The goal of this experiment is to show that the attention network can learn suitable filters so as to dynamically adapt and learn to select the expert appropriate to the situation (game screen) in the task. The expert L1, which was blurred on the bottom left half, is bound to be weak at returning balls in that region, while L2 is expected to be weak on the right. This is in the same vein as the forehand-backhand example in Tennis and its synthetic simulation for Pong by blurring the upper and lower quadrants. During game play, the attention mechanism is expected to ignore L1 when the ball is in the bottom left half (while focusing on L2), and similarly to ignore L2 (while focusing on L1) when the ball is in the bottom right half. We learn experts L1 and L2 which score 42.2 and 39.8 respectively. Using the attention mechanism to select the correct expert, we were able to achieve a score of 94.5 after training for 5 epochs. Each training epoch corresponds to 250,000 decision steps, while the scores are averaged over completed episodes run for 125,000 decision steps. This shows that the attention mechanism learns to select the suitable expert. Though the performance is limited by the weaknesses of the respective experts, our goal is to show that the attention paradigm is able to take advantage of both experts appropriately. This is evident from the scores achieved by the standalone experts and by the attention mechanism. Additionally, we also present a visualization of the attention mechanism's weights assigned to the experts L1 and L2 during game play in APPENDIX G. The weights assigned are in agreement with what we expect in terms of selective attention. The blurring mechanism is visually illustrated in APPENDIX F.

APPENDIX E: BLURRING MECHANISM IN PONG - DETAILS

[Figure 10 panels: (a) ball in upper quadrant, (b) blurred upper quadrant, (c) ball in lower quadrant, (d) blurred lower quadrant.]

Figure 10: The figures above explain the blurring mechanism for the selective transfer experiments on Pong. The background of the screen is made black. Let X (84 x 84) denote an array containing the pixels of the screen. The paddle controlled by the agent is the one on the right. We focus on the two quadrants X1 = X[:42, 42:] and X2 = X[42:, 42:] of the Pong screen relevant to the agent-controlled paddle. To simulate an expert that is weak at returning balls in the upper quadrant, the portion of X1 up to the horizontal location of the agent's paddle, i.e., X1[:, :31], is blacked out, while similarly, for simulating weakness in the bottom quadrant, we blur the portion of X2 up to the agent paddle's horizontal location, i.e., X2[:, :31] = 0. Figures 10a and 10b illustrate the upper quadrant before and after blurring; 10c and 10d do the same for the lower quadrant. Effectively, blurring this way with a black screen is equivalent to hiding the ball (white pixels) in the appropriate quadrant where weakness is to be simulated. Hence, Figures 10b and 10d are the mechanisms used while training a DQN on Pong to hide the ball in the respective quadrants, so as to create the partially useful experts which are analogous to the forehand-backhand experts in Tennis. X[:a, :b] indicates the subarray of X with all rows up to row index a and all columns up to column index b.
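A sketch of the masking in NumPy, directly following the slices in the Figure 10 caption (the function name and region argument are ours):

```python
import numpy as np

def blur_pong(frame, region):
    """Black out part of an 84 x 84 Pong frame with a black background,
    which hides the (white) ball in the chosen quadrant."""
    x = frame.copy()
    if region == "upper":        # expert L1 is trained weak here
        x[:42, 42:73] = 0        # X1[:, :31] = 0, i.e., X[:42, 42:][:, :31]
    elif region == "lower":      # expert L2 is trained weak here
        x[42:, 42:73] = 0        # X2[:, :31] = 0
    return x
```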
APPENDIX F: BLURRING MECHANISM IN BREAKOUT - DETAILS

[Figure 11 panels: (a) ball in lower-left quadrant, (b) blurred lower-left quadrant, (c) ball in lower-right quadrant, (d) blurred lower-right quadrant.]

Figure 11: The figures above explain the blurring mechanism used for the selective transfer experiments on Breakout. The background of the screen is already black. Let X (84 x 84) denote an array containing the pixels of the screen. We focus on the two quadrants X1 = X[31:81, 4:42] and X2 = X[31:81, 42:80]. We perform blurring in each case by setting X1 = 0 and X2 = 0 for all pixels within them, for training L1 and L2 respectively. Effectively, this is equivalent to hiding the ball in the appropriate quadrants. Blurring X1 simulates weakness in the lower left quadrant, while blurring X2 simulates weakness in the lower right quadrant. We don't blur all the way down to the last row, to ensure the paddle controlled by the agent is visible on the screen. We also don't black out the rectangular border, with a width of 4 pixels, surrounding the screen. Figures 11a and 11b illustrate the lower left quadrant before and after blurring; 11c and 11d do the same for the lower right quadrant.

APPENDIX G: BLURRING ATTENTION VISUALIZATION ON BREAKOUT

Figure 12: Visualisation of the attention weights in the Selective Transfer with Attention experiment for Breakout: green and blue bars signify the attention probabilities for Expert-1 (L1) and Expert-2 (L2) respectively, on a scale of [0, 1]. We see that in the first two snapshots, the ball is in the lower right quadrant and, as expected, the attention is high on Expert-1, while in the third and fourth snapshots, the ball is in the lower left quadrant and hence the attention is high on Expert-2.

[Figure 13 panels: (a) Comparison of Sparse Pong to Normal Pong; (b) A2T with a positive and negative expert. Both panels plot average score versus training epoch.]

Figure 13: This experiment is a case study on a target task where the performance is limited by data availability. So far, we focused on experiments where the target task is to solve Pong (normal or black background) for Value Transfer, and Puddle Worlds for Policy Transfer.
In both these cases, a randomly initialized value (or policy) network learning without the aid of any expert network is able to solve the target task within a reasonable number of epochs (or iterations). We want to illustrate a case where solving the target task in reasonable time is hard and the presence of a favorable source task significantly impacts the speed of learning. To do so, we consider a variant of Pong as our target task. In this variant, only a small probability p of transition tuples (s, a, r, s') with non-zero reward r are added to the Replay Memory (and used for learning through random batch sampling). This way, the performance on the target task is limited by the availability of rewarding (positive or negative) transitions in the replay memory. This synthetically makes the target task of Pong a sparse reward problem, because the replay memory is largely filled with transition tuples that have zero reward. We do not use any prioritized sampling, so as to make sure the sparsity has a negative effect on learning to solve the target task. We use a version of Pong with a black background (as used in Section 4.1 for the blurring experiments) for faster experimentation. p = 0.1 was used for the plots illustrated above. Figure 13a clearly shows the difference between a normal Pong task without any synthetic sparsity and the new variant we introduce: the learning is much slower and is clearly limited by data availability even after 20 epochs (20 million frames) due to reward sparsity. Figure 13b compares the A2T setting with one positive expert (which expertly solves the target task) and one negative expert, learning from scratch, and direct fine-tuning on a negative expert. We clearly see the effect of having the positive expert among the source tasks speeding up the learning process significantly when compared to learning from scratch, and we also see that fine-tuning on top of a negative expert severely limits learning even after 20 epochs of training. We also see that the A2T framework is powerful enough to work in sparse reward settings and avoids negative transfer even in such cases, while also clearly learning to benefit from the presence of a target task expert among the source task networks. Importantly, this experiment demonstrates that transfer learning has a significant effect on tasks which may be hard (infeasible to solve within a reasonable training time) without any expert available. Further, A2T is also beneficial for such (sparse reward) situations where accessing the weights of an expert network is not possible, and only the outputs of the expert (policy or value function) can be used. Such synthetic sparse variants of existing tasks are a good way to explore future directions at the intersection of Inverse Reinforcement Learning and Reward-Based Learning, with A2T providing a viable framework for off-policy and on-policy learning."}]