Sk8csP5ex
[{"section_index": "0", "section_name": "THE LOSS SURFACE OF RESIDUAL NETWORKS: ENSEMBLES & THE ROLE OF BATCH NORMALIZATION", "section_text": "Etai Littwin & Lior Wolf\nEq.33|provides the asymptotic mean total number of critical points with non-diverging index k. It is presumed that the SGD algorithm will easily avoid critical points with a high index that have many descent directions, and maneuver towards low index critical points. We, therefore, investigate how the mean total number of low index critical points vary as the ensemble distribution embodied in er Jr>2 changes its shape by a steady increase in 3.\nFig.1(f) shows that as the ensemble progresses towards deeper networks, the mean amount of low index critical points increases, which might cause the SGD optimizer to get stuck in local minima This is, however, resolved by the the fact that by the time the ensemble becomes deep enough the loss function has already reached a point of low energy as shallower ensembles were more dominant earlier in the training. In the following theorem, we assume a finite ensemble such tha 1 Er2r ~ 0.\nTheorem 5. For any k E N, p > 1, we denote the solution to the following constrained optimization nroblems.\np e = 1 e* = argmax0g(R,e) s.t E r=2\nr = p otherwise\nThm.5|implies that any heterogeneous mixture of spin glasses contains fewer critical points of a. finite index, than a mixture in which only p interactions are considered. Therefore, for any distribu tion of e that is attainable during the training of a ResNet of depth p, the number of critical points is. lower than the number of critical points for a conventional network of depth p.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Residual Networks (He et al.]2015) (ResNets) are neural networks with skip connections. Thes. networks, which are a specific case of Highway Networks (Srivastava et al.]2015), present state. of the art results in the most competitive computer vision tasks including image classification anc object detection."}, {"section_index": "2", "section_name": "6 DISCUSSION", "section_text": "In this work, we use spin glass analysis in order to understand the dynamic behavior ResNets dis. play during training and to study their loss surface. In particular, we use at one point or another the. assumptions of redundancy in network parameters, near uniform distribution of network weights, in. dependence between the inputs and the paths and independence between the different copies of the. nput as described in Choromanska et al.[(2015a). The last two assumptions, i.e., the two indepen dence assumptions, are deemed in Choromanska et al.[(2015b) as unrealistic, while the remaining. are considered plausible\nOur analysis of critical points in ensembles (Sec. 5) requires all of the above assumptions. However, Thm. 1 and 2, as well as Lemma. 4, do not assume the last assumption, i.e., the independence between the different copies of the input. Moreover, the analysis of the dynamic behavior of residual nets (Sec. 4) does not assume any of the above assumptions.\nOur results are well aligned with some of the results shown in Larsson et al.(2016), where it is noted empirically that the deepest column trains last. This is reminiscent of our claim that the deeper networks of the ensemble become more prominent as training progresses. The authors of Larsson et al.(2016) hypothesize that this is a result of the shallower columns being stabilized at a certain point of the training process. 
In our work, we discover the exact driving force that comes into play.\nOur analysis reveals the mechanism for this dynamic behavior and explains the driving force behind it. This mechanism remarkably takes place within the parameters of Batch Normalization (Ioffe & Szegedy2015), which is mostly considered as a normalization and a fine-grained whitening mechanism that addresses the problem of internal covariate shift and allows for faster learning rates\nWe show that the scaling introduced by batch normalization determines the depth distribution in the virtual ensemble of the ResNet. These scales dynamically grow as training progresses, shifting the. effective ensemble distribution to bigger depths.\nIn addition, our work offers an insight into the mechanics of the recently proposed densely connecte. networks (Huang et al.[2016). Following the analysis we provide in Sec. 3, the additional shortcu paths decrease the initial capacity of the network by offering many more short paths from inpu to output, thereby contributing to the ease of optimization when training starts. The driving forc mechanism described in Sec. 4.2 will then cause the effective capacity of the network to increase\nThe main tool we employ in our analysis is spin glass models.Choromanska et al.(2015a) have created a link between conventional networks and such models, which leads to a comprehensive study of the critical points of neural networks based on the spin glass analysis of|Auffinger et al. (2013). In our work, we generalize these results and link ResNets to generalized spin glass models. These models allow us to analyze the dynamic behavior presented above. Finally, we apply the results of Auffinger & Arous (2013) in order to study the loss surface of ResNets.\nNote that the analysis presented in Sec. 3 can be generalized to architectures with arbitrary skip connections, including dense nets. This is done directly by including all of the induced sub networks in Eq.9] The reformulation of Eq.[10|would still holds, given that I, is modified accordingly.\n0k(R,e) W +w"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Deep Residual Networks present a premium in performance in comparison to con- ventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensembles are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network's depth, as training progresses, it becomes deeper and deeper. The main mechanism that con- trols the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models. which we also use in order to study the number of critical points in the optimization of Residual Networks.\nThe success of residual networks was attributed to the ability to train very deep networks when employing skip connections (He et al.| 2016). A complementary view is presented byVeit et al. (2016), who attribute it to the power of ensembles and present an unraveled view of ResNets that depicts ResNets as an ensemble of networks that share weights, with a binomial depth distribution around half depth. 
They also present experimental evidence that short paths of lengths shorter than half-depth dominate the ResNet gradient during training\nThe analysis presented here shows that ResNets are ensembles with a dynamic depth behavior When starting the training process, the ensemble is dominated by shallow networks, with depths. lower than half-depth. As training progresses, the effective depth of the ensemble increases. This. Increase in depth allows the ResNet to increase its effective capacity as the network becomes more and more accurate."}, {"section_index": "4", "section_name": "7 CONCLUSION", "section_text": "Ensembles are a powerful model for ResNets, which unravels some of the key questions that have. surrounded ResNets since their introduction. Here, we show that ResNets display a dynamic en semble behavior, which explains the ease of training such networks even at very large depths, while. still maintaining the advantage of depth. As far as we know, the dynamic behavior of the effective. capacity is unlike anything documented in the deep learning literature. Surprisingly, the dynamic mechanism typically takes place within the outer multiplicative factor of the batch normalization. module.\nA simple feed forward fully connected network N, with p layers and a single output unit is consid ered. Let n; be the number of units in layer i, such that no is the dimension of the input, and n, = 1 It is further assumed that the ReLU activation functions denoted by R( are used. The output Y of the network given an input vector x E Rd can be expressed as"}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "d p Y= (k) W. i=1 j=1 k=1\nAntonio Auffinger and Gerard Ben Arous. Complexity of random smooth functions on the high dimensional sphere. Annals of Probability, 41(6):4214-4247, 11 2013.\nDefinition 1. The mass o. f the network N is defined as i Y\nAnna Choromanska, Yann LeCun, and Gerard Ben Arous. Open problem: The landscape of the los surfaces of multilayer networks. In COLT, pp. 1756-1760, 2015b\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. arXiv preprint arXiv:1512.03385, 2015.\n(w) =EA[max(0,1-YxY)] La(w) =EA[[Yx-Y]]\nGao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks arXiv preprint arXiv:1608.06993, 2016\nwhere Y, is a random variable corresponding to the true label of sample x. In order to equate either loss with the hamiltonian of the p-spherical spin glass model, a few key approximations are made:\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015\nGustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net works without residuals. arXiv preprint arXiv:1605.07648, 2016\nA4 Spherical constraint - The following is assumed:\nGenevieve B Orr and Klaus-Robert Muller. Neural networks: tricks of the trade. Springer, 2003\nThese assumptions are made for the sake of analysis, and do not necessarily hold. The validity of these assumption was posed as an open problem in|Choromanska et al.[(2015b), where a different degree of plausibility was assigned to each. Specifically, A1, as well as the independence assumption of Aij, were deemed unrealistic, and A2 - A4 as plausible. For example, A1 does not hold since. each input x; is associated with many different paths and x1 = x2 = ...xiy. See|Choromanska. 
et al.(2015a) for further justification of these approximations."}, {"section_index": "6", "section_name": "A SUMMARY OF NOTATIONS", "section_text": "Table[1presents the various symbols used throughout this work and their meaning\nWe briefly summarize [Choromanska et al.(2015a), which connects the loss function of multilayer networks with the hamiltonian of the p spherical spin glass model, and state their main contributions and results. The notations of our paper are summarized in Appendix|A|and slightly differ from those inChoromanska et al.(2015a).\nwhere the first summation is over the network inputs x1...xd, and the second is over all paths from input to output. There are = I=1n such paths and Vi, xi1 = x2 = ...xiy. The variable Aij E {0,1} denotes whether the path is active, i.e., whether all of the ReLU units along this path are producing positive activations, and the product II%-1 wf' represents the specific weight .(k) confi guration w1, ..w?, multiplying x, given path j. It is assumed throughout the paper that the input variables are sampled i.i.d from a normal Gaussian distribution.\nAnna Choromanska. Mikael Henaff. Michael Mathieu, Gerard Ben Arous. and Yann LeCun. The loss surfaces of multilayer networks. In A1STATS, 2015a..\nThe variables A,; are modeled as independent Bernoulli random variables with a success probability p, i.e., each path is equally likely to be active. Therefore,\nd Y p EA[Y]= k Xij P i=1 j=1 k=1\nThe task of binary classification using the network V with parameters w is considered, using either the hinge loss Lh. r or the absolute loss L%:\nA2 Redundancy in network parameterization - It is assumed the set of all the network weights. [w1, w2...w contains only A unique weights such that A < N.. A3 Uniformity - It is assumed that all unique weights are close to being evenly distributed on the graph of connections defining the network N. Practically, this means that we assume every. node is adjacent to an edge with any one of the A unique weights..\nA 1 < w: Y i=1\nAndreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of relatively shallow networks. In NIPS, 2016.\nUnder A1-A4, the loss takes the form of a centered Gaussian process on the sphere SA-1(/A) Specifically, it is shown to resemble the hamiltonian of the a spherical p-. spin glass model given by:\nA 1 r 11 Hp,A(w) = Xi1 Wik A p- 2 i1...ip k=1"}, {"section_index": "7", "section_name": "SYMBOL", "section_text": "The dimensionality of the input x The output of layer i of network given input x The final output of the network V True label of input x Loss function of network V Hinge loss Absolute loss The depth of network V Weights of the network w E RA A positive scale factor such that ||w||2 = C Scaled weights such that w =- w The number of units in layers l > 0 The number of unique weights in the network The total number of weights in the network V The weight matrix connecting layer l - 1 to layer l in V. The hamiltonian of the p interaction spherical spin glass model. The hamiltonian of the general spherical spin glass model. A Total number of paths from input to output in network V yd Total number of paths from input to output in network N of length r Yr d ReLU activation function Bernoulli random variable associated with the ReLU activation functio Parameter of the Bernoulli distribution associated with the ReLU unit 3) multiplier associated with paths of length r in V. pnC VA Normalization factor Batch normalization multiplicative factor in layer l. 
The mean of the estimated standard deviation various elements in R(W\nwhere xi1... are independent normal Gaussian variables\nIn Auffinger et al.(2013), the asymptotic complexity of spherical p spin glass model is analyzed based on random matrix theory. In Choromanska et al.[(2015a) these results are used in order to shed light on the optimization process of neural networks. For example, the asymptotic complexity of spherical spin glasses reveals a layered structure of low-index critical points near the global op timum. These findings are then given as a possible explanation to several central phenomena found in neural networks optimization, such as similar performance of large nets, and the improbability of getting stuck in a \"bad' local minima.\nAs part of our work, we follow a similar path. First, a link is formed between residual networks anc the hamiltonian of a general multi-interaction spherical spin glass model as given by:.\np A Hp,(w)= Er II Xi1,i2...ir Wik A 2 r= i1,i2...ir=1 k=1\nwhere e1...ep are positive constants. Then, usingAuffinger & Arous(2013), we obtain insights or residual networks. The other part of our work studies the dynamic behavior of residual networks where we relax the assumptions made for the spin glass model.\nWe begin by establishing a connection between the loss function of deep residual networks and the hamiltonian of the general spherical spin glass model. We consider a simple feed forward fully connected network N, with ReLU activation functions and residual connections. For simplicity oi notations without the loss of generality, we assume n1 = ... = np = n. no = d as before. In our ResNet model, there exist p -- 1 identity connections skipping a single layer each, starting from the first hidden layer. The output of layer l > 1 is given by:\nNi(x) =R(W'Ni-1(x))+Ni-1(x\nProof of Lemma[1] There are a total of r paths of length r from input to output, and a total of Ar unique r length configurations of weights. The uniformity assumption then implies that each. configuration of weights is repeated Ir times. By summing over the unique configurations, and re. indexing the input we arrive at Eq.10.\np d Yr r y=LL r)(k W r=1 i=1 j=1 k=1\nProof of Lemma[] From[12 we have that S1,2.., is defined as a sum of r inputs. Since there are only p distinct inputs, it holds that for each 1,i2..., there exists a sequence Q = (at)i=1 E N such that -1 Q; = Xr, and Si1,2.i, = 1 Q,x. We, therefore, have that E[?,...,] = |||l3 Note that the minimum value of E[&? ?, 2..r] is a solution to the following:\nmin(E[?,...]) = mina(||a|2) s.ta1 -1 E N.\nDefinition 2. The mass of a depth r subnetwork in N is defined as wr =dY\nThe properties of redundancy in network parameters and their uniform distribution, as described ir Sec.2] allow us to re-index Eq.9"}, {"section_index": "8", "section_name": "DESCRIPTION", "section_text": "A 1 w?=1 i=1\nwhere W, denotes the weight matrix connecting layer l - 1 with layer l. Notice that the first hidden layer has no parallel skip connection, and so N1(x) = R(W' x). Without loss of generality, the scalar output of the network is the sum of the outputs of the output layer p and is expressed as\nwhereA.?) E {0,1} denotes whether path j of length r is open, and Vj, j', r, r' x, = x. The i3/. residual connections in W imply that the output Y is now the sum of products of different Iengths indexed by r. Since our ResNet model attaches a skip connection to every layer except the first.. 1 < r < p. 
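To make the unraveled view concrete, the following is a minimal sketch (not code from the paper): it drops the ReLU gates, so every path is active, and uses scalar layers (n = 1). It checks that the output of a p-layer residual chain with identity skips on layers 2..p equals the sum, over all 2^(p-1) input-output paths, of the product of the weights along each path, and that the number of paths of length r is C(p-1, r-1), matching gamma_r = C(p-1, r-1) n^r above.

import itertools
from math import comb

# Minimal sketch (not code from the paper): a linear, scalar "ResNet" with p layers,
# identity skips on layers 2..p, i.e. N_l(x) = w_l * N_{l-1}(x) + N_{l-1}(x) for l > 1
# and N_1(x) = w_1 * x.  ReLU gates are dropped here, so every path is active.
p = 5
w = [0.3, -0.7, 0.5, 1.1, -0.2]   # w[0] is the first (non-skippable) layer
x = 2.0

# Forward pass through the residual chain.
out = w[0] * x
for l in range(1, p):
    out = w[l] * out + out

# Unraveled view: for each of the p - 1 skippable layers choose either the weight
# branch (1) or the identity branch (0); a path of length r uses r - 1 weight
# branches plus the mandatory first layer.
unraveled = 0.0
paths_per_length = {r: 0 for r in range(1, p + 1)}
for choice in itertools.product([0, 1], repeat=p - 1):
    prod = w[0]
    for l, take_weight in enumerate(choice, start=1):
        if take_weight:
            prod *= w[l]
    r = 1 + sum(choice)
    paths_per_length[r] += 1
    unraveled += prod * x

print(abs(out - unraveled) < 1e-9)   # True: the two forms agree
print(all(paths_per_length[r] == comb(p - 1, r - 1) for r in range(1, p + 1)))

With width-n layers and the ReLU gates restored, each of these paths additionally carries n^r weight configurations and the Bernoulli activation variables of Eq. 10.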
See Sec.6[regarding models with less frequent skip connections..\nEach path of length r includes r 1 non-skip connections (those involving the first term in Eq.8 and not the second, identity term) out of layers l = 2.p. Therefore, ~r = (-1)nr. We define the following measure on the network:\nLemma 1. Assuming assumptions A2 - A4 hold, and E Z, then the output can be expressed after reindexing as:\nyr p ^ AT Y = i) Wik 12... r=1i1,i2...ir=1 j=1 k=1\nlim Bap) = H() + alog() Og p->0\nProof of Thm.2 For brevity, we provide a sketch of the proof. It is enough to show that limp->00 O17 = 0 for < 1. Ignoring the constants in the binomial terms. we have\nQ1P Q1p Q1 1 lim lim 9.- lim p->0o p->0o p->0o r=1\nXr p EA[Y] = II Wik r=1i1,i2...ir=1j=1 k=1\n/here z2 which can be expressed using the Legendre polynomial of order p:\nIn order to connect ResNets to generalized spherical spin glass models, we denote the variables\nA Si1,i2...ir Xi1,i2...ir I1,i2...ix En[?,12 j=1\nLemma 2. Assuming A2 - A3 hold, and n E N then V he following holds..\nProof of Lemma|4 For simplicity, we ignore the constants in the binomial coefficient, and assume er = () r. Notice that for * = (), we have that arg max,(er(B*)) = p, arg max,(er(*)) = 1 and arg max,(er(1)) = . From the monotonicity and continuity of r, any value 1 k p can be attained. The linear dependency (C) = pnC completes the proof. A\nOLN(x,w- g) dLN(x,w) dLN(x,w) aai 9 aai aai\nThe independence assumption A1 was not assumed yet, and[14|holds regardless. Assuming A4 and denoting the scaled weights w, = w;, we can link the distribution of Y to the distribution on x:\nA I Xi1,i2...ir Wik /d A i1,i2...ir=1 k=1 A > I Xi1,i?...ir W i1,i2...ir=1 k=1\nOLN(x,w - g) gp + gpl dai\nwhere C1, C2 are positiye constants that do not. ffect the optimization process\n-\nNote that since the input variables x1...xd are sampled from a centered Gaussian distribution (de pendent or not), then the set of variables x1,i2.... are dependent normal Gaussian variables.\nWe approximate the expected output EA(Y) with Y by assuming the minimal value in|13|holds. all weight configurations of a particular length in Eq. [10|will appear the same number of times. When A n, the uniformity assumption dictates that each configuration of weights would appear approximately equally regardless of the inputs, and the expectation values would be very close to\nJsing taylor series expansion:. dLn(x, w- g). dLN(x,w) dLN(x,w) (40) aLN(x,w) Substituting Vw - (gm + gp) in40|we have: dLN(x,w- gw) I < 0 (41) 9m a 9p 9m. + And hence: dLN(x,w - gw) (42 Finally: (43) 1 OLN(x,w) 2. Since paths of length m skip layer l, we have that .. I, 9p. Therefore: dLv(x,w - g) (44) 9m9p - n? The condition ||gpl|2 > ||gm||2 implies that gmgp + l|gpll2 > 0, completing the proof.\np A L I = I1 Wik r=1 i1,2...iz=1 k=1\ndLN(x,w- gw) 9m + gp)'(gm + gp) = lgm +gplI2 < 0\ngm+gp|l2)]=|i|(1+\nThe following lemma gives a generalized expression for the binary and hinge losses of the network\nLN(x) = C1 + CY\nWe denote the important quantities:\nn\ndLN(x,w) g = (mLm(x,w) +pLp(x,w)) ngm + pgp)' dc = (mLm(x,w) +pLp(x, lgm + pgp) mgm + pgp)\nTheorem 1. Assuming p E N, we have that.. 1\n0LN(x,w - gw mgm + pgp)'(gm + gp) dC -(m||gp|l2 + p||gp|l2 + (m +p)gp gm\n1 lim -arg max( p->0o\nTheorem 2. For any Q1 Q2, and assuming Q1p, Q2p p E N, it holds that.. 1+B\nQ2P lim -1 p->0 r=Q1P\nThm.2 implies that for deep residual networks, the contribution of weight products of order far. an ensemble of potentially shallow conventional nets. 
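As a numerical illustration of Thm. 1 and Thm. 2 (a sketch with assumed parameter values, taking epsilon_r proportional to C(p-1, r-1) * beta^r as the definitions above suggest), the following shows how the maximizer of the depth distribution sits near r = beta*p/(1+beta) and how most of the mass concentrates in a narrow band around it:

import numpy as np
from math import comb

# Illustrative sketch (parameter values are assumed, not taken from the paper):
# the depth distribution of the virtual ensemble, taking eps_r proportional to
# C(p-1, r-1) * beta**r and looking only at its shape.
def depth_distribution(p, beta):
    eps = np.array([comb(p - 1, r - 1) * beta ** r for r in range(1, p + 1)], float)
    return eps / eps.sum()

p = 100
for beta in (0.1, 0.5, 2.0):                       # cf. Fig. 1(a-c)
    eps = depth_distribution(p, beta)
    i = int(np.argmax(eps))                        # dominant path length is r = i + 1
    band = eps[max(0, i - p // 20): i + p // 20 + 1].sum()
    print(f"beta={beta}: argmax r = {i + 1}, "
          f"beta*p/(1+beta) = {beta * p / (1 + beta):.1f}, "
          f"mass within +/-5 of argmax = {band:.2f}")

Under this form, the dominant path length shifts from roughly a tenth of the depth at beta = 0.1 to roughly two thirds of it at beta = 2, reproducing the qualitative shift between the histograms of Fig. 1(a-c).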
The next Lemma shows that we can shift the effective depth to any value by simply controlling C..\neT(V\"_V)e eT(V+ V)\nLemma 4. For any integer 1 < k < p there exists a global scaling parameter C such tha arg max,(er(C)) = k.\nnaxe0k(R,e) < max\nThe expression for the output of a residual net in Eq.15 provides valuable insights into the machinery at work when optimizing such models. Thm.|1|and|2Jimply that the loss surface resembles that of ar ensemble of shallow nets (although not a real ensemble due to obvious dependencies), with variou depths concentrated in a narrow band. As noticed inVeit et al.(2016), viewing ResNets as ensembles of relatively shallow networks helps in explaining some of the apparent advantages of these models particularly the apparent ease of optimization of extremely deep models, since deep paths barely affect the overall loss of the network. However, this alone does not explain the increase in accuracy of deep residual nets over actual ensembles of standard networks. In order to explain the improvec performance of ResNets, we make the following claims:\nFig. 1(d) and 1(e) report the experimental results of a straightforward setting, in which the task is to classify a mixture of 10 multivariate Gaussians in 50D. The input is therefore of size 50. The loss employed is the cross entropy loss of ten classes. The network has 10 blocks, each containing. 20 hidden neurons, a batch normalization layer, and a skip connection. Training was performed on. 10,000 samples, using SGD with minibatches of 50 samples..\nAs noted in Sec. 4.2, the dynamic behavior can be present in the Batch Normalization multiplica. tive coefficient or in the weight matrices themselves. In the following experiments, it seems that\nThe model in Eq.16 has the form of a spin glass model, except for the dependency between the variables i1,i2...tr. We later use an assumption similar to A1 of independence between these vari- ables in order to link the two binary classification losses and the general spherical spin glass model However, for the results in this section, this is not necessary.\nis orthogonal to the weights. We have that L(x,w) (mLm(x, w) +pLp(x, w)). Using taylor ac series expansion we have: aLN(x,w- g) dLN(x,w) dLN(x,w) uVw (45) ac ac ac For the last term we have: dLN(x,w) V w g = (mLm(x,w) + pLp(x, w ac =(mLm(x,w) + pLp(x, W mgm + pgp)'g,(46) d n45 we have: dLN(x,w- gw) 0-(mgm+pgp)(gm+ gp) aC -(m|gp|l2+p|gp|l2+(m+p)ggm) (47) Proof of Thm[5]Inserting Eq.31|into Eq.[33|we have that: oq(r=2e?r(r-1) _=r(r-2) (48) r=2 e?r r=2e?r2 We denote the matrices V' and V\" such that Vf, = ro, and V/f = r(r -- 1)oj. We then have: eT(V\" _V')e (49) eT(V\"+ V')e maxe0k(R,e) < max min (V! - V) nax =0k(R,e*) (50)\nOLN(x,w- g) dLN(x,w) dLN(x,w) 9 ac ac ac\nThe series (er)P-1 determines the weight of interactions of a specific length in the loss surface. No- tice that for constant depth p and large enough , arg max. (er) = p. Therefore, for wide networks, where n and, therefore, are large, interactions of order p dominate the loss surface, and the effect of the residual connections diminishes. Conversely, for constant and a large enough p (deep net- works), we have that arg max,(er) < p, and can expect interactions of order r < p to dominate the loss. The asymptotic behavior of e is captured by the following lemma:\nAs the next theorem shows. 
the epsilons are concentrated in a narrow band near the maximal value\n2r(r-2\nA simple global scaling of the weights is, therefore, enough to change the loss surface, from an ensemble of shallow conventional nets, to an ensemble of deep nets. This is illustrated in Fig.1(a-c) for various values of . In a common weight initialization scheme for neural networks, C = - (Orr & Muller2003f[Glorot & Bengio|2010). With this initialization and A = n, = p and the maximal weight is obtained at less than half the network's depth limp->oo arg max,(er) < . Therefore, at the initialization, the loss function is primarily influenced by interactions of considerably lower order than the depth p, which facilitates easier optimization.\n1. The distribution of the depths of the networks within the ensemble is controlled by th scaling parameter C.\nFig.2|depicts the results. There are two types of plots: Fig. 2(a,c) presents for CIFAR-10 and CIFAR-100 respectively the magnitude of the various convolutional layers for multiple epochs (sim ilar in type to Fig. 1(d) in the paper). Fig.2(b,d) depict for the two datasets the mean of these norms over all convolutional layers as a function of epoch (similar to Fig. 1(e))\np d Yr LLL LN(x,w) =C1 +C2 r)k W r=1 i=1 j=1 k=1\nAs can be seen, the dynamic phenomenon we describe is very prominent in the public ResNe implementation when applied to these conventional datasets: the dominance of paths with fewe. skip connections increases over time. Moreover, once the learning rate is reduced in epoch 81 the phenomenon we describe speeds up\nIn Fig. 3|we present the multiplicative coefficient of the Batch Normalization when not absorbed. As future work, we would like to better understand why these coefficients start to decrease once the learning rate is reduced. As shown above, taking the magnitude of the convolutions into account the dynamic phenomenon we study becomes even more prominent at this point. The change o1 location from the multiplicative coefficient of the Batch Normalization layers to the convolutions themselves might indicate that Batch Normalization is no longer required at this point. Indeed Batch Normalization enables larger training rates and this shift happens exactly when the training. rate is reduced. A complete analysis is left for future work.\nNotice that the addition of a multiplier r indicates that the derivative is increasingly influenced by deeper networks."}, {"section_index": "9", "section_name": "4.1 BATCH NORMALIZATION", "section_text": "Batch normalization has shown to be a crucial factor in the successful training of deep residua networks. As we will show, batch normalization layers offer an easy starting condition for the. network, such that the gradients from early in the training process will originate from extremely. shallow paths.\nWe consider a simple batch normalization procedure, which ignores the additive terms, has the out- put of each ReLU unit in layer l normalized by a factor oj and then is multiplied by some parameter A. The output of layer l > 1 is therefore:\nR(WNi-1(x))+Ni-1(x) Ni(x) = 0\nwhere oj is the mean of the estimated standard deviations of various elements in the vector R(W,' Ni-1(x)). Furthermore, a typical initialization of batch normalization parameters is to set. Vi, i = 1. In this case, providing that units in the same layer have equal variance ot, the recursive relation E[Wi+1(x)?] = 1 + E[W(x)?] holds for any unit j in layer l. This, in turn, implies that the. 
output of the ReLU units should have increasing variance o? as a function of depth. Multiplying the weight parameters in deep layers with an increasingly small scaling factor , effectively reduces the influence of deeper paths, so that extremely short paths will dominate the early stages of opti-. mization. We next analyze how the weight scaling, as introduced by batch normalization, provides. a driving force for the effective ensemble to become deeper as training progresses..\nWe consider a simple network of depth p, with a single residual connection skipping p - m layers. We further assume that batch normalization is applied at the output of each ReLU unit as described in Eq.22 We denote by l1...lm the indices of layers that are not skipped by the residual connection.\nuntil the learning rate is reduced, the dynamic behavior is manifested in the Batch Normaliza- tion multiplicative coefficients and then it moves to the convolution layers themselves. We there- fore absorb the BN coefficients into the convolutional layer using the public code of https: //github.com/e-lab/torch-toolbox/tree/master/BN-absorber Note that the multiplicative coefficient of Batch Normalization is typically refereed to as y. However, throughout our paper, since we follow the notation of|Choromanska et al.[(2015a), y refers to the number of paths. The multiplicative factor of Batch normalization appears as A in Sec. 4.\n2. During training, C changes and causes a shift of focus from a shallow ensemble to deeper and deeper ensembles, which leads to an additional capacity. 3. In networks that employ batch normalization, C is directly embodied as the scale parameter X. The starting condition of X = 1 offers a good starting condition that involves extremely shallow nets.\nFor the remainder of Sec.4, we relax all assumptions, and assume that at some point in time the loss can be expressed:\nwhere C1, C2 are some constants that do not affect the optimization process. In order to gain addi tional insight into this dynamic mechanism, we investigate the derivative of the loss with respect to the scale parameter C. Using Eq.[9[for the output, we obtain:\np d 2r 0LN(x,w) rx(?A(g) II r)(k W ac r=1 i=1 j=1 k=1\n0.45 0.35 0.35 0.4 0.3 0.3 0.35 0.25 0.25 0.3 0.25 0.2 0.2 0.2 0.15 0.15 0.15 0.1 0.1 0.1 0.05 0.05 0.05 0 20 40 60 80 100 20 40 60 80 100 20 40 60 80 100 (a) (b) (c) 1.2 0.8 0.6 0.4 0.20 5000 10000 15000 20000 500 1000 1500 2000 (d) (e) (f)\nFigure 1: (a) A histogram of er(), r = 1..p, for = 0.1 and p = 100 . (b) Same for = 0.5. (c) Same for = 2. (d) Values (y-axis) of the batch normalization parameters X, (x-axis) for. 10 layers ResNet trained to discriminate between 50 multivariate Gaussians (see Appendix |C|for. more details). Higher plot lines indicate later stages of training. (e) The norm of the weights of a residual network, which does not employ batch normalization, as a function of the iteration. (f) The. 
asymptotic of the mean number of critical points of a finite index as a function of 3..\ndYm d Yp p N(x,w) =Xm m (m) II (m)(k) (m) s(p) II ,(p)(k) wij W xij 4 i=1 j=1 k=1 i=1 j=1 k=1 Lm(x,w) + Lp (x.u\nWe denote by w, the derivative operator with respect to the parameters w, and the gradient g = VwL(x, w) = gm + gp evaluated at point w..\nNorm of the weights of the convolution layers for multiple epochs for cifar10 Mean norm of convolution layers as a function of epoch for cifar1o 240 21 220 30 200 (pequosqe s!1 141 25 180 20 4C an 10 120 100 15 20 20 40 60 80 100 120 140 160 180 conv layer epoch (b) (a Norm of the weights of the convolution layers for multiple epochs for cifar100 Mean norm of convolution layers as a function of epoch for cifar100 350 21 41 60 300 250 S 40 30 150 100 20 25 20 40 60 80 100 120 140 160 180 conv layer epoch (d) c\n0Ln(x,w - g)\nFigure 2: (a,c) The Norm of the convolutional layers once the factors of the subsequent Batch. Normalization layers are absorbed, shown for CIFAR-10 and CIFAR-100 respectively. Each graph. s a different epoch, see legend. Waving is due to the interleaving architecture of the convolutiona ayers. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the convolutional. ayers' weights per epoch.\naLn(x,w- g) da\nThm.3 suggests that || will increase for layers l that do not have skip-connections. Conversely, if layer l has a parallel skip connection, then || will increase if ||gp||2 > l|gm|[2, where the later condition implies that shallow paths are nearing a local minima. Notice that an increase in |Aigl...lm results in an increase in [p], while [m] remains unchanged, therefore shifting the balance into deeper ensembles.\nThis steady increase of |], as predicted in our theoretical analysis, is also backed in experimen. tal results, as depicted in Fig.1[d). Note that the first layer, which cannot be skipped, behaves differently than the other layers. More experiments can be found in Appendix|C.\nIt is worth noting that the mechanism for this dynamic property of residual networks can also be. observed without the use of batch normalization, as a steady increase in the L2 norm of the weights as shown in Fig.1[e). In order to model this, consider the residual network as discussed above. without batch normalization layers. Recalling, ||w||2 = CA, w = w, the loss of this network is. expressed as:\nd Ym d Yp p LN(x,w) =Cm m) (m) (m)(k LL m) 1(p) I1 37(p)(k) xij xij wij i in i=1 j=1 k=1 i=1 j=1 k=1 Lm(x,w) + Lp(x,W\n0LN(x,w - g (m|gm|l2 + pl|gp|l2 + (m + p)gp gm) dc\nThm.4|indicates that if either l|gpl|2 or l|gml|2 is dominant (for example, near local minimas of the shallow network, or at the start of training), the scaling of the weights C will increase. This expansion will, in turn, emphasize the contribution of deeper paths over shallow paths, and in- crease the overall capacity of the residual network. This dynamic behavior of the effective depth of residual networks is of key importance in understanding the effectiveness of these models. 
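The measurement behind Fig. 2 folds each Batch Normalization multiplicative factor into the preceding convolution and then tracks the per-layer weight norm across epochs. The authors used the torch-toolbox BN-absorber script for this; the following is only a generic numpy sketch of the standard folding rule (additive BN terms and biases are ignored, matching the simplified normalization of Eq. 22), with random arrays standing in for a trained layer:

import numpy as np

def absorb_bn(conv_w, gamma, running_var, eps=1e-5):
    """Fold a BatchNorm multiplicative factor into the preceding convolution.

    conv_w:      (out_channels, in_channels, kh, kw) convolution weights
    gamma:       (out_channels,) BN multiplicative coefficients (lambda in Sec. 4)
    running_var: (out_channels,) BN running variances (sigma^2 in Eq. 22)
    """
    scale = gamma / np.sqrt(running_var + eps)
    return conv_w * scale[:, None, None, None]

# Toy usage with random tensors standing in for a trained layer's parameters.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32, 3, 3))
gamma = rng.uniform(0.5, 2.0, size=64)
var = rng.uniform(0.5, 1.5, size=64)

w_folded = absorb_bn(w, gamma, var)
# Per-layer norms before and after absorption; tracking the folded norm per epoch
# gives curves of the kind shown in Fig. 2(a,c).
print(np.linalg.norm(w.ravel()), np.linalg.norm(w_folded.ravel()))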
While optimization starts off rather easily with gradients largely originating from shallow paths, the overall advantage of depth is still maintained by the dynamic increase of the effective depth.\nWe now present the results of[Auffinger & Arous(2013) regarding the asymptotic complexity in the case of limA->oo of the multi-spherical spin glass model given by:.\nA He,^=- Er A r- 2 r=2 i1,...ir=1\nA 8 1 e=1 w=1, ^ i=1 r=2\nFigure 3: The norms of the multiplicative Batch Normalization coefficient vectors. (a,c) The Norn of the coefficients, shown for CIFAR-10 and CIFAR-100 respectively. Each graph is a differen. epoch (see legend). Since there is no monotonic increase between the epochs in this graph, it i harder to interpret. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the. multiplicative factors per epoch.\nOX = er(r-1) a2 = r=2 r=2\nNote that for the single interaction spherical spin model a2 = 0. The index of a critical point of He,A is defined as the number of negative eigenvalues in the hessian V2 He.A evaluated at the critical. point w.\nDefinition 4. For any O < k < A and u E R, we denote the random number Crtx.k(u, e) as the number of critical points of the hamiltonian in the set BX = {AX|X E (-oo, u)} with index k\nCrtA.k(u, e) = 1{He,A E Au}1{i(V2He,A)=k w:VHe,A=0\nBatch Normalization gamma per layer for multiple epochs for cifar10 Mean norm of Batch Normalization gamma vectors as a function of epoch for cifar10 160 1 10 150 161 140 30 120 110 100 10 20 25 30 35 0 20 40 80 100 120 140 15 60 160 180 conv layer epoch (a (b) Batch Normalization gamma per layer for multiple epochs for cifar100 Mean norm of Batch Normalization gamma vectors as a function of epoch for cifar100 20 200 21 18 41 190 16 180 14 162 12 170 mnea 160 150 140 25 130 10 15 30 35 20 40 80 20 60 100 120 140 160 180 conv layer epoch (d) c\nwhere J,... are independent centered standard Gaussian variables, and e = (er)r>2 are positive. real numbers such that r=2 er2r < oo. A configuration w of the spin spherical spin-glass model is a vector in RA satisfying the spherical constraint:.\nA 1. (29) =1 A =1 r=2 Note that the variance of the process is independent of e: A E[H?.A] =A1-re? 2 = A =A (30) Definition 3. We define the following:. 8 U' =) e,r, v\" =er(r-1), Q =v\" + v' (31)\n8 A 8 E[H?,A]= A1-r r e? w?)=^ e=A r=2 i=1 r=1"}]
BJbD_Pqlg
[{"section_index": "0", "section_name": "HUMAN PERCEPTION IN COMPUTER VISION / CONFERENCE E SUBMISSIONS", "section_text": "Poggio, 1999; Serre, 2014). However, the computation in trained DNN models is quite general- purpose (Huh et al., 2016; Yosinski et al., 2014) and offers unparalleled accuracy in recognition tasks (LeCun et al., 2015). Since visual computations are, to some degree, task- rather than architecture- dependent, an accurate and general-purpose DNN model may better resemble biological processing than less accurate biologically plausible ones (Kriegeskorte, 2015; Yamins & DiCarlo, 2016). We support this view by considering a controlled condition in which similarity is not confounded with task difficulty or categorization consistency.\nGoogLeNet ResNet-152 CaffeNet 90 90 86 89 90 82 90 90 65 60 57 57 60 60 4347 33 33 30 30 25 30 Figure 7: Background context for different DNN models (following figure CaffeNet iter 1 CaffeNet iter 50K CaffeNet iter 310K 90 90 86 89 90 75 90 90 r 74 61 57 60 50 60 60 10 33 30 30 30 0 0 eonnnnnnonns Gabor Decomposition Steerable Pyramid 82 85 90 90 80 62 60 55 60 448 Consistent 35 Inconsistent 30 30 10 'wnu 0 Figure 8: Background context for baseline DNN models (following figure 2). \"Caff is reproduced from Figure 7.\nRon Dekel\nDepartment of Neurobiology. Weizmann Institute of Science Rehovot. PA 7610001. Israel"}, {"section_index": "1", "section_name": "6.3.2 USE IN PSYCHOPHYSICS", "section_text": "Our results imply that trained DNN models have good predictive value for outcomes of psychophys ical experiments, permitting a zero-cost first-order approximation. Note, however, that the scope of such simulations may be limited, since learning (Sagi, 2011) and adaptation (Webster, 2011) were not considered here.\nComputer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprece- dented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Bio- logical vision (learned in life and through evolution) is also accurate and general- purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the hu- man system-level computation of visual perception has DNN correlates and con- sidered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation. crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human percep tion are a consequence of architecture-independent visual learning.\nFigure 7: Background context for different DNN models (following figure 2)"}, {"section_index": "2", "section_name": "6.3.3 USE IN ENGINEERING (A PERCEPTUAL LOSS METRIC", "section_text": "As proposed previously (Dosovitskiy & Brox, 2016; Johnson et al., 2016; Ledig et al., 2016), the saliency of small image changes can be estimated as the representational distance in trained DNNs Here, we quantified this approach by relying on data from a controlled psychophysical experiment (Alam et al., 2014). We found the metric to be far superior to simple image statistical properties and on par with a detailed perceptual model (Alam et al., 2014). 
This metric can be useful in image. compression, whereby optimizing degradation across image sub-patches by comparing perceptual loss may minimize visual artifacts and content loss.."}, {"section_index": "3", "section_name": "OUICK EXPERT SUMMARY", "section_text": "CaffeNet iter 1 CaffeNet iter 50K CaffeNet iter 310K 90 90 86 89 90 74 75 90 90 61 50 57 60 60 60 40 33 29 30 16 15 30 30 0 0 4 1 0 0 0 connnnnnonns Gabor Decomposition Steerable Pyramid 82 85 90 90 80 62 55 60 60 Consistent 4248 35 28 Inconsistent 30 30 8 19 'wnu 5 0 O Shape rowding Segmentation"}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "Considering the learned computation of ImageNet-trained DNNs, we find\nWe thank Yoram Bonneh for his valuable questions which led to much of this work\nLarge computation changes for perceptually salient image changes (Figure 1). Gestalt: segmentation, crowding, and shape interactions in computation (Figure 2) Contrast constancy: bandpass transduction in first layers is later corrected (Figure 3)."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Md Mushfiqul Alam. Kedarnath P Vilankar, David J Field, and Damon M Chandler. Local masking in natural images: A database and analysis. Journal of vision, 14(8):22-, jan 2014. ISSN 1534- 7362. doi: 10.1167/14.8.22\nThese properties are reminiscent of human perception, perhaps because learned general-purpos classifiers (human and DNN) tend to converge\nDeep neural networks (DNNs) are a class of computer learning algorithms that have become widely used in recent years (LeCun et al., 2015). By training with millions of examples, such models achieve unparalleled degrees of task-trained accuracy (Krizhevsky et al., 2012). This is not unprece. dented on its own - steady progress has been made in computer vision for decades, and to some degree current designs are just scaled versions of long-known principles (Lecun et al., 1998). In pre- vious models, however, only the design is general-purpose, while learning is mostly specific to the context of a trained task. Interestingly, for current DNNs trained to solve a large-scale image recog nition problem (Russakovsky et al., 2014), the learned computation is useful as a building block for drastically different and untrained visual problems (Huh et al., 2016; Yosinski et al., 2014).\nFigure 8: Background context for baseline DNN models (following figure 2). \"CaffeNet iter 310K' is reproduced from Figure 7.\nFor example, orientation- and frequency-selective features (Gabor patches) can be considered general-purpose visual computations. Such features are routinely discovered by DNNs (Krizhevsky et al., 2012; Zeiler & Fergus, 2013), by other learning algorithms (Hinton & Salakhutdinov, 2006.\nMatteo Carandini, Jonathan B Demb, Valerio Mante, David J Tolhurst, Yang Dan, Bruno A Ol shausen, Jack L Gallant, and Nicole C Rust. Do we know what the early visual system does? The Journal of Neuroscience, 25(46):10577-97, nov 2005. ISsN 1529-2401. doi: 10.1523/JNEUROSCI.3726-05.2005."}, {"section_index": "6", "section_name": "ABSTRACT", "section_text": "Another fascinating option is the formation of hypotheses in terms of mathematically differentiable trained-DNN constraints, whereby it is possible to efficiently solve for the visual stimuli that opti mally dissociate the hypotheses (see Gatys et al. 2015a;b; Mordvintsev et al. 2015 and note Goodfel low et al. 2014; Szegedy et al. 2013). 
The conclusions drawn from such stimuli can be independent of the theoretical assumptions about the generating process (for example, creating new visual illu sions that can be seen regardless of how they were created).\nAntoine Del Cul, Sylvain Baillet, and Stanislas Dehaene. Brain dynamics underlying the nonlinea. threshold for access to consciousness. PLoS Biol, 5(10):e260, 2007. ISsN 1545-7885\nAs an extension, general-purpose computations are perhaps of universal use. For example, a dimen. sionality reduction transformation that optimally preserves recognition-relevant information may constitute an ideal computation for both DNN and animal. More formally, different learning algo. rithms with different physical implementations may converge to the same computation when similar (or sufficiently general) problems are solved near-optimally. Following this line of reasoning, DNN. models with good general-purpose computations may be computationally similar to biological vi. sual systems, even more so than less accurate and less general biologically plausible simulations (Kriegeskorte, 2015; Yamins & DiCarlo, 2016).\nGoker Erdogan and Robert A Jacobs. A 3D shape inference model matches human visual objec similarity judgments better than deep convolutional neural networks. In Proceedings of the 38th Annual Conference of the Cognitive Science Society. Cognitive Science Society Austin, TX, 2016\nVGG-19 GoogLeNet ResNet-152 101 t Easy (ref.). 10 X 10 W8 + *+ X a 80 X X +* * * * * + * 102 x++ Hard + X 102 * 10 10-2 101 10~2 101 10-2 101 f1 CaffeNetiter 1. CaffeNetiter soK CaffeNetiter 310K 101 101 10 1 Q$8KEK O D \\0+* *< f2 Ox|* *+ X * XX 10 0 b ** 10 10 X X 10~2 10~1 10~2 101 102 101 * d GaborDecomposition SteerablePyramid Humanperception 70 0 101 * (%) errreeennney x e 101 x + 50 + +T x C X 0 40 10-2 10~1 10~2 101 60 62 64 66 68 70 MI Easy (bits) AccuracyEasy (%)\n10 Easy 10 B X X tOx+#K + a D X X +* * \\O + * * * + 10 +x Hard + + Xx + X * 102 102 10-2 101 102 10~1 102 101\nRO 10 O X X XOx+RK + a 80 X + O +* * * + \\C * 10 X Hard + X C 10 10 10~2 10~1 102 101 102 10~1 f1 CaffeNetiter1 CaffeNetiter soK CaffeNetiter 310K 101 10 101 O&KEK 10 O 0o f2 * OX* A 0 XX +x $+ 0 b 10 ** 10 104 X 10~2 101 10-2 10~1 10-2 101 * GaborDecomposition SteerablePyramid Humanperception 70 0 * 10 (%) perreeennney X e 10 60 x + 50 + x 1 c 104 10 40 10-2 10-1 102 101 60 62 64 66 68 70 AccuracyEasy (%) MI Easy (bits)\nMichele Fabre-Thorpe, Ghislaine Richard, and Simon J Thorpe. Rapid categorization of natural images by rhesus monkeys. Neuroreport, 9(2):303-308, 1998. 1SSN 0959-4965\nDavid J Field, Anthony Hayes, and Robert F Hess. Contour integration by the human visual system. evidence for a local association field. Vision research. 33(2):173-193. 1993. 1SsN 0042-6989.\nItzhak Fogel and Dov Sagi. Gabor filters as texture discriminator. Biological cybernetics, 61(2): 103-113. 1989. ISSN 0340-1200.\nLeon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A Neural Algorithm of Artistic Style aug 2015a.\nLeon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. may 2015b.\nHere, we quantify the similarity between human visual perception, as measured by psychophys ical experiments, and individual computational stages (layers) in feed-forward DNNs trained on a large-scale image recognition problem (ImageNet LSVRC). 
Comparison is achieved by feeding the experimental image stimuli to the trained DNN and comparing a DNN metric (mean mutual information or mean absolute change) to perceptual data. The use of reduced (simplified and typi cally non-natural) stimuli ensures identical inherent task difficulty across compared categories and prevents confounding of categorization consistency with measured similarity. Perception, a system level computation, may be influenced less by the architectural discrepancy (biology vs. DNN) than are neural recordings\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor- mation Processing Systems, pp. 2672-2680. 2014.\nJon Gottesman, Gary S Rubin, and Gordon E Legge. A power law for perceived contrast in human vision. Vision research, 21(6):791-799, 1981. ISSN 0042-6989.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. dec 2015.\nFrom a perceptual perspective, an image change of fixed size has different saliency depending on im age context (Polat & Sagi, 1993). To investigate whether the computation in trained DNNs exhibits similar contextual modulation, we used the Local Image Masking Database (Alam et al., 2014), ir which 1080 partially-overlapping images were subjected to different levels of the same random ad ditive noise perturbation, and for each image, a psychophysical experiment determined the thresholc noise level at which the added-noise image is discriminated from two noiseless copies at 75% (Fig ure 1a). Threshold is the objective function that is compared with an L1-distance correlate in the DNN representation. The scale of measured threshold was:\nMinyoung Huh, Pulkit Agrawal, and Alexei A. Efros. What makes ImageNet good for transfe learning? aug 2016.\nstd (noise) 20 : l0g10 T\nwhere std (noise) is the standard deviation of the additive noise, and T is the mean image pixe value calculated over the region where the noise is added (i.e. image center).\nLee et al., 2008: 2009; Olshausen & Field, 1997), and are extensively hard-coded in computer vision (Jain & Farrokhnia, 1991). Furthermore, a similar computation is believed to underlie the spatial re-. sponse properties of visual neurons of diverse animal phyla (Carandini et al., 2005; DeAngelis et al., 1995; Hubel & Wiesel, 1968; Seelig & Jayaraman, 2013), and is evident in human visual perception (Campbell & Robson, 1968; Fogel & Sagi, 1989; Neri et al., 1999). This diversity culminates in sat- isfying theoretical arguments as to why Gabor-like features are so useful in general-purpose vision (Olshausen, 1996; Olshausen & Field, 1997).\nRelated work seems to be consistent with computation convergence. First, different DNN training regimes seem to converge to a similar learned computation (Li et al., 2015; Zhou et al., 2014). Sec. ond, image representation may be similar in trained DNN and in biological visual systems. That is, when the same images are processed by DNN and by humans or monkeys, the final DNN com putation stages are strong predictors of human fMRI and monkey electrophysiology data collected from visual areas V4 and IT (Cadieu et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014). 
Furthermore, more accurate DNN models exhibit stronger predictive power (Cadieu et al., 2014; Dubey & Agarwal, 2016; Yamins et al., 2014), and the final DNN computation stage is even a strong predictor of human-perceived shape discrimination (Kubilius et al., 2016). However, some caution is perhaps unavoidable, since measured similarity may be confounded with catego rization consistency, view-invariance resilience, or similarity in the inherent difficulty of the tasks undergoing comparison. A complementary approach is to consider images that were produced by optimizing trained DNN-based perceptual metrics (Gatys et al., 2015a;b; Johnson et al., 2016; Ledig et al., 2016), which perhaps yields undeniable evidence of non-trivial computational similarity, al- though a more objective approach may be warranted.\nFigure 9: Background context for Shape. Shown for each model is the measured MI for the six 'Hard\"' shapes as a function of the MI for the \"Easy\" shape. The last panel shows an analagous comparison measured in human subjects by Weisstein & Harris (1974). A data point which lies below the dashed diagonal indicates a configuration for which discriminating line location is easier for the Easy shape compared with the relevant Hard shape.\nHinton and Salakhutdinov. Reducing the dimensionality of data with neural networks. Science (Nes York. N.Y.). 313(5786):504-7. iul 2006. ISSN 1095-9203. doi: 10.1126/science.1127647\na b 50 R2=0.6 L1 = 37.8 etne 30 E 10 L1 = 23.4 -60 -40 -20 0 Perceptual threshold (dB) -40 dB -30 dB -20 dB -10 dB 0 dB Perceptual threshold c d <-45 dB -40 dB -30 dB -20 dB >-15 dB 50% 0.6 orooreeor 0.4 r 66% 0.2 100% Computational stage\nJustin Johnson., Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer an super-resolution. arXiv preprint arXiv:1603.08155, 2016\ndata conv1 fc8 2 10-prob 6 2 8 1.5 Caarer 1.5 6 1 4 2 0.5 0.5 0 0 0 0 1 7 75 7 75 1 7 75 7 75 Contrast 1 1 1 data conv1 cIs3 fc 10-prob 0.72 6 0.5 15 0.36 1.5 0.26 1.5 10 0.18 0.126 1 1 0.086 5 0.062 0.5 0.5 0.046 0.032 0 0 0 0 0.024 1 7 75 1 7 75 1 7 75 1 7 75 0.0156 0.0078 data conv1 fc1000 orob 6 8 1.5 1.5 6 1 7 4 2 0.5 0.5 0 0 0 0 1 7 75 1 7 75 1 7 75 1 7 75 Frequency (cycles/image)\nFigure 1: Predicting perturbation thresholds. a, For a fixed image perturbation, perceptual detection. threshold (visualized by red arrow) depends on image context. b, Measured perceptual threshold is. correlated with the average L1 change in DNN computation due to image perturbation (for DNN. model VGG-19, image scale=100%). c, Explained variability (R2) of perceptual threshold data. when L1 change is based on isolated computational layers for different input image scales. Same VGG-19 model as in (b). X-axis labels: data refers to raw image pixel data, conv*_1 and fc_* are. the before-ReLU output of a convolution and a fully-connected operation, respectively, and prob. is the output class label probabilities vector. d, Example images for whcih predicted threshold in b. is much higher than perceptually measured (\"Overshoot\"', where perturbation saliency is better than. predicted), or vise versa (\"Undershoot'). Examples are considered from several perceptual threshold. ranges (2 dB of shown number).\nYann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444 may 2015. ISSN 0028-0836. doi: 10.1038/nature14539.\nThe DNN correlate of perceptual threshold we used was the average L1 change in DNN computatio. between added-noise images and the original, noiseless image. 
Formally,.\nLin(I)=aiI+ noise(n))-ai(I)\nFigure 10: Contrast sensitivity (following Figure 3) for DNN architectures CaffeNet, GoogLeNet and ResNet-152.\nwhere a, (X) is the activation value of neuron i during the DNN feedforward pass for input image. X, and the inner average (denoted by bar) is taken over repetitions with random n-sized noise (noise. is introduced at random phase spectra in a fixed image location, an augmentation that follows the between-image randomization described by Alam et al., 2014; the number of repetitions was 10 or more). Unless otherwise specified, the final L1 prediction is Ln averaged across noise levels (-40. to 25 dB with 5-dB intervals) and computational neurons (first within and then across computa tional stages). Using L1 averaged across noise levels as a correlate for the noise level of perceptua. threshold is a simple approximation with minimal assumptions..\nYixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent Learning: Dc. different neural networks learn the same representations? arXiv preprint arXiv:1511.07543. 2015\nYucheng Liu and Jan P. Allebach. Near-threshold perceptual distortion prediction based on optimal structure classification. In 2016 IEEE International Conference on Image Processing (ICIP), pp 106-110. IEEE. sep 2016. 1SBN 978-1-4673-9961-6. doi: 10.1109/ICIP.2016.7532328\nResults show that the L1 metric is correlated with the perceptual threshold for all tested DNN archi tectures (Figure 1b, 4a-c). In other words, higher values of the L1 metric (indicating larger changes in DNN computation due to image perturbation, consistent with higher perturbation saliency) are\nTomer Livne and Dov Sagi. Configuration influence on crowding. Journal of Vision, 7(2):4, 2007 ISSN 1534-7362\nHonglak Lee, Chaitanya Ekanadham, and Andrew Y. Ng. Sparse deep belief net model for visual area V2. In Advances in Neural Information Processing Systems, pp. 873-880, 2008.\nHonglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09, pp. 1-8, New York. New York, USA, jun 2009. ACM Press. ISBN 9781605585161. doi: 10.1145/1553374.1553453\nPeter Neri, Andrew J Parker, and Colin Blakemore. Probing the human stereoscopic system witl reverse correlation. Nature, 401(6754):695-698, 1999. ISSN 0028-0836.\nBruno A Olshausen. Emergence of simple-cell receptive field properties by learning a sparse cod for natural images. Nature, 381(6583):607-609, 1996. 1SSN 0028-0836.\nDenis G Pelli, Melanie Palomares, and Najib J Majaj. Crowding is unlike ordinary masking: Distin guishing feature integration from detection. Journal of vision, 4(12):12, 2004. ISsN 1534-7362\nNoga Pinchuk-Yacobi, Ron Dekel, and Dov Sagi. Expectation and the tilt aftereffect. Journal o vision, 15(12):39, sep 2015. 1SSN 1534-7362. doi: 10.1167/15.12.39\na Human VGG-19, prob 1 Connst connst 0.1 0.1 0.01 0.01 0.18 2.18 26.79 0.18 2.18 26.79 Frequency Frequency (cycles/deg. of vis. field) (0.18 * cycles/image) b Human VGG-19, conv1 1 coonest connst 0.1 0.1 0.01 0.01 0.09 1.09 13.39 0.09 1.09 13.39 Frequency Frequency (cycles/deg. of vis. field) (0.09 * cycles/image)\nNoga Pinchuk-Yacobi, Hila Harris, and Dov Sagi. Target-selective tilt aftereffect during texture learning. Vision research, 124:44-51, 2016. 1SSN 0042-6989\nU Polat and D Sagi. 
Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments. Vision Research, 33(7):993-9, may 1993. ISSN 0042-6989.

R T Pramod and S P Arun. Do computational models differ systematically from human object perception? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1601-1609, 2016.

To quantify and compare predictive power, we considered the percent of linearly explained variability (R2). For all tested DNN architectures, the prediction explains about 60% of the perceptual variability (Tables 1, 2; baselines at Tables 3-5), where inter-person similarity, representing the theoretical maximum, is 84% (Alam et al., 2014). The DNN prediction is far more accurate than a prediction based on simple image statistical properties (e.g. RMS contrast), and is on par with a detailed perceptual model that relies on dozens of psychophysically collected parameters (Alam et al., 2014). The Spearman correlation coefficient is much higher compared with the perceptual model (an absolute SROCC value of about 0.79 compared with 0.70, Table 1), suggesting that the L1 metric gets the order right but not the scale. We did not compare these results with models that fit the experimental data (e.g. Alam et al., 2015; Liu & Allebach, 2016), since the L1 metric has no explicit parameters. Also, different DNN architectures exhibited high similarity in their predictions (R2 of about 0.9, e.g. Figure 4d).

Prediction can also be made from isolated computational stages, instead of across all stages as before. This analysis shows that the predictive power peaks mid-computation across all tested image scales (Figure 1c). This peak is consistent with the use of middle DNN layers to optimize perceptual metrics (Gatys et al., 2015a;b; Ledig et al., 2016), and is reminiscent of cases in which low- to mid-level vision is the performance-limiting computation in the detection of at-threshold stimuli (Campbell & Robson, 1968; Del Cul et al., 2007).

Johannes D Seelig and Vivek Jayaraman. Feature detection and orientation tuning in the Drosophila central complex. Nature, 503(7475):262-266, 2013. ISSN 0028-0836.

Figure 11: Comparison of contrast sensitivity. Shown are iso-output curves, for which perceived contrast is the same (Human), or for which the L1 change relative to a gray image is the same (DNN model VGG-19). To obtain a correspondence between human frequency values (given in cycles per degree of visual field) and DNN frequency values (given in cycles per image), a scaling was chosen such that the minimum of the blue curve occurs at the same frequency value. Human data is for subject M.A.G. as measured by Georgeson & Sullivan (1975).

Thomas Serre. Hierarchical Models of the Visual System. In Encyclopedia of Computational Neuroscience, pp. 1-12. Springer, 2014. ISBN 1461473209.

Finally, considering the images for which the L1-based prediction has a high error suggests a factor which causes a systematic inconsistency with perception (Figures 1d, 6). This factor may be related to the mean image luminance: by introducing noise perturbations according to the scale of Equation 1, a fixed noise size (in dB) corresponds to smaller pixel changes in dark compared with bright images.
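To make the L1 correlate of Equation 1 concrete, the following is a minimal numpy sketch of how it could be computed from activations that have already been extracted. The container layout and names are illustrative assumptions, not the authors' MATLAB/MatConvNet code.

```python
import numpy as np

def l1_perturbation_correlate(acts_clean, acts_noisy):
    """Sketch of the per-image L1 correlate (Eq. 1) for a single noise level.

    acts_clean : dict mapping layer name -> 1-D array of activations a_i(I).
    acts_noisy : dict mapping layer name -> 2-D array of shape
                 (n_repetitions, n_neurons) holding a_i(I + noise(n)).
    Both containers are hypothetical; in the paper, activations come from
    DNN forward passes.
    """
    per_layer = []
    for layer, clean in acts_clean.items():
        noisy_mean = acts_noisy[layer].mean(axis=0)   # bar: average over noise repetitions
        per_neuron = np.abs(noisy_mean - clean)       # |mean_rep a_i(I+noise) - a_i(I)|
        per_layer.append(per_neuron.mean())           # average within the computational stage
    return float(np.mean(per_layer))                  # then average across stages

# The final prediction additionally averages this value across noise levels
# (-40 to 25 dB in 5-dB steps), as described in the text.
```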
(Using this scale reflects an assumption of multiplicative rather than additive conservation; this assumption may be justified for the representation at the final, but perhaps not the intermediate, computational stages, considering the log-linear contrast response discussed in Section 5.) Another factor may be the degree to which image content is identifiable.

Eero P Simoncelli and William T Freeman. The steerable pyramid: a flexible architecture for multi-scale derivative computation. In ICIP (3), pp. 444-447, 1995.

Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. sep 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Andrew B Watson and Albert J Ahumada. Predicting visual acuity from wavefront aberrations. Journal of Vision, 8(4):17.1-19, jan 2008. ISSN 1534-7362. doi: 10.1167/8.4.17.

Table 1: Prediction accuracy. Percent of linearly explained variability (R2), absolute value of Spearman rank-order correlation coefficient (SROCC), and the root mean squared error of the linear prediction (RMSE) are presented for each prediction model. Note the measurement scale of the threshold data being predicted (Eq. 1). (*) Thresholds linearized through a logistic transform before prediction (see Larson & Chandler, 2010), possibly increasing but not decreasing measured predictive strength. (**) Average of four similar alternatives.

Michael A Webster. Adaptation and visual coding. Journal of Vision, 11(5), jan 2011. ISSN 1534-7362.

8.1 DNN MODELS

The previous analysis suggested gross computational similarity between human perception and trained DNNs. Next, we aimed to extend the comparison to more interpretable properties of perception by considering more highly controlled designs. To this end, we considered cases in which a static background context modulates the difficulty of discriminating a foreground shape, despite no spatial overlap of foreground and background. This permits interpretation by considering the cause of the modulation.

N. Weisstein and C. S. Harris. Visual Detection of Line Segments: An Object-Superiority Effect. Science, 186(4165):752-755, nov 1974. ISSN 0036-8075. doi: 10.1126/science.186.4165.752.

To collect DNN computation snapshots, we used MATLAB with MatConvNet version 1.0-beta2 (Vedaldi & Lenc, 2015). All MATLAB code will be made available upon acceptance of this manuscript. The pre-trained DNN models we have used are: CaffeNet (a variant of AlexNet provided in Caffe, Jia et al., 2014), GoogLeNet (Szegedy et al., 2014), VGG-19 (Simonyan & Zisserman, 2014), and ResNet-152 (He et al., 2015). The models were trained on the same ImageNet LSVRC. The CaffeNet model was trained using Caffe with the default ImageNet training parameters (stopping at iteration 310,000) and imported into MatConvNet. For the GoogLeNet model, we used the imported pre-trained reference-Caffe implementation. For VGG-19 and ResNet-152, we used the imported pre-trained original versions. In all experiments the input image size was 224 x 224 or 227 x 227.

Daniel L K Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3):356-365, 2016. ISSN 1097-6256.

We first consider segmentation, in which arrangement is better discriminated for arrays of consis-
tently oriented lines compared with inconsistently oriented lines (Figure 2a) (Pinchuk-Yacobi et al., 2016). Crowding is considered next, where surround clutter that is similar to the discriminated target leads to deteriorated discrimination performance (Figure 2b) (Livne & Sagi, 2007). Last to be addressed is object superiority, in which a target line location is better discriminated when it is in a shape-forming layout (Figure 2c) (Weisstein & Harris, 1974). In this case, clutter is controlled by having the same fixed number of lines in context. To measure perceptual discrimination, these works introduced performance-limiting manipulations such as location jittering, brief presentation, and temporal masking. While different manipulations showed different measured values, order-of-difficulty was typically preserved. Here we changed all the original performance-limiting manipulations to location jittering (whole-shape or element-wise, see Section 8.4).

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.

8.2 BASELINE MODELS

Matthew D Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. nov 2013.

As baselines to compare with pre-trained DNN models, we consider: (a) a multiscale linear filter bank of Gabor functions, (b) a steerable-pyramid linear filter bank (Simoncelli & Freeman, 1995), (c) the VGG-19 model for which the learned parameters (weights) were randomly scrambled within layer, and (d) the CaffeNet model at multiple time points during training. For the Gabor decomposition, the following Gabor filters were used: all compositions of {1, 2, 4, 8, 16, 32, 64} px, {1, 2}, orientation = {0, π/3, 2π/3, π, 4π/3, 5π/3}, and phase = {0, π/2}.

Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object Detectors Emerge in Deep Scene CNNs. pp. 12, dec 2014.

To quantify discrimination difficulty in DNNs, we measured the target-discriminative information of isolated neurons (where performance is limited by location jittering noise), then averaged across all neurons (first within and then across computational layer stages). Specifically, for each neuron, we measured the reduction in categorization uncertainty due to observation, termed mutual information (MI):

8.3 IMAGE PERTURBATION EXPERIMENT

where H stands for entropy, and A_i is a random variable for the value of neuron i when the DNN processes a random image from a category defined by the random variable C. For example, if a neuron gives a value in the range of 100.0 to 200.0 when the DNN processes images from category A, and 300.0 to 400.0 for category B, then the category is always known by observing the value, and so mutual information is high (MI=1 bits). On the other extreme, if the neuron has no discriminative task information, then MI=0 bits. To measure MI, we quantized activations into eight equal-amount bins, and used 500 samples (repetitions having different location jittering noise) across categories. The motivation for this correlate is the assumption that the perceptual order-of-difficulty reflects the quantity of task-discriminative information in the representation. (A short numerical sketch of this binned estimate is given below.)

The noiseless images were obtained from Alam et al. (2014). In main text, "image scale" refers to percent coverage of DNN input.
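The following is a minimal numpy sketch of the binned MI estimate described above (eight equal-count bins over jitter repetitions). Function and argument names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def binned_mutual_information(activations, labels, n_bins=8):
    """Estimate MI(A_i; C) = H(C) - H(C|A_i) for one neuron, in bits.

    activations : responses of a single neuron over repetitions
                  (e.g. 500 samples with different location-jitter noise).
    labels      : integer category label for each repetition.
    """
    activations = np.asarray(activations, dtype=float)
    labels = np.asarray(labels)

    # Quantize activations into equal-amount (equal-count) bins.
    edges = np.quantile(activations, np.linspace(0, 1, n_bins + 1))
    binned = np.clip(np.digitize(activations, edges[1:-1]), 0, n_bins - 1)

    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # H(C): uncertainty about the category before observing the neuron.
    h_c = entropy(np.bincount(labels))

    # H(C|A): expected uncertainty about the category after observing the bin.
    h_c_given_a = 0.0
    for b in range(n_bins):
        mask = binned == b
        if mask.any():
            h_c_given_a += mask.mean() * entropy(np.bincount(labels[mask]))

    return h_c - h_c_given_a
```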
Since the size of the original images (149 x 149) is smaller than the DNN input of (224 x 224) or (227 x 227), the images were resized by a factor of 1.5 so that 100% image scale covers approximately the entire DNN input area.

Human psychophysics and DNN experiments were done for nearly identical images. A slight discrepancy relates to how the image is blended with the background in the special case where the region where noise is added has no image surround at one or two sides. On these sides (which depend on the technical procedure with which images were obtained, see Alam et al., 2014), the surround blending here was hard, while the original was smooth.

Results show that, across hundreds of configurations (varying pattern element size, target location, jitter magnitude, and DNN architecture; see Section 8.4), the qualitative order of difficulty in terms of the DNN MI metric is consistent with the order of difficulty measured in human psychophysical experiments, for the conditions addressing segmentation and crowding (Figures 2d, 7; for baseline models see Figure 8). It is interesting to note that the increase in similarity develops gradually along different layer types in the DNN computation (i.e. not just pooling layers), and is accompanied by a gradual increase in the quantity of task-relevant information (Figure 2e-g). This indicates a link between task relevance and computational similarity for the tested conditions. Note that unlike the evident increase in isolated unit task information, the task information from all units combined decreases by definition along any computational hierarchy. An intuition for this result is that the total hidden information decreases, while more accessible per-unit information increases.

8.4.1 SEGMENTATION

The images used are based on the Texture Discrimination Task (Karni & Sagi, 1991). In the variant considered here (Pinchuk-Yacobi et al., 2015), subjects were presented with a grid of lines, all of which were horizontal, except two or three that were diagonal. Subjects discriminated whether the arrangement of diagonal lines is horizontal or vertical, and this discrimination was found to be more difficult when the central line is horizontal rather than diagonal ("Hard" vs. "Easy" in Figure 2a). To limit human performance in this task, two manipulations were applied: (a) the location of each line in the pattern was jittered, and (b) a noise mask was presented briefly after the pattern. Here we only retained (a).

For shape formation, four out of six shapes consistently show an order of difficulty like perception, and two shapes consistently do not (caricature at Figure 2h; actual data at Figure 9).

A total of 90 configurations were tested, obtained by combinations of the following alternatives.

$$\mathrm{MI}(A_i; C) = H(C) - H(C \mid A_i)$$

Three scales: line length of 9, 12.3, or 19.4 px (number of lines co-varied with line length, see Figure 12).
Three levels of location jittering, defined as a multiple of line length: {1, 2, 3} · 0.0625 · l px, where l is the length of a line in the pattern. Jittering was applied separately to each line in the pattern.
Ten locations of diagonal lines: center, random, four locations of half-distance from center to corners, four locations of half-distance from center to image borders.

[Figure 2 graphic: panels a-h (Segmentation, Crowding, Shape; consistency counts and MI curves). See caption below.]

For each configuration, the discriminated arrangement of diagonal lines was either horizontal or vertical, and the central line was either horizontal or diagonal (i.e. hard or easy).

[Figure 4 graphic: panels a-d; L1 change vs. perceptual threshold (dB), R2=0.87 for panel d. See caption on the following page.]

Figure 12: Pattern scales used in the different configurations of the Segmentation condition. Actual images used were white-on-black rather than black-on-white.

[Figure 5 graphic: panels a-b; R2 vs. computational stage for VGG-19 and ResNet-152. See caption below.]

The images used are motivated by the crowding effect (Livne & Sagi, 2007; Pelli et al., 2004).

Figure 2: Background context. a-c, Illustrations of reproduced discrimination stimuli for three psychophysical experiments (actual images used were white-on-black rather than black-on-white, and pattern size was smaller, see Figures 12-14). d, Number of configurations for which order-of-difficulty in discrimination is qualitatively consistent with perception according to a mutual information DNN metric. Configurations vary in pattern (element size, target location, and jitter magnitude; see Section 8.4) and in DNN architecture used (CaffeNet, GoogLeNet, VGG-19, and ResNet-152). The DNN metric is the average across neurons of the isolated-neuron target-discriminative information (averaged first within, and then across computational layer stages), where performance is limited by location jittering (e.g. evident jitter in illustrations). e-g, The value of the MI metric across computational layers of model VGG-19 for a typical pattern configuration. The six "hard" (gray) lines in Shape MI correspond to six different layouts (see Section 8.4.3). Analysis shows that for isolated computation stages, similarity to perception is evident only at the final DNN computation stages. h, A caricature summarizing the similarity and discrepancy of perception and the MI-based DNN prediction for Shape (see Figure 9).

For each configuration, the discriminated letter was either A, B, C, D, E, or F, and the background was either blank (easy) or composed of the letters M, N, S, and T (hard).

Figure 5: Prediction accuracy as a function of computational stage. a, Predicting perceptual sensitivity for model VGG-19 using the best single kernel (i.e. using one fitting parameter, no cross-validation), vs. the standard L1 metric (reproduced from Figure 1). b, For non-branch computational stages of model ResNet-152.

Model        R2    SROCC   RMSE   Recognition accuracy
CaffeNet     .59   .78     5.44   56%
GoogLeNet    .59   .79     5.45   66%
VGG-19       .60   .79     5.40   70%
ResNet-152   .53   .74     5.82   75%

A cornerstone of biological vision research is the use of sine gratings at different frequencies, orientations, and contrasts (Campbell & Robson, 1968).
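For concreteness, the following is a minimal numpy sketch of how such a sinusoidal grating stimulus could be generated; parameter names and conventions (Michelson contrast around mid-gray, cycles per image) are illustrative assumptions, not the authors' stimulus code.

```python
import numpy as np

def sine_grating(size=224, contrast=0.5, frequency=7.0, orientation=0.0, phase=0.0):
    """Sine grating image with values in [0, 1].

    frequency is in cycles per image, orientation and phase are in radians,
    and contrast modulates the sinusoid around a mid-gray background.
    """
    y, x = np.mgrid[0:size, 0:size] / float(size)          # coordinates in image units
    u = x * np.cos(orientation) + y * np.sin(orientation)  # axis along the grating
    grating = np.sin(2 * np.pi * frequency * u + phase)
    return 0.5 + 0.5 * contrast * grating

# Example: a low-contrast grating at an intermediate spatial frequency.
img = sine_grating(contrast=0.05, frequency=7.0)
```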
Notable are results showing that the lowest perceivable contrast in human perception depends on frequency. Specifically, high spatial frequencies are attenuated by the optics of the eye, and low spatial frequencies are believed to be attenuated due to processing inefficiencies (Watson & Ahumada, 2008), so that the lowest perceivable contrast is found at intermediate frequencies. (To appreciate this yourself, examine Figure 3a.) Thus, for low-contrast gratings, the physical quantity of contrast is not perceived correctly: it is not preserved across spatial frequencies. Interestingly, this is corrected for gratings of higher contrasts, for which perceived contrast is more constant across spatial frequencies (Georgeson & Sullivan, 1975).

Table 2: Accuracy of perceptual sensitivity prediction and task-trained ImageNet center-crop top-1 validation accuracy for different DNN models (following Table 1, from which the third row is reproduced; used scale: 100%). The quality of prediction for ResNet-152 improves dramatically if only the first tens of layers are considered (see Figure 5b).

Figure 13: Pattern scales used in the different configurations of the Crowding condition. Actual images used were white-on-black rather than black-on-white.

The images used are based on the object superiority effect by Weisstein & Harris (1974), where discriminating a line location is easier when, combined with the surrounding lines, a shape is formed.

The DNN correlate we considered is the mean absolute change in DNN representation between a gray image and sinusoidal gratings, at all combinations of spatial frequency and contrast. Formally, for neurons in a given layer, we measured:

A total of 90 configurations were tested, obtained by combinations of the following alternatives.

$$L_1(\mathrm{contrast}, \mathrm{frequency}) = \frac{1}{N_{\mathrm{neurons}}} \sum_{i=1}^{N_{\mathrm{neurons}}} \left| a_i(\mathrm{contrast}, \mathrm{frequency}) - a_i(0, 0) \right|$$

Figure 4: Predicting perceptual sensitivity to image changes (following Figure 1). a-c, The L1 change in CaffeNet, GoogLeNet, and ResNet-152 DNN architectures as a function of perceptual threshold. d, The L1 change in GoogLeNet as a function of the L1 change in VGG-19.

Three scales: font size of 15.1, 20.6, or 32.4 px (see Figure 13).
Three levels of discriminated-letter location jittering, defined as a multiple of font size: {1, 2, 3} · 0.0625 · l px, where l is the font size. The jitter of surround letters (M, N, S, and T) was fixed (i.e. the background was static).
Ten locations: center, random, four locations of half-distance from center to corners, four locations of half-distance from center to image borders.

Three scales: discriminated-line length of 9, 15.1, or 22.7 px (see Figure 14).
Five levels of whole-pattern location jittering, defined as a multiple of discriminated-line length: {1, 2, 5, 10, 15} · 0.0625 · l px, where l is the length of the discriminated line.

Model                           R2    SROCC   RMSE
VGG-19, scrambled weights       .18   .39     7.76
Gabor filter bank               .32   .12     8.03
Steerable-pyramid filter bank   .37   .15     7.91

where a_i(contrast, frequency) is the average activation value of neuron i to 250 sine images (random orientation, random phase), a_i(0, 0) is the response to a blank (gray) image, and N_neurons is the number of neurons in the layer. This measure reflects the overall change in response vs. the gray image.

Results show a bandpass response for low-contrast gratings (blue lines strongly modulated by frequency, Figures 3, 10), and what appears to be a mostly constant response at high contrast for
end-computation layers (red lines appear more invariant to frequency), in accordance with perception.

Table 3: Accuracy of perceptual sensitivity prediction for baseline models (see Section 8.2; used scale: 100%).

We next aimed to compare these results with perception. Data from human experiments is generally iso-output (i.e. for a pre-set output, such as 75% detection accuracy, the input is varied to find the value which produces the preset output). However, the DNN measurements here are iso-input (i.e. for a fixed input contrast the L1 is measured). As such, human data should be compared to the interpolated inverse of the DNN measurements. Specifically, for a set output value, the interpolated contrast value which produces that output is found for every frequency (Figure 11). This analysis permits quantifying the similarity of iso-output curves for human and DNN, measured here as the percent of log-contrast variability in human measurements which is explained by the DNN predictions. This showed a high explained variability at the end computation stage (prob layer, R2 = 94%), but, importantly, a similarly high value at the first computational stage (conv1_1 layer, R2 = 96%). Intuitively, while the "internal representation" variability in terms of L1 is small, the iso-output number-of-input-contrast-changes variability is still high. For example, for the prob layer, about the same L1 is measured for (Contrast=1, freq=75) and for (Contrast=0.18, freq=12).

Model                R2    SROCC   RMSE   Recognition accuracy
CaffeNet iter 1      .46   .67     6.30   0%
CaffeNet iter 50K    .59   .79     5.43   37%
CaffeNet iter 100K   .60   .79     5.41   39%
CaffeNet iter 150K   .60   .78     5.43   53%
CaffeNet iter 200K   .59   .78     5.45   54%
CaffeNet iter 250K   .59   .78     5.43   56%
CaffeNet iter 300K   .59   .78     5.44   56%
CaffeNet iter 310K   .59   .78     5.44   56%

Figure 14: Pattern scales used in the different configurations of the Shape condition. Actual images used were white-on-black rather than black-on-white.

Table 4: Accuracy of perceptual sensitivity prediction during CaffeNet model standard training (used scale: 100%). Last row reproduced from Table 2.

An interesting, unexpected observation is that the logarithmically spaced contrast inputs are linearly spaced at the end-computation layers. That is, the average change in DNN representation scales logarithmically with the size of the input change. This can be quantified by the correlation of the output L1 with the log-contrast input, which showed R2 = 98% (averaged across spatial frequencies) for prob, while much lower values were observed for early and middle layers (up to layer fc7). The same computation when scrambling the learned parameters of the model showed R2 = 60%. Because the degree of log-linearity observed was extremely high, it may be an important emergent property of the learned DNN computation, which may deserve further investigation.
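The following is a minimal numpy sketch of the two measures just described: the layer-level L1 change of a grating relative to a gray image, and the log-linearity check of that L1 response against log contrast. Input arrays and names are illustrative assumptions.

```python
import numpy as np

def l1_vs_gray(acts_grating, acts_gray):
    """Mean absolute change of per-neuron (trial-averaged) activations
    between a grating and a blank gray image, for one layer."""
    return float(np.mean(np.abs(np.asarray(acts_grating) - np.asarray(acts_gray))))

def log_linearity_r2(contrasts, l1_values):
    """R^2 of a linear fit of a layer's L1 response against log-contrast,
    at one spatial frequency (the log-linearity check reported above)."""
    x = np.log(np.asarray(contrasts, dtype=float))
    y = np.asarray(l1_values, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residual = y - (slope * x + intercept)
    return 1.0 - residual.var() / y.var()
```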
However, this property is only reminiscent of, and not immediately consistent with, the perceptual power-law scaling (Gottesman et al., 1981).

8.5 CONTRAST SENSITIVITY EXPERIMENT

The images used depicted sine gratings at different combinations of contrast, spatial frequency, sine phase, and sine orientation.

Table 5: Robustness of perceptual sensitivity prediction for varying prediction parameters for model VGG-19. First three rows reproduced from Table 1. Measurements for the lower noise range of -60:-40 dB were omitted by mistake.

[Figure 3 graphic: L1 change vs. spatial frequency at several contrasts for the data, conv1_1, fc8, and prob representations. See caption below.]

Figure 3: Contrast sensitivity. a, Perceived contrast is strongly affected by spatial frequency at low contrast, but less so at high contrast (which preserves the physical quantity of contrast and is thus termed constancy). b, The L1 change in VGG-19 representation between a gray image and images depicting sinusoidal gratings at each combination of sine spatial frequency (x-axis) and contrast (color) (random orientation, random phase), considering the raw image pixel data representation (data), the before-ReLU output of the first convolutional layer representation (conv1_1), the output of the last fully-connected layer representation (fc8), and the output class label probabilities representation (prob).

Six "hard" background line layouts (patterns b-f of their Figure 2 and the additional pattern f of their Figure 3 in Weisstein & Harris, 1974). The "easy" layout was always the same (pattern a).

For each configuration, the line whose location is discriminated had four possible locations (two locations are shown in Figure 2c), and the surrounding background line layout could compose a shape (easy) or not (hard).

Scale   Metric   Augmentation   Noise range   R2    SROCC   RMSE
100%    L1       noise phase    -40:25 dB     .60   .79     5.40
66%     L1       noise phase    -40:25 dB     .60   .79     5.42
50%     L1       noise phase    -40:25 dB     .57   .77     5.57
100%    L2       noise phase    -40:25 dB     .62   .80     5.29
100%    L1       None           -40:25 dB     .58   .77     5.55
100%    L1       noise phase    -20:25 dB     .59   .78     5.46
100%    L1       noise phase    -40:5 dB      .59   .79     5.43

Model                 Day 1   Days 2-4   Masked
VGG-19                .36     .37        .15
GoogLeNet             .31     .22        .16
MRSA-152              .26     .26        .11
CaffeNet iter 1       .32     .29        .39
CaffeNet iter 50K     .15     .19        .16
CaffeNet iter 310K    .16     .12        .18
Gabor Decomposition   .26     .27        .48
Steerable Pyramid     .24     .32        .25

Table 6: Background context for Shape. Shown is the Spearman correlation coefficient (SROCC) of perceptual data vs. model-based MI prediction across shapes (i.e. considering all shapes rather than only Easy vs. Hard; note that the original robust finding is the superiority of the Easy shape). Perceptual data from Weisstein & Harris (1974), where "Day 1" and "Days 2-4" (averaged) are for the reduced-masking condition depicted in their Figure 3.

It may be tempting to believe that what we see is the result of a simple transformation of visual input.
Centuries of psychophysics have, however, revealed complex properties in perception, by crafting stimuli that isolate different perceptual properties. In our study, we used the same stimuli to investigate the learned properties of deep neural networks (DNNs), which are the leading computer vision algorithms to date (LeCun et al., 2015).

The DNNs we used were trained in a supervised fashion to assign labels to input images. To some degree, this task resembles the simple verbal explanations given to children by their parents. Since human perception is obviously much richer than the simple external supervision provided, we were not surprised to find that the best correlate for perceptual saliency of image changes is a part of the DNN computation that is only supervised indirectly (i.e. the mid-computation stage). This similarity is so strong that, even with no fine-tuning to human perception, the DNN metric is competitively accurate, even compared with a direct model of perception.

This strong, quantifiable similarity to a gross aspect of perception may, however, reflect a mix of similarities and discrepancies in different perceptual properties. To address isolated perceptual effects, we considered experiments that manipulate a spatial interaction, where the difficulty of discriminating a foreground target is modulated by a background context. Results showed modulation of DNN target-diagnostic, isolated-unit information, consistent with the modulation found in perceptual discrimination. This was shown for contextual interactions reflecting grouping/segmentation (Harris et al., 2015), crowding/clutter (Livne & Sagi, 2007; Pelli et al., 2004), and shape superiority (Weisstein & Harris, 1974). DNN similarity to these grouping/gestalt phenomena appeared at the end-computation stages.

No less interesting are the cases in which there is no similarity. For example, perceptual effects related to 3D (Erdogan & Jacobs, 2016) and symmetry (Pramod & Arun, 2016) do not appear to have a strong correlate in the DNN computation. Indeed, it may be interesting to investigate the influence of visual experience in these cases. And, equally important, similarity should be considered in terms of specific perceptual properties rather than as a general statement.

In the human hierarchy of visual processing areas, information is believed to be processed in a feedforward sweep, followed by recurrent processing loops (top-down and lateral) (Lamme & Roelfsema, 2000). Thus, for example, the early visual areas can perform deep computations. Since mapping from visual areas to DNN computational layers is not simple, it will not be considered here. (Note that ResNet connectivity is perhaps reminiscent of unrolled recurrent processing.)

Interestingly, debate is ongoing about the degree to which visual perception is dependent on recurrent connectivity (Fabre-Thorpe et al., 1998; Hung et al., 2005): recurrent representations are obviously richer, but feedforward computations converge much faster. An implicit question here, regarding the extent of feasible feed-forward representations, is perhaps: can contour segmentation, contextual influences, and complex shapes be learned? Based on the results reported here for feedforward DNNs, a feedforward representation may seem sufficient. However, the extent to which this is true may be very limited. In this study we used small images with a small number of lines, while effects such as contour integration seem to take place even in very large configurations (Field et al.,
1993). Such scaling seems more likely in a recurrent implementation. As such, a reasonable hypothesis may be that the full extent of contextual influence is only realizable with recurrence, while feedforward DNNs learn a limited version by converging towards a useful computation.

The use of DNNs in modeling of visual perception (or of biological visual systems in general) is subject to a tradeoff between accuracy and biological plausibility. In terms of architecture, other deep models better approximate our current understanding of the visual system (Riesenhuber &

[Figure 6 graphic: rows of example images grouped by perceptual threshold range (from below -45 dB to above -15 dB), for Overshoot, Undershoot, and Gabor-decomposition cases. See caption below.]

Figure 6: Images where the predicted threshold is too high ("Overshoot", where perturbation saliency is better than predicted) or too low ("Undershoot"), considered from several perceptual threshold ranges (±2 dB of shown number). Some images are reproduced from Figure 1.
HJ0NvFzxl
[{"section_index": "0", "section_name": "LEARNING GRAPHICAL STATE TRANSITIONS", "section_text": "machines with different rules and initial tape contents, each of which simulated 6 timesteps of the Turing machine. Performance was then evaluated on 1o00 new examples generated with the same format. The models were evaluated by picking the most likely graph generated by the model, and. comparing it with the correct graph. The percent accuracy denotes the fraction of the examples for which these two graphs were identical at all timesteps. In addition to evaluating the performance on identical tasks, the generalization ability of the models was also assessed. The same trained models were evaluated on versions of the task with 20 and 30 timesteps of simulation..\nDaniel D. Johnson\nDanerD..lonnson Department of Computer Science. Harvey Mudd College. 301 Platt Boulevard.\n1. John grabbed the milk.. 2. John travelled to the bedroom. 3. Sandra took the football.. 4. John went to the garden.. 5. John let go of the milk.. 6. Sandra let go of the football.. 7. John got the football.. 8. John grabbed the milk.. Where is the milk?.\nResults are shown in Table[3] The models successfully learned the assigned tasks, reaching high levels of accuracy for both tasks. Additionally, the models show the ability to generalize to large inputs, giving a perfect output in the majority of extended tasks. For visualization purposes, Figure 3|shows the model at various stages of training when evaluated starting with a single 1 cell.\nGraph-structured data is important in modeling relationships between multiple. entities, and can be used to represent states of the world as well as many data. structures.Li et al.(2016) describe a model known as a Gated Graph Sequence. Neural Network (GGS-NN) that produces sequences from graph-structured input In this work I introduce the Gated Graph Transformer Neural Network (GGT. NN), an extension of GGS-NNs that uses graph-structured data as an intermediate. representation. The model can learn to construct and modify graphs in sophisti. cated ways based on textual input, and also to use the graphs to produce a variety. of outputs. For example, the model successfully learns to solve almost all of the. bAbI tasks (Weston et al.J2016), and also discovers the rules governing graphica. formulations of a simple cellular automaton and a family of Turing machines..\nFigure 5: Diagram of one sample story from the bAbI dataset (Task 2), along with a graphica representation of the knowledge state after the italicized sentence..\nMany methods have been proposed for combining neural networks with graphs. These methods gen. erally require the input to the network to be in graphical format. For instance, GNNs and GGS-NNs. take a graph as input, and propagate information between nodes according to the graph structure. (Gori et al.]2005] Scarselli et al.]2009]Li et al.]2016). Similarly, graph convolutional networks extract information from an existing graph structure by using approximations to spectral graph con volutions (Kipf & Welling2016). These methods are similar to GGT-NNs in that they all store. information in the nodes of a graph and use edges to determine how information flows. However,. they all use a graph with fixed structure, and can only accept graphical data. The GGT-NN model,. 
on the other hand, allows the graph structure to be built and modified based on unstructured input.\nGiles et al.[(1992) describe a method for extracting a finite state machine from a trained recurren neural network by quantizing the hidden states of the network, recording all possible state transi tions, and using them to construct a minimal directed graph representing the state machine. This method, however, requires postprocessing of the network to extract the graph, and is limited to ex. tracting graphs that represent state machines. Additionally, although the FSM extraction method described byGiles et al.(1992) and the GGT-NN model both produce graphs using neural networks the goals are different: the FSM extraction method aims to learn a single graph that can classify. sequences, whereas the GGT-NN model aims to learn a neural network that can manipulate graphs."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many different types of data can be formulated using a graph structure. One form of data that lends itself to a graphical representation is data involving relationships (edges) between entities (nodes) Abstract maps of places and paths between them also have a natural graph representation, where places are nodes and paths are edges. In addition, many data structures can be expressed in graphical form, including linked lists and binary trees.\nThe lifted relational neural network (LRNN) is another approach to working with structured data. (Sourek et al.|2015). LRNNs require the input to be formatted as a combination of weighted predi. cate logic statements, encompassing both general rules and specific known facts. For each training example, the statements are used to construct a \"ground neural network'', with a connection patterr determined by the dependencies between the statements. LRNNs can learn to extract information b adjusting the weights of each statement, but require the rules to be composed by hand based on the. task structure. Furthermore, unlike in GGT-NNs, a LRNN has no internal state associated with the objects it describes (which are instead represented by single neurons), and the relationships betweer objects cannot be constructed or modified by the network..\nSubstantial research has been done on producing output when given graph-structured input (Kashima et al.]2003] Shervashidze et al.2011, Perozzi et al.] 2014] Bruna et al.,2013] Duvenaud et al. 2015). Of particular relevance to this work are Graph Neural Networks (Gori et al.][2005) |Scarselli et al.| 2009), or GNNs, which extend recursive neural networks by assigning states to each node in a graph based on the states of adjacent nodes. RecentlyLi et al.(2016) have modified GNNs to use gated state updates and to produce output sequences. The resulting networks, called GG-NNs and GGS-NNs, are successful at solving a variety of tasks with graph-structured input.\nFigure 6: Diagram of one example from the automaton task, along with a graphical representation of the automaton state after the fourth simulate command (italicized).\nThe current work further builds upon GG-NNs and GGS-NNs by allowing graph-structured inter mediate representations, as well as graph-structured outputs. This is accomplished using a mor flexible graph definition, along with a set of graph transformations which take a graph and othe information as input and produce a modified version of the graph. 
This work also introduces the Gated Graph Transformer Neural Network model (GGT-NN), which combines these transformations with a recurrent input model to incrementally construct a graph given natural language input, and can either produce a final graph representing its current state, or use the graph to produce natural language output.

Multiple recent architectures have included differentiable internal states. Memory Networks, as described in Weston et al. (2014), and the fully differentiable end-to-end memory networks, described in Sukhbaatar et al. (2015), both utilize a differentiable long-term memory component, consisting of a set of memories that are produced by encoding the input sentences. To answer a query, an attention mechanism is used to select a subset of these memories, and the resulting memories are processed to produce the desired output. Differentiable Neural Computers (DNCs), described in Graves et al. (2016), interact with a fixed-size memory using a set of read and write "heads", which can be moved within the memory either by searching for particular content or by following temporal "links of association" that track the order in which data was written.

Extending GG-NNs in this way opens up a wide variety of applications. Since many types of data can be naturally expressed as a graph, it is possible to train a GGT-NN model to manipulate a meaningful graphical internal state. In this paper I demonstrate the GGT-NN model on the bAbI task dataset, which contains a set of stories about the state of the world. By encoding this state as a graph and providing these graphs to the model at training time, a GGT-NN model can be trained to construct the correct graph from the input sentences and then answer questions based on this internal graph. I also demonstrate that this architecture can learn complex update rules by training it to model a simple 1D cellular automaton and arbitrary 4-state Turing machines. This requires the network to learn how to transform its internal state based on the rules of each task.

Memory networks and DNCs share with the GGT-NN model the ability to iteratively construct an internal state based on textual input, and use that internal state to answer questions about the underlying structured data. However, in these models, the structure of the internal state is implicit: although the network can store and work with structured data, the actual memory consists of a set of vectors that cannot be easily interpreted, except by monitoring the network access patterns. The GGT-NN model, on the other hand, explicitly models the internal state as a graph with labeled

Figure 7: Diagram of an example from the Turing machine task, with a graphical representation of the machine state after the second run command (italicized).

[Figure 5 graphic: graph with actor nodes (John, Sandra), location nodes (Garden, Bedroom), and gettable nodes (Football, Milk), connected by "actor is in location", "gettable is in location", and "gettable is in actor" edges.]

ABSTRACT

[Figures 6 and 7 graphics: cellular-automaton cells (initial cells, new cells left/right) with value edges to Zero/One nodes and neighbor edges; Turing machine input sequence (10. input symbol_0 head, 11.-13. input symbols, 14.-19. run) with nodes for states and rules, current state, head, current cell, and cells.]
[Figure 1 graphic: node annotation, strength, and state matrices, and the edge connectivity tensor, for a seven-node example graph. See caption below.]

nodes and edges. This allows the produced graph to be extracted, visualized, and potentially used in downstream applications that require graph-structured input.

Hierarchical Attentive Memory (HAM) is a memory-based architecture that consists of a binary tree built on top of an input sequence (Andrychowicz & Kurach, 2016). A recurrent controller accesses the HAM module by performing a top-down search through the tree, at each stage choosing to attend to either the left or right subtrees. Once this process reaches a leaf, the value of the leaf is provided to the controller to use in predicting the next output, and this leaf's value can be updated with a new value. This architecture is especially suited toward sequence-based tasks, and has been shown to generalize to longer sequences very efficiently due to the tree structure. However, it is unclear whether a HAM module would work well with non-sequential structured data, since the tree structure is fixed by the network.

An example of a graph produced from the bAbI tasks is given in Figure 5.

The cellular automaton task was mapped to graphical format as follows: Nodes have 5 types: zero, one, init-cell, left-cell, and right-cell. Edges have 2 types: value, and next-r. There is always exactly one "zero" node and one "one" node, and all of the cell nodes form a linked list, with a "value" edge connecting to either zero or one, and a "next-r" edge pointing to the next cell to the right (or no edge for the rightmost cell).

Figure 1: Diagram of the differentiable encoding of a graphical structure, as described in Section 3. On the left, the desired graph we wish to represent, in which there are 6 node types (shown as blue, purple, red, orange, green, and yellow) and two edge types (shown as blue/solid and red/dashed). Node 3 and the edge between nodes 6 and 7 have a low strength. On the right, depictions of the node and edge matrices: annotations, strengths, state, and connectivity correspond to x_v, s_v, h_v, and C, respectively. Saturation represents the value in each cell, where white represents 0, and fully saturated represents 1. Note that each node's annotation only has a single nonzero entry, corresponding to each node having a single well-defined type, with the exception of node 3, which has an annotation that does not correspond to a single type. State vectors are shaded arbitrarily to indicate that they can store network-determined data. The edge connectivity matrix C is three-dimensional, indicated by stacking the blue-edge cell on top of the red-edge cell for a given source-destination pair. Also notice the low strength for cell 3 in the strength vector and for the edge between node 6 and node 7 in the connectivity matrix.

One advantage of the GGT-NN model over existing works is that it can process data in a distributed fashion. Each node independently processes its surroundings, which can be beneficial for complex tasks such as pathfinding on a graph. This is in contrast to memory networks, DNCs, and HAM
modules, which are restricted to processing only a fixed number of locations in a given timestep. On the other hand, the distributed nature of the GGT-NN model means that it is less time- and space-efficient than these other networks. Since every node can communicate with every other node, the time and space required to run a GGT-NN step scales quadratically with the size of the input. A DNC or memory network, on the other hand, either scales linearly (since it attends to all stored data or memories) or is constant (if restricted to a fixed-size memory), and a HAM module scales logarithmically (due to the tree structure).

At the start of each training example, there are 13 timesteps with input of the form "init X", where X is 0 or 1. These timesteps indicate the first 13 initial cells. Afterward, there are 7 "simulate" inputs. At each of these timesteps, one new left-cell node is added on the left, one new right-cell node is added on the right, and then all cells update their value according to the Rule 30 update rules.

An example of the graphical format for the cellular automaton task is given in Figure 6.

For the Turing machine task, nodes were assigned to 8 types: state-A, state-B, state-C, state-D, head, cell, 0, and 1. Edges have 16 types: head-cell, next-left, head-state, value, and 12 types of the form rule-R-W-D, where R is the symbol read (0 or 1), W is the symbol written (0 or 1), and D is the direction to move afterward (Left, Right, or None). State nodes are connected with rule edges, which together specify the rules governing the Turing machine. Cell nodes are connected to adjacent cells with next-left edges, and to the symbol on the tape with value edges. Finally, the head node is connected to the current state with a head-state edge, and to the current cell of the head with a head-cell edge.

2 BACKGROUND

Gated Recurrent Units (GRU) are a type of recurrent network cell introduced by Cho et al. (2014). Each unit uses a reset gate r and an update gate z, and updates according to

$$r^{(t)} = \sigma(W_r x^{(t)} + U_r h^{(t-1)} + b_r)$$
$$z^{(t)} = \sigma(W_z x^{(t)} + U_z h^{(t-1)} + b_z)$$
$$\tilde{h}^{(t)} = \phi\big(W x^{(t)} + U(r^{(t)} \odot h^{(t-1)}) + b\big)$$
$$h^{(t)} = z^{(t)} \odot h^{(t-1)} + (1 - z^{(t)}) \odot \tilde{h}^{(t)}$$

The GGT-NN architecture has a few advantages over the architectures described in existing works. In contrast to other approaches to working with structured data, GGT-NNs are designed to work with unstructured input, and are able to modify a graphical structure based on the input. And in contrast to memory networks or DNCs, the internal state of the network is explicitly graph-structured, and complex computations can be distributed across the nodes of the graph.

where σ is the logistic sigmoid function, φ is an activation function (here tanh is used), x^(t) is the input vector at timestep t, h^(t) is the hidden output vector at timestep t, and W, U, W_r, U_r, W_z, U_z, b, b_r, and b_z are learned weights and biases. Note that ⊙ denotes elementwise multiplication. (A minimal numerical sketch of this update is given at the end of this passage.)

One downside of the current model is that the time and space required to train the model increase very quickly as the complexity of the task increases, which limits the model's applicability. It would be very advantageous to develop optimizations that would allow the model to train faster and with smaller space requirements, such as using sparse edge connections, or only processing some subset of the nodes at each timestep.
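The following is a minimal numpy sketch of the GRU update written out above. The parameter container is a hypothetical dict of weight matrices and biases; this is an illustration of the equations, not the Theano implementation used in the experiments.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x_t, h_prev, params):
    """One GRU update h^(t) given input x^(t) and previous state h^(t-1).

    params holds W_r, U_r, b_r, W_z, U_z, b_z, W, U, b; phi = tanh as in the text."""
    r = sigmoid(params["W_r"] @ x_t + params["U_r"] @ h_prev + params["b_r"])
    z = sigmoid(params["W_z"] @ x_t + params["U_z"] @ h_prev + params["b_z"])
    h_tilde = np.tanh(params["W"] @ x_t + params["U"] @ (r * h_prev) + params["b"])
    return z * h_prev + (1.0 - z) * h_tilde
```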
Another promising direction of future work is in reducing the level of supervision needed to obtain meaningful graphs, for example by combining a few examples that have full graph-level supervision with a larger set of examples that do not have graph-level information, or by using additional regularization to enable the GGT-NN model to be trained without any graph information.

An example of the graphical format for the Turing machine task is given in Figure 7.

2.2 GG-NN AND GGS-NN

The model described in Section 4 conditions the output of the model on the final graph produced by the network. This is ideal when the graph represents all of the necessary knowledge for solving the task. However, it may also be desirable for each graph to represent a subset of knowledge corresponding to a particular time, and for the output to be based on the sequence of graphs produced. For instance, in the third bAbI task (which requires reasoning about the temporal sequence of events), each graph could represent the state of the world at that particular time, instead of representing the full sequence of events prior to that time. In Appendix C, Section C.1, I describe a transformation to the tasks which allows all information to be contained in the graph. But this adds complexity to the graphical structure. If it were possible for the model to take into account the full sequence of graphs, instead of just the final one, we could maintain the simplicity of the graph transformation.

The Gated Graph Neural Network (GG-NN) is a form of graphical neural network model described by Li et al. (2016). In a GG-NN, a graph G = (V, E) consists of a set V of nodes v with unique values and a set E of directed edges e = (v, v′) ∈ V × V oriented from v to v′. Each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D, and each edge has a type y_e ∈ {1, ..., M}.

GG-NNs operate by first initializing the state h_v of each node to correspond to the annotation x_v. Then, a series of propagation steps occur. In each step, information is transferred between nodes across the edges, and the types of edge determine what information is sent. Each node sums the input it receives from all adjacent nodes, and uses that to update its own internal state, in the same manner as a GRU cell. Finally, the states of all nodes are used either to create a graph-level aggregate output, or to classify each individual node.

ACKNOWLEDGMENTS

To this end, I present an extension of the GGT-NN model that can produce output using the full graphical sequence. In the extended model, the graphical output of the network after each input sentence is saved for later use. Then, when processing the query, the same set of query transformations are applied to every intermediate graph, producing a sequence of representation vectors h_answer^(k). These are then combined into a final summary representation vector h_answer^summary

GGS-NNs extend GG-NNs by performing a large number of propagation-output cycles. At each stage, two versions of the GG-NN propagation process are run. The first is used to predict an output for that timestep, and the second is used to update the annotations of the nodes for the next timestep. This allows GGS-NNs to predict a sequence of outputs from a single graph.

I would like to thank Harvey Mudd College for computing resources.
I would also like to thank the developers of the Theano library, which I used to run my experiments. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575.

In a few of the tasks, specific entities had multi-word representations. While this works for normal input, it makes it difficult to do direct reference, since direct reference is checked on an individual word level. These tasks were modified slightly so that the entities are referred to with single words (e.g. "red_square" instead of "red square").

The results presented here show that GGT-NNs are able to successfully model a wide variety of tasks using graph-structured states and potentially could be useful in solving many other types of problems. The specific GGT-NN model described here can be used as-is for tasks consisting of a sequence of input sentences and graphs, optionally followed by a query. In addition, due to the modular nature of GGT-NNs, it is possible to reconfigure the order of the transformations to produce a model suitable for a different task.

At the start of each training example, each of the rules for the Turing machine is given, in the form "rule state-X R W state-Y D". Next, the initial state is given in the format "start state-X", and the initial contents of the tape (of length 4) are given sequentially in the format "input symbol-X", with the position for the head to start marked by "input symbol-X head". Finally, there are 6 "run" inputs, after each of which the head node updates its edges and the cell at the head updates its value according to the rules of the Turing machine. If the head leaves the left or right of the tape, a new node is introduced there.

There are exciting potential uses for the GGT-NN model. One particularly interesting application would be using GGT-NNs to extract graph-structured information from unstructured textual descriptions. More generally, the graph transformations provided here may allow machine learning to interoperate more flexibly with other data sources and processes with structured inputs and outputs.

Task                         Direct reference accuracy   No direct reference accuracy
3 - Three Supporting Facts   90.3%                       65.4%
5 - Three Arg. Relations     89.8%                       74.2%

REFERENCES

Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.

Figure 2: Summary of the graph transformations. Input and output are represented as gray squares. a) Node addition (Tadd), where the input is used by a recurrent network (white box) to produce new nodes, of varying annotations and strengths. b) Node state update (Th), where each node receives input (dashed line) and updates its internal state. c) Edge update (Tc), where each existing edge (colored) and potential edge (dashed) is added or removed according to the input and states of the adjacent nodes (depicted as solid arrows meeting at circles on each edge). d) Propagation (Tprop), where nodes exchange information along the current edges, and update their states. e) Aggregation (Trepr), where a single representation is created using an attention mechanism, by summing informa-
tion from all nodes weighted by relevance (with weights shown by saturation of arrows).

Table 4: Performance of the sequence-extended GGT-NN on the two bAbI tasks with a temporal component.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Algorithm 2 Sequence-Extended Pseudocode
G_0 ← empty graph                         > Initialize G to an empty graph
for k from 1 to K do                      > Process each sentence
    G_k ← T_h(G_{k-1}, i^(k))
    if direct reference enabled then
        G_k ← T_h,direct(G_k, D^(k))
    end if
    if intermediate propagation enabled then
        G_k ← T_prop(G_k)
    end if
    h_add^(k) ← T_repr(G_k)
    G_k ← T_add(G_k, [i^(k), h_add^(k)])
    G_k ← T_C(G_k, i^(k))
end for
h_answer^summary ← 0                      > Initialize h_answer^summary to the zero vector
for k from 1 to K do                      > Process the query for each graph
    G_k ← T_h^query(G_k, i^query)
    if direct reference enabled then
        G_k ← T_h,direct^query(G_k, D^query)
    end if
    G_k ← T_prop^query(G_k)
    h_answer^(k) ← T_repr^query(G_k)
    h_answer^summary ← GRU(h_answer^(k), h_answer^summary)
end for
return f_output(h_answer^summary)

David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224-2232, 2015.

DIFFERENTIABLE GRAPH TRANSFORMATIONS

In this section, I describe some modifications to the graph structure to make it fully differentiable, and then propose a set of transformations which can be applied to a graph structure in order to transform it. In particular, I redefine a graph G = (V, C) ∈ Γ as a set V of nodes v, and a connectivity matrix C ∈ R^{|V| × |V| × Y}, where Y is the number of possible edge types. As before, each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D. However, there is an additional constraint that x_v represent a (soft) assignment over the N possible node types. Each node also has a strength s_v ∈ [0, 1]. This represents the level of belief that node v should exist, where s_v = 1 means the node exists, and s_v = 0 indicates that the node should not exist and thus should be ignored. (A small sketch of this state layout is given at the end of this passage.)

Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969, 2016.

Similarly, elements of C are constrained to the range [0, 1], and thus one can interpret C_{v,v′,y} as the level of belief that there should be a directed edge of type y from v to v′. (Note that it is possible for there to be edges of multiple types between the same two nodes v and v′, i.e. it is possible for C_{v,v′,y} = C_{v,v′,y′} = 1 where y ≠ y′.) Figure 1 shows the values of x_v, s_v, h_v, and C corresponding to a particular graphical structure.

Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled graphs. In ICML, volume 3, pp. 321-328, 2003.

There are five classes of graph transformation:

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

using a recurrent network such as a GRU layer, from which the output can be produced. The modified pseudocode for this is shown in Algorithm 2.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. ICLR, 2016.

I evaluated the extended model on bAbI tasks 3 and 5, the two tasks which asked questions about a sequence of events. (Note that although Task 14 also involves a sequence of events, it uses a set of discrete named time periods and so is not applicable to this modification.)
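As an illustration of the differentiable graph state described above (node annotations x_v, strengths s_v, hidden states h_v, and the soft connectivity tensor C), the following is a minimal numpy sketch of a plain container for that state. It is not the model itself, and the class and method names are hypothetical.

```python
import numpy as np

class DifferentiableGraph:
    """Container for the graph state: annotations x (soft node types),
    strengths s in [0, 1], hidden states h, and connectivity C of shape
    (num_nodes, num_nodes, num_edge_types) with entries in [0, 1]."""

    def __init__(self, num_nodes, num_node_types, state_size, num_edge_types):
        self.x = np.zeros((num_nodes, num_node_types))
        self.s = np.zeros(num_nodes)
        self.h = np.zeros((num_nodes, state_size))
        self.C = np.zeros((num_nodes, num_nodes, num_edge_types))

    def add_node(self, node_type, strength=1.0):
        """Append a node of a given (hard) type with a given strength."""
        onehot = np.zeros(self.x.shape[1])
        onehot[node_type] = 1.0
        self.x = np.vstack([self.x, onehot])
        self.s = np.append(self.s, strength)
        self.h = np.vstack([self.h, np.zeros(self.h.shape[1])])
        n = self.x.shape[0]
        C = np.zeros((n, n, self.C.shape[2]))   # grow the connectivity tensor
        C[:n - 1, :n - 1, :] = self.C
        self.C = C
```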
The model was trainec on each of these tasks, without the extra record and history nodes used to store the sequence, insteac simply using the sequence of graphs to encode the relevant information. Due to the simpler graphs produced, intermediate propagation was also disabled.\nFranco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks. 20(1:61-80. 2009"}, {"section_index": "8", "section_name": "1+ GATED GRAPH TRANSFORMER NEURAL NETWORK (GGT-NN)", "section_text": "In this section I introduce the Gated Graph Transformer Neural Network (GGT-NN), which is con structed by combining a series of these transformations. Depending on the configuration of the transformations, a GGT-NN can take textual or graph-structured input, and produce textual or graph\nSainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advance. 44 8 0015\na)\na) Node addition (Tadd), which modifies a graph by adding new nodes and assigning ther. annotations x, and strengths s, based on an input vector.. b) Node state update (Tn), which modifies the internal state of each node using an input vector. (similar to a GRU update step). Optionally, different input can be given to nodes of each type, based on direct textual references to specific node types. This version is called a direct. reference update (Th.direct). c) Edge update (Tc), which modifies the edges between each pair of nodes based on the inter nal states of the two nodes and an external input vector.. d) Propagation (Tprop), which allows nodes to trade information across the existing edges and. then update their internal states based on the information received.. e) Aggregation (Trepr), which uses an attention mechanism to select relevant nodes and then. generates a graph-level output.\nThomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional net works. arXiv preprint arXiv:1609.02907. 2016\nEach transformation has its own trainable parameters. Together, these transformations can be com bined to process a graph in complex ways. An overview of these operations is shown in Figure 2. For details about the implementation of each of these transformations, see Appendix|B.\nResults from training the model are shown in Table4] The accuracy of the extended model appears to be slightly inferior to the original model in general, although the extended direct-reference model of task 5 performs slightly better than its original counterpart. One possible explanation for the inferiority of the extended model is that the increased amount of query processing made the model more likely to overfit on the training data. Even so, the extended model shows promise, and could be advantageous for modeling complex tasks for which preprocessing the graph would be impractical.\nAlgorithm 1 Graph Transformation Pseudocode\nJason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy. tasks. ICLR, 2016.\nstructured output. Here I describe one particular GGT-NN configuration, designed to build an modify a graph based on a sequence of input sentences, and then produce an answer to a query.\nStephen Wolfram. A new kind of science, volume 5. Wolfram media Champaign, 2002\nWhen run, the model performs the following: For each sentence k, each word is converted to a. 
one-hot vector w(~), and the sequence of words (of length L) is passed through a GRU layer to -(k) p(k). The full sentence representation produce a sequence of partial-sentence representation vectors pi~). vector i(k) is initialized to the last partial representation vector p). Furthermore, a direct-reference (k) input matrix D(k) is set to the sum of partial representation vectors corresponding to the words that. (k) that directly refer to node type n. This acts like an attention mechanism, by accumulating the partial representation vectors for the words that directly reference each type, and masking out the vectors. corresponding to other words.\nCaiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual an textual question answering. In Proceedings of The 33rd International Conference on Machin Learning, pp. 2397-2406, 2016.\n. The full sentence representation\nNext, a series of graph transformations are applied, as depicted in Algorithm[1] Depending on the task, direct reference updates and per-sentence propagation can be enabled or disabled. The output. function foutput will depend on the specific type of answer desired. If the answer is a single word,. foutput can be a multilayer perceptron followed by a softmax operation. If the answer is a sequence. of words, foutput can use a recurrent network (such as a GRU) to produce a sequence of outputs. transformations with different learned weights.\nSince the processing of the input and all of the graph transformations are differentiable, at this point the network output can be compared with the correct output for that query and used to update the network parameters, including both the GRU parameters used when processing the input and the internal weights associated with each transformation."}, {"section_index": "9", "section_name": "4.1 SUPERVISION", "section_text": "As with many supervised models, one can evaluate the loss based on the likelihood of producing. an incorrect answer, and then minimize the loss by backpropagation. However, based on initial. experiments, the model appeared to require additional supervision to extract meaningful graph. structured data. To provide this additional supervision, I found it beneficial to provide the correct graph at each timestep and train the network to produce that graph. This occurs in two stages, first. when new nodes are proposed, and then when edges are adjusted. For the edge adjustment, the edge. loss between a correct edge matrix C* and the computed edge matrix C is given by.\nLedge =-C* ln(C)(1C*)ln(1C)\nThe node adjustment is slightly more complex. Multiple nodes are added in each timestep, but the. order of those nodes is arbitrary, and only their existence is important. Thus it should be possible for the network to determine the optimal ordering of the nodes. In fact, this is important because there. 
is no guarantee that the nodes will be ordered consistently in the training data..\nVinyals et al.(2016) demonstrate a simple method for training a network to output unordered sets the network produces a sequence of outputs, and these outputs are compared with the closest order\nTlanslonlnlallonlfseudocode 1: g 11: G Tada(9, [i(k) hadd]) 2: for k from 1 to K do 12: 9 Tc(G,i(k)) 3: G Tn(9,i(k)) 13: end for 4: if direct reference enabled then 14: G Tquery(G,iquery) 5: G Th,direct(G,D(k)) 15: if direct reference enabled then 9 Tnudrc(G, Dquery) 6: end if 16: 7: if intermediate propagation enabled then 17: end if 18: g Tpupry(9) 8: G Tprop(9) 9: end if 19: hanswer Treuery () hadd Trepr(9) 10: 20: return foutput(hanswer)\n13: end for 14: GT 15: if direct reference enabled t 16: 17: end if 18: G Tppupry( ron 19: hanswer Trequery () 20: return foutput(hanswer inswer\ninput matrix D("}, {"section_index": "10", "section_name": "APPENDIX A BACKGROUND ON GG-NNS AND GGS-NNs", "section_text": "ing of the training data, i.e., the ordering of the training data which would produce the smallest loss. when compared with the network output. Vinyals et al.|show that when using this method, the net- work arbitrarily chooses an ordering which may not be the optimal ordering for the task. However. in this case any ordering should be sufficient, and I found the arbitrary orderings selected in this. way to work well in practice. In particular, letting s*() and x*(o) denote the correct strength and annotations of node v under ordering , the loss becomes.\nThis section gives additional background on the implementation of GG-NNs and GGS-NNs, de scribed byLi et al.(2016).\n|Vnew| Lnode = s*(u) ln(s) + (1- s*(v)) ln(1- s) + x*(v) ln(xy) max TT v=|Vo1d|+1\nt) + Ul a t = tanh(Wa(t) + U(r(t) o h"}, {"section_index": "11", "section_name": "4.2 OTHER TRANSFORMATION CONFIGURATIONS", "section_text": "The structure described in Algorithm 1is designed for question-answering tasks. However, due. to the composability of the individual graph transformations, other configurations could be used to solve other tasks that operate on structured data.."}, {"section_index": "12", "section_name": "5.1 BABI TASKS", "section_text": "M Sedge(v, v', y) O Py + Sedge(V', v,y) O P' U'EV y=1\nI evaluated the GGT-NN model on the bAbI tasks, a set of simple natural-language tasks, where eacl task is structured as a sequence of sentences followed by a query (Weston et al.|2016). The gener ation procedure for the bAbI tasks includes a \"Knowledge\" object for each sentence, representing the current state of knowledge after that sentence. I exposed this knowledge object in graph format and used this to train a GGT-NN in supervised mode. The knowledge object provides names fo. each node type, and direct reference was performed based on these names: if a word in the sentence matched a node type name, it was parsed as a direct reference to all nodes of that type. For details on this graphical format, see Appendix|C\nwhere Sedge(v, v', y) is 1 if e = (v, v) E and ye = y, and O otherwise"}, {"section_index": "13", "section_name": "5.1.1 ANALYSIS AND RESULTS", "section_text": "Gated Graph Sequence Neural Networks (GGS-NN) are an extension of GG-NNs to sequential .,o(K).At each output step k, the annotation matrix I is given by (k). Output o(1). k (k)1 E R|V|Ly. A GG-NN F, is trained to predict an output sequence o(k) from (k), and another GG-NN Fx is trained to predict (k+1) from (k). Prediction of the output at. 
each step is performed as in a normal GG-NN, and prediction of (k+1) from the set of all final. hidden states H(k,T) (after T propagation steps of Fx) occurs according to the equation.\nResults are shown in Tables[1and2 The GGT-NN model was able to reach 95% accuracy in all but one of the tasks, and reached 1o0% accuracy in eleven of them (see Table2). Additionally, for fourteen of the tasks, the model was able to reach 95% accuracy using 500 or fewer of the 1000 training examples (see Table1).\nThe only task that the GGT-NN was unable to solve with 95% accuracy was task 17 (Positional Reasoning), for which the model was not able to attain a high accuracy. Task 17 has a larger number\nRecall from section|2.2|that GG-NNs represent a graph G = (V, &) as a set V of nodes v with unique values 1,..., V and a set E of directed edges e = (v, v) E V V oriented from v to v'. Each node has an annotation x, E RN and a hidden state h, E RD. Additionally, each edge has a type Ye E {1,.::, M}\nInitially, hs') is set to the annotation x, padded with zeros. Then nodes exchange information for some fixed number of timesteps T according to the propagation model.\nHere at) represents the information received by each node from its neighbors in the graph, and the (t) matrix A E IRD|V|2D|V! has a specific structure that determines how nodes communicate. The first half of A, denoted A(out) E RD|V| D|VI, corresponds to outgoing edges, whereas the second half A(in) E RD|V| D|V| corresponds to incoming edges.\nFor instance, if a task consists of tracking relationships between a fixed set of objects, one could. construct a version of the model that does not use the new-nodes transformation (Tadd), but instead. only modifies edges. If the task was to extract information from an existing graph, a structure similar. to the GGS-NNs could be built by using only the propagation and aggregation transformations. If the. task was to construct a graph based on textual input, the query processing steps could be omitted, and. instead the final graph could be returned for processing. And if information should be gathered from. a sequence of graphs instead of from a single graph, the query processing steps could be modified. to run in parallel on the full sequence of graphs and extract information from each graph. This last modification is demonstrated in AppendixD\nhg = tanh Xo ) O tanh(j(h)\nI trained two versions of the GGT-NN model for each task: one with and one without direct refer ence. Tasks 3 and 5, which involve a complex temporal component, were trained with intermediate propagation, whereas all of the other tasks were not because the structure of the tasks made such complexity unnecessary. Most task models were configured to output a single word, but task 19. (pathfinding) used a GRU to output multiple words, and task 8 (listing) was configured to output a strength for each possible word to allow multiple words to be selected without having to consider ordering.\nk+1 X\nNode addition Node state update Edge update New states GRU Input GRU-style Dest update State GRU Q Q Q Q GRU 999(Q XH Q QQQj Propagation Aggregation blue frerd To node 2 + fbwd To node 3 X Output tanh(i) New states From node 2 From node 3 GRU-style update Input\nTable 1: Number of training examples needed before the GGT-NN model could attain 5% error. on each of the bAbI tasks. Experiments were run with 50, 100, 250, 500, and 1000 examples \"GGT-NN + direct ref\" denotes the performance of the model with direct reference, and \"GGT. 
NN\" denotes the performance of the model without direct reference. Dashes indicate that the model. was unable to reach the desired accuracy with 1000 examples..\nFigure 4: Diagram of the operations performed for each class of transformation. Graph state is shown in the format given by Figure [1 Input and output are shown as gray boxes. Black dots represent concatenation, and + and represent addition and multiplication, respectively. 1 - #/. represents taking the input value and subtracting it from 1. Note that for simplicity, operations are only shown for single nodes or edges, although the operations act on all nodes and edges in parallel. In particular, the propagation section focuses on information sent and received by the first node only. In that section the strengths of the edges in the connectivity matrix determine what information is. sent to each of the other nodes. Light gray connections indicate the value zero, corresponding to. situations where a given edge is not present."}, {"section_index": "14", "section_name": "APPENDIX B GRAPH TRANSFORMATION DETAILS", "section_text": "In this section I describe in detail the implementations of each type of differentiable graph trans formation!'A diagram of the implementation of each transformation is shown in Figure |4 Note that it is natural to think of these transformations as operating on a single graphical state, and each modifying the state in place. However, in the technical descriptions of these transformations, th operations will be described as functions that take in an old graph and produce a new one, similarly to unrolling a recurrent network over time."}, {"section_index": "15", "section_name": "B.1 NODE ADDITION", "section_text": "The node addition transformation Tadd : T R -> T takes as input a graph G and an input vecto a E Ra, and produces a graph g' with additional nodes. The annotation and strength of each new node is determined by a function fadd : R R -> R RN R, where a is the length of the. input vector, is the length of the internal state vector, and as before N is the number of node types The new nodes are then produced according to.\n(S|Vg|+i,X|Vg|+i,hi) = fadd(a,h-1)\nTable 2: Error rates of various models on the bAbI tasks. Bold indicates 5% error. For descriptions. of each of the tasks, see Table[1] \"GGT-NN + direct ref\" denotes the GGT-NN model with direct reference, and \"GGT-NN\"' denotes the version without direct reference. See text for details regarding. the models used for comparison. Results from LSTM and MemNN reproduced from Weston et al. (2016). Results from other existing models reproduced fromHenaff et al.(2016).\nNG-LNN NG-LNN Task Task 1 - Single Supporting Fact. 100 1000 11 - Basic Coreference 100 1000 2 - Two Supporting Facts. 250 12 - Conjunction 500 1000 3 - Three Supporting Facts. 1000 13 - Compound Coref. 100 1000 4 - Two Arg. Relations 1000 1000 14 - Time Reasoning 1000 5 - Three Arg. Relations. 500 15 - Basic Deduction 500 500 6 - Yes/No Questions 100 16 - Basic Induction 100 500 7 - Counting 250 17 - Positional Reasoning. 8 - Lists/Sets 250 1000 18 - Size Reasoning 1000 9 - Simple Negation 250 19 - Path Finding 500 0 - Indefinite Knowledge 1000 20 - Agent's Motivations. 
250 250\n1,000 examples 10,000 examples CC-LNN run treee te NernN WLN-V WLST +NWC WLN DNC Task 1 0 0.7 50.0 0 0 0.7 31.5 4.4 0 0 0 0 2 0 5.7 80.0 0 8.3 56.4 54.5 27.5 0.3 0.4 0.3 0.1 3 1.3 12.0 80.0 0 40.3 69.7 43.9 71.3 2.1 1.8 1.1 4.1 4 1.2 2.2 39.0 0 2.8 1.4 0 0 0 0 0 0 5 1.6 10.9 30.0 2.0 13.1 4.6 0.8 1.7 0.8 0.8 0.5 0.3 6 0 7.7 52.0 0 7.6 30.0 17.1 1.5 0.1 0 0 0.2 7 0 5.6 51.0 15.0 17.3 22.3 17.8 6.0 2.0 0.6 2.4 0 8 0 3.3 55.0 9.0 10.0 19.2 13.8 1.7 0.9 0.3 0 0.5 9 0 11.6 36.0 0 13.2 31.5 16.4 0.6 0.3 0.2 0 0.1 10 3.4 28.6 56.0 2.0 15.1 15.6 16.6 19.8 0 0.2 0 0.6 11 0 0.2 28.0 0 0.9 8.0 15.2 0 0 0 0 0.3 12 0.1 0.7 26.0 0 0.2 0.8 8.9 6.2 0 0 0.2 0 13 0 0.8 6.0 0 0.4 9.0 7.4 7.5 0 0 0 1.3 14 2.2 55.1 73.0 1.0 1.7 62.9 24.2 17.5 0.2 0.4 0.2 0 15 0.9 0 79.0 0 0 57.8 47.0 0 0 0 0 0 16 0 0 77.0 0 1.3 53.2 53.6 49.6 51.8 55.1 45.3 0.2 17 34.5 48.0 49.0 35.0 51.0 46.4 25.5 1.2 18.6 12.0 4.2 0.5 18 2.1 10.6 48.0 5.0 11.1 8.8 2.2 0.2 5.3 0.8 2.1 0.3 19 0 70.6 92.0 64.0 82.8 90.4 4.3 39.5 2.3 3.9 0 2.3 20 0 1.0 9.0 0 0 2.6 1.5 0 0 0 0 0\nstarting with ho initialized to some learned initial state, and recurrently computing s, and x, for. each new node, up to some maximum number of nodes. Based on initial experiments, I found that. implementing fadd as a GRU layer followed by 2 hidden tanh layers was effective, although other. recurrent networks would likely be similarly effective. The node hidden states h,, are initialized to zero. The recurrence should be computed as many times as the maximum number of nodes that\n1The code for each transformation, and for the GGT-NN model itself, is available at|ht tps : / / git hub om/hexahedria/gated-graph-transformer-network\nmight be produced. The recurrent function fadd can learn to output s, = 0 for some nodes to create fewer nodes, if necessary.\nof possible entities than the other tasks: each entity consists of a color (chosen from five options and a shape (chosen from four shapes), for a total of 20 unique entities that must be represented. separately. Additionally, the stories are much shorter than those in other tasks (2 facts for each se1 of 8 questions). It is likely that these additional complexities caused the network performance tc. suffer.\nFor comparison, accuracy on the bAbI tasks is also included for a simple sequence-to-sequence. LSTM model and for a variety of existing state-of-the-art approaches (see Table 2): a simple. sequence-to-sequence LSTM model, as implemented in Weston et al.(2016), a modified Mem- ory Network model (MemNN, Weston et al.] 2016), End-To-End Memory Network (MemN2N.. Sukhbaatar et al.]2015), Recurrent Entity Network (EntNet, Henaff et al.]2016), Neural Turing. Machine (NTM, Graves et al.]2014), Dynamic NTM (D-NTM, Gulcehre et al.]2016), a larger version of the MemN2N model with weight tying and nonlinearity (MemN2N*, Sukhbaatar et al.. 2015), Differentiable Neural Computer (DNC,Graves et al.J2016), and Dynamic Memory Network (DMN+,Xiong et al.]2016). Although the GGT-NN model was trained using only 1,000 training. examples, results using 10,o00 examples have also been reproduced here for comparison. Also, it is important to note that the GGT-NN and MemNN models were trained with strong supervision:. the GGT-NN model was trained with full graph information, and the MemNN model was trained. with information on which sentences were relevant to the query. All other models were trained. 
end-to-end without additional supervision."}, {"section_index": "16", "section_name": "B.2 NODE STATE UPDATE", "section_text": "ry = (Wr|a x,+ Urhv +br)z zy =(Wz|axv+Uzhy+bz) h', = tanh(W[a x,] + U (r O h,) + b), h', =z, O h, +(1 -zy) O h,\nry = o(Wr[a xy] + U,hy + br), h', = tanh(W[a x,] + U (r O hy) + b)\nSince the GGT-NN and MemNN models are both strongly supervised, it is interesting to note tha each approach outperforms the other on a subset of the tasks. In particular, the GGT-NN model with direct reference attains a higher level of accuracy on the following tasks, with an improvement o1 0.4-64% depending on the task: task 5 (0.4%), task 7 (15%), task 8 (9%), task 17 (0.5%), task 18 (2.9%), and task 19 (64%). This may indicate that a graphical representation is superior to a list o sentence memories for solving these tasks. On the other hand, the MemNN model outperforms the GGT-NN model (0.1-2.9% greater accuracy) on tasks 3, 4, 10, 12, 14, and 15.\nFor some tasks, performance can be improved by providing information to nodes of a particular type only. For instance, if the input is a sentence, and one word of that sentence directly refers to a node type (e.g., if nodes of type 1 represent Mary, and Mary appears in the sentence), it can be helpful tc allow all nodes of type 1 to perform an update using this information. To accomplish this, Th car be modified to take node types into account. (This modification is denoted Th,direct.) Instead of a single vector a E R, the direct-reference transformation takes in A E RNxa, where An E Ra is the input vector for nodes with type n. The update equations then become\nOf particular interest is the performance on task 19, the pathfinding task, for which the GGT-NN model with direct reference performs better than all but one of the other models (DMN+), anc shows a large improvement over the performance of the MemNN model. This is reasonable, since pathfinding is a task that is naturally suited to a graphical representation. The shortest path betweer two nodes can be easily found by sending information across all paths away from one of the nodes ir a distributed fashion, which the GGT-NN model allows. Note that the preexisting GGS-NN mode (discussed in Section2.2) was also able to successfully learn the pathfinding task, but required the. input to be preprocessed into graphical form even when evaluating the model, and thus could no. be directly evaluated on the textual form of any of the bAbI tasks (Li et al.]2016). The curren results demonstrate that the proposed GGT-NN model is able to solve the pathfinding task wher given textual input."}, {"section_index": "17", "section_name": "B.3 EDGE UPDATE", "section_text": "The edge update transformation Tc : F R -> T takes a graph G and an input vector a E Ra, and produces a graph g' with updated edges. For each pair of nodes (v, v'), the update equations are.\nSimilarly, both variants of the GGT-NN model show improvement over many other models on task. 16, the induction task. Solving the induction task requires being able to infer relationships based on. similarities between entities. (One example from this task: Lily is a swan. Lily is white. Bernhard is green. Greg is a swan. What color is Greg? A:white.) In a graphical setting, this can be done. by following a sequence of edges (Greg -> swan -> Lily > white), and the performance of the. 
GGT-NN model indicates that this task is particularly suited to such a representation..\nIn general, the GGT-NN model with direct reference performs better than the model without it. The. model with direct reference reaches 95% accuracy on 19/20 of the bAbI tasks, while the model without direct reference reaches that level of accuracy on 9/20 of the tasks (see Table 2). Addi-. tionally, when compared to the direct-reference model, the model without direct reference requires more training examples in order to reach the accuracy threshold (see Table|1). This indicates that although the model can be used without direct reference, adding direct reference greatly improves. the training of the model.."}, {"section_index": "18", "section_name": "5.2 RULE DISCOVERY TASKS", "section_text": "The propagation transformation Tprop : F -> I takes a graph G = g(o) and runs a series of T propagation steps (as in GG-NN), returning the resulting graph G' = G(T). The GG-NN propagation step is extended to handle node and edge strengths, as well as to allow more processing to occur to\nTo demonstrate the power of GGT-NN to model a wide variety of graph-based problems, I applied the GGT-NN to two additional tasks. In each task, a sequence of data structures were transformed into a graphical format, and the GGT-NN was tasked with predicting the data for the next timestep\nNote that in order to use information from all of the existing nodes to produce the new nodes, the input to this transformation should include information provided by an aggregation transformation. Trepr, described in sectionB.5\nThe node state update transformation Tn : I Ra -> T takes as input a graph G and an input vector a E Ra, and produces a graph g' with updated node states. This is accomplished by performing a GRU-style update for each node, where the input is a concatenation of a and that node's annotation vector x, and the state is the node's hidden state, according to\nry = o(Wr|a, xv+ Urhv + br), Zy = o(Wz[ay xu]+ Uzhy+bz) ; h', =tanh(W[a, xy] +U(rOhy) + b), h', =z, Oh,+(1-zy) O h'\nCv,v' = fset(a, Xu, hu, Xu', hy,) rv,v' = freset(a, Xv, hu, Xu', hy,) Cy,v' = (1-Cu,v') O Cu,v'+ Cu,v'O (1-ru,u')\nThe functions fset, freset : IR2N 2D -> [0, 1]Y are implemented as neural networks.(In my experiments, I used a simple 2-layer fully connected network.) cu,v',y gives the level of belief in [0, 1] that an edge from v to v' of type y should be created if it does not exist, and ru,v',y gives the level of belief in [0, 1] that an edge from v to v' of type y should be removed if it does. Setting both to zero results in no change for that edge, and setting both to 1 toggles the edge state.\nthe information transferred across edges. The full pr agation equations for step t are.\nv'EV y=] ) = o xy]+ U,h(t-1) + br tanh(W[ (t\nTable 3: Accuracy of GGT-NN on the Rule 30 Automaton and Turing Machine tasks\n000 iterations 2000 iterations 3000 iterations 7000 iterations Ground truth . .. .8. r .....p..\nEquation|5|has been adjusted in the most significant manner (relative to Equation|2). In particular Sy' restricts propagation so that nodes with low strength send less information to adjacent nodes. Sedge has been replaced with C to allow edges with fractional strength, and the propagation matrices. Py, P' have been replaced with arbitrary functions ffwd, fbwd : RN RD > R, where is the. length of the vector a. I used a fully connected layer to implement each function in my experiments.. 
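As a rough illustration of the strength-weighted propagation step described above (the adjusted Equation 5 followed by a GRU-style state update), here is a small NumPy sketch. It is not the paper's implementation: the use of a single linear map per edge type for f_fwd and f_bwd, the parameter shapes, and all variable names are assumptions made for this example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagation_step(x, h, s, C, params):
    # x: (V, N) annotations, h: (V, D) hidden states,
    # s: (V,) node strengths, C: (V, V, Y) edge strengths in [0, 1].
    V, D = h.shape
    Y = C.shape[2]
    xh = np.concatenate([x, h], axis=1)      # (V, N + D)
    a = np.zeros((V, D))
    for y in range(Y):
        msg_fwd = xh @ params["W_fwd"][y]    # forward message each node would send
        msg_bwd = xh @ params["W_bwd"][y]    # backward message each node would send
        weighted_fwd = s[:, None] * msg_fwd  # low-strength nodes send less information
        weighted_bwd = s[:, None] * msg_bwd
        # C[v2, v, y] carries forward messages from v2 into v along v2 -> v edges;
        # C[v, v2, y] carries backward messages from v2 into v along v -> v2 edges.
        a += C[:, :, y].T @ weighted_fwd + C[:, :, y] @ weighted_bwd
    # GRU-style update of the node states from the aggregated input a.
    ax = np.concatenate([a, x], axis=1)
    r = sigmoid(ax @ params["W_r"] + h @ params["U_r"] + params["b_r"])
    z = sigmoid(ax @ params["W_z"] + h @ params["U_z"] + params["b_z"])
    h_tilde = np.tanh(ax @ params["W_h"] + (r * h) @ params["U_h"] + params["b_h"])
    return z * h + (1.0 - z) * h_tilde

# Randomly initialized parameters with matching shapes (illustrative only).
rng = np.random.default_rng(0)
V, N, D, Y = 6, 4, 8, 3
params = {
    "W_fwd": 0.1 * rng.normal(size=(Y, N + D, D)),
    "W_bwd": 0.1 * rng.normal(size=(Y, N + D, D)),
    "W_r": 0.1 * rng.normal(size=(D + N, D)), "U_r": 0.1 * rng.normal(size=(D, D)), "b_r": np.zeros(D),
    "W_z": 0.1 * rng.normal(size=(D + N, D)), "U_z": 0.1 * rng.normal(size=(D, D)), "b_z": np.zeros(D),
    "W_h": 0.1 * rng.normal(size=(D + N, D)), "U_h": 0.1 * rng.normal(size=(D, D)), "b_h": np.zeros(D),
}
x = np.eye(N)[rng.integers(0, N, size=V)]
h = np.zeros((V, D))
s = np.ones(V)
C = rng.uniform(0.0, 1.0, size=(V, V, Y))
h = propagation_step(x, h, s, C, params)

Running several such steps in sequence corresponds to the propagation transformation T_prop with T propagation steps.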
Equations[6[ 7] and[8|have also been modified slightly to add a bias term.\nFigure 3: Visualization of network performance on the Rule 30 Automaton task. Top node (purple represents zero, bottom node (blue) represents 1, and middle nodes (green, orange, and red) repre sent individual cells. Blue edges indicate adjacent cells, and gold edges indicate the value of each cell. Three timesteps occur between each row.."}, {"section_index": "19", "section_name": "B.5 AGGREGATION", "section_text": "The aggregation transformation Trepr : T -> R produces a graph-level representation vector from a graph. It functions very similarly to the output representation of a GG-NN (equation|3), combining an attention mechanism with a node representation function, but is modified slightly to take into account node strengths. As in GG-NN, both i and j are neural networks, and in practice a single fully connected layer appears to be adequate for both.\nbased on the current timestep. No additional information was provided as textual input; instead, the. network was tasked with learning the rules governing the evolution of the graph structure over time"}, {"section_index": "20", "section_name": "5.2.1 CELLULAR AUTOMATON TASK", "section_text": "The first task used was a 1-dimensional cellular automaton, specifically the binary cellular automa. ton known as Rule 30 (Wolfram2002). Rule 30 acts on an infinite set of cells, each with a binary. state (either O or 1). At each timestep, each cell deterministically changes state based on its previous state and the states of its neighbors. In particular, the update rules are.\nThe knowledge graph object used during generation of the bAbI tasks is structured as a dictionary relating entities to each other with specific relationship types. Entities are identified based on thei names, and include people (John, Mary, Sandra), locations (bedroom, kitchen, garden), object (football, apple, suitcase), animals (mouse, wolf, cat), and colors (white, yellow, green), depending on the particular task. Relationships between entities are also expressed as strings, and are directed if John is holding the milk there is an \"is_in' relationship from \"milk' to \"John'; if Sandra is ir the bedroom there is an \"is_in'' relationship from \"Sandra' to \"bedroom'; if Lily is green there is a \"has_color\"' relationship from \"Lily\"' to \"green\", etc.\nCell states can be converted into graphical format by treating the cells as a linked list. Each of the cells is represented by a node with edges connecting it to the cell's neighbors, and a value edge is used to indicate whether the cell is O or 1. This format is described in more detail in AppendixC"}, {"section_index": "21", "section_name": "5.2.2 TURING MACHINES", "section_text": "The second task was simulating an arbitrary 2-symbol 4-state Turing machine. A Turing machin. operates on an infinite tape of cells, each containing a symbol from a finite set of possible symbols. It has a head, which points at a particular cell and can read and write the symbol at that cell. It als has an internal state, from a finite set of states. At each timestep, based on the current state and th contents of the cell at the head, the machine writes a new symbol, changes the internal state, and ca move the head left or right or leave it in place. The action of the machine depends on a finite set o. rules, which specify the actions to take for each state-symbol combination. 
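To make the data side of this task concrete, here is a minimal Python sketch of a 2-symbol Turing machine simulator of the kind described above. It is illustrative only and does not reproduce the paper's generation code; the rule encoding and names are assumptions of this example, and the toy machine below uses 2 states rather than 4.

from collections import defaultdict

def run_turing_machine(rules, start_state, tape, head=0, steps=6):
    # rules maps (state, read_symbol) -> (write_symbol, move, next_state),
    # where move is -1 (left), 0 (stay) or +1 (right).
    cells = defaultdict(int)              # unvisited cells hold symbol 0
    for i, symbol in enumerate(tape):
        cells[i] = symbol
    state = start_state
    history = [(state, head, dict(cells))]
    for _ in range(steps):
        write_symbol, move, state = rules[(state, cells[head])]
        cells[head] = write_symbol
        head += move
        history.append((state, head, dict(cells)))
    return history

# A toy rule set for a 2-symbol, 2-state machine (hypothetical example).
rules = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (0, +1, "A"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "B"),
}
for state, head, cells in run_turing_machine(rules, "A", [0, 1, 1, 0], steps=6):
    print(state, head, [cells[i] for i in sorted(cells)])

Each entry of the returned history corresponds to what the graph should encode after one "run" input: the current machine state, the head position, and the tape contents.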
Note that the version of Turing machine used here has only 2 symbols, and requires that the initial contents of the tape be all 0 (the first symbol) except for finitely many 1s (the second symbol).

The transformation from the knowledge object to a graph is straightforward: each entity used is assigned to a new node type, and relationships between entities are represented as edges between the corresponding nodes. To avoid confusion from overloaded relationships (such as "is_in" being used to represent an object being held by a person as well as a person being in a room), relation names are given a distinct edge type depending on the usage context. For instance, when a person is carrying an object, the generic "is_in" relationship becomes an edge of type "gettable_is_in_actor".

Some of the graph representations had to be modified in order to ensure that they contained all of the necessary information. For instance, task 3 requires the network to remember where items were in the past, but the knowledge object only contained references to their current locations. In these cases, a linked list structure was added to the knowledge object to allow the history information to be represented in the graph.

In particular, each time an item changed locations, a new "record" node was added, with a "previous" edge to the previous history node and a "value" edge to the current location of the item. Each item was then connected to its most recent history node using a "history-head" edge. This ensures that the history of each item is present in the graph.

When converting a Turing machine to graphical format, the tape of the machine is modeled as a linked list of cells. Additionally, each state of the machine is denoted by a state node, and edges between these nodes encode the transition rules. There is also a head node, which connects both to the current cell and to the current state of the machine. See Appendix C for more details.

5.2.3 ANALYSIS AND RESULTS

The GGT-NN model was trained on 1000 examples of the Rule 30 automaton with different initial states, each of which simulated 7 timesteps of the automaton, and 20,000 examples of the Turing machine task.

[Headings from a displaced figure or table: Original Task, Generalization: 20, Generalization: 30.]

h_G = tanh( Σ_{v∈V} s_v · σ(i(h_v, x_v)) ⊙ tanh(j(h_v, x_v)) )

The Rule 30 update rule is:

Current neighborhood: 111 110 101 100 011 010 001 000
Next value:             0   0   0   1   1   1   1   0
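The update rule in the table above can be written directly as a short Python function. The sketch below is illustrative rather than the paper's generation code, and it simplifies the task by using a fixed-width row with out-of-range neighbors assumed to be 0.

RULE_30 = {  # (left, center, right) -> next value, matching the table above
    (1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def rule30_step(cells):
    padded = [0] + list(cells) + [0]
    return [RULE_30[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

def simulate(cells, timesteps=7):
    rows = [list(cells)]
    for _ in range(timesteps):
        rows.append(rule30_step(rows[-1]))
    return rows

for row in simulate([0, 0, 0, 1, 0, 0, 0], timesteps=7):
    print("".join(str(c) for c in row))

Converting each row into the linked-list graph format described in Section 5.2.1 then yields one training sequence for the cellular automaton task.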
S1Bb3D5gg
"[{\"section_index\": \"0\", \"section_name\": \"LEARNING END-TO-END GOAL-ORIENTED DIALOG\", \"secti(...TRUNCATED)
r10FA8Kxg
"[{\"section_index\": \"0\", \"section_name\": \"5 CONCLUSIONS\", \"section_text\": \"We train shall(...TRUNCATED)
Hk85q85ee
"[{\"section_index\": \"0\", \"section_name\": \"REFERENCES\", \"section_text\": \"Choromanska, Anna(...TRUNCATED)
Skvgqgqxe
"[{\"section_index\": \"0\", \"section_name\": \"LEARNING TO COMPOSE WORDS INTO SENTENCES WITH REINF(...TRUNCATED)
BymIbLKgl
"[{\"section_index\": \"0\", \"section_name\": \"ACKNOWLEDGMENTS\", \"section_text\": \"This project(...TRUNCATED)
rJ8Je4clg
"[{\"section_index\": \"0\", \"section_name\": \"REFERENCES\", \"section_text\": \"Frank S. He\\nDep(...TRUNCATED)
By1snw5gl
"[{\"section_index\": \"0\", \"section_name\": \"REFERENCES\", \"section_text\": \"Johannes Brust, J(...TRUNCATED)