HkwoSDPgg
[{"section_index": "0", "section_name": "SEMI-SUPERVISED KNOWLEDGE TRANSFER FOR DEEP LEARNING FROM PRIVATE TRAINING DATA", "section_text": "Nicolas Papernot\nMartin Abadi\nGoogle Brain\nPennsylvania State University\ngoodfellow@google.com\nSome machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information.\nTo address this problem, we demonstrate a generally applicable approach to pro- viding strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not pub- lished, but instead used as \"teachers\"' for a \"student\"' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Some machine learning applications with great benefits are enabled only through the analysis oi. sensitive data, such as users' personal contacts, private photographs or correspondence, or even. medical records or genetic sequences (Alipanahi et al.]2015] Kannan et al.]2016] Kononenko]2001 Sweeney[1997). Ideally, in those cases, the learning algorithms would protect the privacy of users. training data, e.g., by guaranteeing that the output model generalizes away from the specifics of any. individual user. Unfortunately, established machine learning algorithms make no such guarantee indeed, though state-of-the-art algorithms generalize well to the test set, they continue to overfit on. specific training examples in the sense that some of these examples are implicitly memorized..\nRecent attacks exploiting this implicit memorization in machine learning have demonstrated that. private, sensitive training data can be recovered from models. Such attacks can proceed directly, by. analyzing internal model parameters, but also indirectly, by repeatedly querying opaque models to. gather data for the attack's analysis. For example, [Fredrikson et al.(2015) used hill-climbing on the. output probabilities of a computer-vision classifier to reveal individual faces from the training data\nUlfar Erlingsson\nabadi@google.com\nulfar@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. 
We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning\nBecause of those demonstrations-and because privacy guarantees must apply to worst-case out liers, not only the average- any strategy for protecting the privacy of training data should prudently assume that attackers have unfettered access to internal model parameters..\nTo protect the privacy of training data, this paper improves upon a specific, structured application of. the techniques of knowledge aggregation and transfer (Breiman|[1994), previously explored byNis- sim et al.(2007), Pathak et al.(2010), and particularlyHamm et al.[(2016). In this strategy, first, an ensemble (Dietterich!20o0) of teacher models is trained on disjoint subsets of the sensitive data Then, using auxiliary, unlabeled non-sensitive data, a student model is trained on the aggregate out-. put of the ensemble, such that the student learns to accurately mimic the ensemble. Intuitively, this. strategy ensures that the student does not depend on the details of any single sensitive training data. point (e.g., of any single user), and, thereby, the privacy of the training data is protected even if. attackers can observe the student's internal model parameters..\nThis paper shows how this strategy's privacy guarantees can be strengthened by restricting student training to a limited number of teacher votes, and by revealing only the topmost vote after care. fully adding random noise. We call this strengthened strategy PATE, for Private Aggregation oJ Teacher Ensembles. Furthermore, we introduce an improved privacy analysis that makes the strat egy generally applicable to machine learning algorithms with high utility and meaningful privacy guarantees--in particular, when combined with semi-supervised learning.\nTo establish strong privacy guarantees, it is important to limit the student's access to its teachers. so that the student's exposure to teachers' knowledge can be meaningfully quantified and bounded Fortunately, there are many techniques for speeding up knowledge transfer that can reduce the rate of student/teacher consultation during learning. We describe several techniques in this paper, the most effective of which makes use of generative adversarial networks (GANs) (Goodfellow et al. 2014) applied to semi-supervised learning, using the implementation proposed by Salimans et al. (2016). For clarity, we use the term PATE-G when our approach is combined with generative, semi- supervised methods. Like all semi-supervised learning methods, PATE-G assumes the student has access to additional, unlabeled data, which, in this context, must be public or non-sensitive. This assumption should not greatly restrict our method's applicability: even when learning on sensitive data, a non-overlapping, unlabeled set of data often exists, from which semi-supervised methods can extract distribution priors. For instance, public datasets exist for text and images, and for medical data.\nIt seems intuitive, or even obvious, that a student machine learning model will provide good privacy when trained without access to sensitive training data, apart from a few, noisy votes from a teacher quorum. However, intuition is not sufficient because privacy properties can be surprisingly hard to reason about; for example, even a single data item can greatly impact machine learning models trained on a large corpus (Chaudhuri et al.|2011). 
Therefore, to limit the effect of any single sensitive data item on the student's learning, precisely and formally, we apply the well-established, rigorous standard of differential privacy (Dwork & Roth!2014). Like all differentially private algorithms, our learning strategy carefully adds noise, so that the privacy impact of each data item can be analyzed and bounded. In particular, we dynamically analyze the sensitivity of the teachers' noisy votes: for this purpose, we use the state-of-the-art moments accountant technique fromAbadi et al.(2016). which tightens the privacy bound when the topmost vote has a large quorum. As a result, for MNIST and similar benchmark learning tasks, our methods allow students to provide excellent utility, while our analysis provides meaningful worst-case guarantees. In particular, we can bound the metric for privacy loss (the differential-privacy e) to a range similar to that of existing, real-world privacy protection mechanisms, such as Google's RAPPOR (Erlingsson et al.]2014).\nFinally, it is an important advantage that our learning strategy and our privacy analysis do not depend on the details of the machine learning techniques used to train either the teachers or their student Therefore, the techniques in this paper apply equally well for deep learning methods, or any such learning methods with large numbers of parameters, as they do for shallow, simple techniques. In comparison, Hamm et al.(2016) guarantee privacy only conditionally, for a restricted class of student classifiers-in effect, limiting applicability to logistic regression with convex loss. Also, unlike the methods of Abadi et al.(2016), which represent the state-of-the-art in differentially private deep learning, our techniques make no assumptions about details such as batch selection, the loss function, or the choice of the optimization algorithm. Even so, as we show in experiments on\nFigure 1: Overview of the approach: (1) an ensemble of teachers is trained on disjoint subsets of the sensitive data, (2) a student model is trained on public data labeled using the ensemble\nOur results are encouraging, and highlight the benefits of combining a learning strategy based on. semi-supervised knowledge transfer with a precise, data-dependent privacy analysis. However, the most appealing aspect of this work is probably that its guarantees can be compelling to both an expert and a non-expert audience. In combination, our techniques simultaneously provide both an intuitive and a rigorous guarantee of training data privacy, without sacrificing the utility of the targeted model. This gives hope that users will increasingly be able to confidently and safely benefit from machine learning models built from their sensitive data..\nIn this section, we introduce the specifics of the PATE approach, which is illustrated in Figure[1 We describe how the data is partitioned to train an ensemble of teachers, and how the predictions made by this ensemble are noisily aggregated. In addition, we discuss how GANs can be used in. training the student, and distinguish PATE-G variants that improve our approach using generative,. semi-supervised methods.\nNot accessible by adversary Accessible by adversary Data 1 Teacher 1 Data 2 Teacher 2 Sensitive Aggregate Student Queries Data Data 3 Teacher 3 Teacher Predicted Incomplete Data n Teacher n. 
completion Public Data Training Prediction Data feeding\nMNIST and SVHN, our techniques provide a privacy/utility tradeoff that equals or improves upon bespoke learning methods such as those of Abadi et al.(2016).\nWe demonstrate a general machine learning strategy, the PATE approach, that provides dif-. ferential privacy for training data in a \"black-box\"' manner, i.e., independent of the learning. algorithm, as demonstrated by Section4|and Appendix |C We improve upon the strategy outlined inHamm et al.(2016) for learning machine models that protect training data privacy. In particular, our student only accesses the teachers' top. vote and the model does not need to be trained with a restricted class of convex losses. We explore four different approaches for reducing the student's dependence on its teachers.. and show how the application of GANs to semi-supervised learning of Salimans et al.. (2016) can greatly reduce the privacy loss by radically reducing the need for supervision.. We present a new application of the moments accountant technique from|Abadi et al.[(2016 for improving the differential-privacy analysis of knowledge transfer, which allows the. training of students with meaningful privacy bounds.. . We evaluate our framework on MNIST and SVHN, allowing for a comparison of our results. with previous differentially private machine learning methods. Our classifiers achieve an. (c, ) differential-privacy bound of (2.04, 10-5) for MNIST and (8.19, 10-6) for SVHN, respectively with accuracy of 98.00% and 90.66%. In comparison, for MNIST,|Abadi et al.. (2016) obtain a looser (8, 10-5) privacy bound and 97% accuracy. For SVHN,Shokri & Shmatikov(2015) report approx. 92% accuracy with e > 2 per each of 300,000 model pa- rameters, naively making the total e > 600,000, which guarantees no meaningful privacy.. Finally, we show that the PATE approach can be successfully applied to other model struc-. tures and to datasets with different characteristics. In particular, in Appendix C PATE. protects the privacy of medical data used to train a model based on random forests..\nData partitioning and teachers: Instead of training a single model to solve the task associated with dataset (X, Y), where X denotes the set of inputs, and Y the set of labels, we partition the data in n disjoint sets (Xn, Yn) and train a model separately on each set. As evaluated in Section4.1] assum- ing that n is not too large with respect to the dataset size and task complexity, we obtain n classifiers fi called teachers. We then deploy them as an ensemble making predictions on unseen inputs x by querying each teacher for a prediction f;(x) and aggregating these into a single prediction.\nAggregation: The privacy guarantees of this teacher ensemble stems from its aggregation. Let m be the number of classes in our task. The label count for a given class j E [m] and an input x is the number of teachers that assigned class j to input x: n;(x) = {i : i E [n], f(x) = j}|. If we simply apply plurality-use the label with the largest count--the ensemble's decision may depend on a single teacher's vote. Indeed, when two labels have a vote count differing by at most one, there is a tie: the aggregated output changes if one teacher makes a different prediction. 
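To make this aggregation step concrete, the sketch below tallies teacher votes for a single input and applies plurality, together with the Laplace-noised variant given in Equation (1) just below. It is a minimal NumPy sketch under stated assumptions rather than the released implementation: each teacher is assumed to expose a predict method returning one integer class label for the input, and gamma is the privacy parameter defined alongside Equation (1).

```python
import numpy as np

def label_counts(teachers, x, num_classes):
    # n_j(x): number of teachers assigning class j to input x
    # (each teacher is assumed to return a single integer label for x)
    votes = np.array([teacher.predict(x) for teacher in teachers])
    return np.bincount(votes, minlength=num_classes)

def plurality_label(teachers, x, num_classes):
    # Non-private baseline: the label with the largest vote count
    return int(np.argmax(label_counts(teachers, x, num_classes)))

def noisy_plurality_label(teachers, x, num_classes, gamma, rng):
    # Noisy aggregation of Equation (1): perturb each count with Lap(1/gamma)
    counts = label_counts(teachers, x, num_classes)
    noise = rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts + noise))
```

For example, with the setting used later in Section 4 (250 teachers and gamma = 0.05), the Laplacian noise added to each count has scale 20.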
We add random noise to the vote counts n_j to introduce ambiguity:

f(x) = \arg\max_j \left\{ n_j(x) + \mathrm{Lap}\!\left(\frac{1}{\gamma}\right) \right\}    (1)

While we could use an f such as above to make predictions, the noise required would increase as we make more predictions, making the model useless after a bounded number of queries. Furthermore, privacy guarantees do not hold when an adversary has access to the model parameters. Indeed, as each teacher f_i was trained without taking privacy into account, it is conceivable that the teachers have sufficient capacity to retain details of the training data. To address these limitations, we train another model, the student, using a fixed number of labels predicted by the teacher ensemble.

We train a student on nonsensitive and unlabeled data, some of which we label using the aggregation mechanism. This student model is the one deployed, in lieu of the teacher ensemble, so as to fix the privacy loss to a value that does not grow with the number of user queries made to the student model. Indeed, the privacy loss is now determined by the number of queries made to the teacher ensemble during student training and does not increase as end-users query the deployed student model. Thus, the privacy of users who contributed to the original training dataset is preserved even if the student's architecture and parameters are public or reverse-engineered by an adversary.

We considered several techniques to trade off the student model's quality against the number of labels it needs to access: distillation, active learning, and semi-supervised learning (see Appendix B). Here, we only describe the most successful one, used in PATE-G: semi-supervised learning with GANs.

Training the student with GANs: The GAN framework involves two machine learning models, a generator and a discriminator. They are trained in a competing fashion, in what can be viewed as a two-player game (Goodfellow et al., 2014). The generator produces samples from the data distribution by transforming vectors sampled from a Gaussian distribution. The discriminator is trained to distinguish samples artificially produced by the generator from samples drawn from the real data distribution. Models are trained via simultaneous gradient descent steps on both players' costs. In practice, these dynamics are often difficult to control when the strategy set is non-convex (e.g., a DNN). In their application of GANs to semi-supervised learning, Salimans et al. (2016) made the following modifications. The discriminator is extended from a binary classifier (data vs. generator sample) to a multi-class classifier (one of k classes of data samples, plus a class for generated samples). This classifier is then trained to classify labeled real samples in the correct class, unlabeled real samples in any of the k classes, and the generated samples in the additional class.

In Equation (1), γ is a privacy parameter and Lap(b) denotes the Laplacian distribution with location 0 and scale b. The parameter γ influences the privacy guarantee we can prove. Intuitively, a small γ leads
to a strong privacy guarantee, but can degrade the accuracy of the labels, as the noisy maximum f above can differ from the true plurality.\nAlthough no formal results currently explain why yet, the technique was empirically demonstrated to greatly improve semi-supervised learning of classifiers on several datasets, especially when the classifier is trained with feature matching loss (Salimans et al.[|2016).\nTraining the student in a semi-supervised fashion makes better use of the entire data available to the student, while still only labeling a subset of it. Unlabeled inputs are used in unsupervised learning to estimate a good prior for the distribution. Labeled inputs are then used for supervised learning\nWe now analyze the differential privacy guarantees of our PATE approach. Namely, we keep track. of the privacy budget throughout the student's training using the moments accountant (Abadi et al. 2016). When teachers reach a strong quorum, this allows us to bound privacy costs more strictly..\nDefinition 1. A randomized mechanism M with domain D and range R satisfies (E, 8)-differentia privacy iffor any two adjacent inputs d, d' E D and for any subset of outputs S C R it holds that:.\nThe privacy loss random variable C(M, aux, d, d') is defined as c(M(d); M, aux, d, d'), i.e. the. random variable defined by evaluating the privacy loss at an outcome sampled from M(d).\nA natural way to bound our approach's privacy loss is to first bound the privacy cost of each label. queried by the student, and then use the strong composition theorem (Dwork et al.|2010) to derive. the total cost of training the student. For neighboring databases d, d', each teacher gets the same. training data partition (that is, the same for the teacher with d and with d', not the same across. teachers), with the exception of one teacher whose corresponding training data partition differs Therefore, the label counts n,(x) for any example x, on d and d' differ by at most 1 in at most two. locations. In the next subsection, we show that this yields loose guarantees..\nDefinition 3. Let M : D -> R be a randomized mechanism and d, d' a pair of adjacent database. Let aux denote an auxiliary input. The moments accountant is defined as.\nThe following properties of the moments accountant are proved in[Abadi et al.(2016]\nDifferential privacy (Dwork et al.] [2006b, Dwork]2011) has established itself as a strong standard. It provides privacy guarantees for algorithms analyzing databases, which in our case is a machine learning training algorithm processing a training dataset. Differential privacy is defined using pairs of adjacent databases: in the present work, these are datasets that only differ by one training example. Recall the following variant of differential privacy introduced in|Dwork et al.(2006a).\nPr[M(d) E S] < ec Pr[M(d) e S]+ o\nPr[M(aux, d) = o c(o; M, aux, d, d') = 1og Pr[M(aux, d') = o]\naM(A) max, am(; aux, d, d' aux,d,d'\nwhere aM(; aux, d, d') logE[exp(C(M, aux, d, d'))] is the moment generating function oJ the privacy loss random variable.\nTheorem 1. 1. [Composability] Suppose that a mechanism M consists of a sequence of adap tive mechanisms M1,..., M where M: I'=1 R, D - Ri. Then, for any output sequence. 
, Ok-1 and any )1\nk QM(;d, d') =)QM,(;01,.,Oi-1, d, i=1\ns = minexp(aM() Xe)\nThe following theorem, proved in Appendix A] provides a data-dependent bound on the moments of any differentially private mechanism where some specific outcome is very likely\nTo upper bound q for our aggregation mechanism, we use the following simple lemma, also proved in AppendixA\nLemma 4. Let n be the label score vector for a database d with n;* > n; for all j. Then\n2+y(nj*-n Pr[M(d) F j*] > 4 exp(y(nj* - nj) j#j*\nThis allows us to upper bound q for a specific score vector n, and hence bound specific moments. We take the smaller of the bounds we get from Theorems|2|and|3] We compute these moments for a fev values of X (integers up to 8). Theorem 1|allows us to add these bounds over successive steps, anc derive an (e, ) guarantee from the final a. Interested readers are referred to the script that we used to empirically compute these bounds, which is released along with our code: https : / /github.\ncom/tensorflow/models/tree/master/differential privacy/multiple_teachers\nx(l; aux, d, d) < 2y2l(l +1\n) which is (2y. 0)-DP. Thus over. At each step, we use the aggregation mechanism with noise Lap(\nT steps, we get (4T2 + 2y/2T ln , )-differential privacy. This can be rather large: plugging. in values that correspond to our SVHN result, y = 0.05, T = 1000, = 1e-6 gives us e ~ 26 or alternatively plugging in values that correspond to our MNIST result, y = 0.05, T = 100, = 1e5 gives us E ~ 5.80.\nOur data-dependent privacy analysis takes advantage of the fact that when the quorum among the. teachers is very strong, the majority outcome has overwhelming likelihood, in which case the pri-. vacy cost is small whenever this outcome occurs. The moments accountant allows us analyze the composition of such mechanisms in a unified framework..\na(l; aux, d, d') < log((1 + q exp(2yl))\nSince the privacy moments are themselves now data dependent, the final e is itself data-dependent. and should not be revealed. To get around this, we bound the smooth sensitivity (Nissim et al. 2007) of the moments and add noise proportional to it to the moments themselves. This gives us a differentially private estimate of the privacy cost. Our evaluation in Section4|ignores this overhead and reports the un-noised values of e. Indeed, in our experiments on MNIST and SVHN, the scale of the noise one needs to add to the released e is smaller than 0.5 and 1.0 respectively..\nHow does the number of teachers affect the privacy cost? Recall that the student uses a noisy label computed in (1) which has a parameter y. To ensure that the noisy label is likely to be the correct one, the noise scale 1 should be small compared to the the additive gap between the two largest vales of ns. While the exact dependence of y on the privacy cost in Theorem|3lis subtle, as a general principle, a smaller leads to a smaller privacy cost. Thus, a larger gap translates to a smaller privacy cost. Since the gap itself increases with the number of teachers, having more teachers would lower the privacy cost. This is true up to a point. With n teachers, each teacher only trains on a fraction of the training data. For large enough n, each teachers will have too little training data to be accurate."}, {"section_index": "3", "section_name": "4 EVALUATION", "section_text": "In our evaluation of PATE and its generative variant PATE-G, we first train a teacher ensemble for. each dataset. 
The trade-off between the accuracy and privacy of labels predicted by the ensemble is greatly dependent on the number of teachers in the ensemble: being able to train a large set of teachers is essential to support the injection of noise yielding strong privacy guarantees while having. a limited impact on accuracy. Second, we minimize the privacy budget spent on learning the student. by training it with as few queries to the ensemble as possible..\nOur experiments use MNIST and the extended SVHN datasets. Our MNIST model stacks two convolutional layers with max-pooling and one fully connected layer with ReLUs. When trained on the entire dataset, the non-private model has a 99.18% test accuracy. For SVHN, we add two hidden layers|'|The non-private model achieves a 92.8% test accuracy, which is shy of the state-of-the-art.. However, we are primarily interested in comparing the private student's accuracy with the one of a. non-private model trained on the entire dataset, for different privacy guarantees. The source code. for reproducing the results in this section is available on GitHub2.\nAs mentioned above, compensating the noise introduced by the Laplacian mechanism presented ir Equation1requires large ensembles. We evaluate the extent to which the two datasets considered car be partitioned with a reasonable impact on the performance of individual teachers. Specifically, we show that for MNIST and SVHN, we are able to train ensembles of 250 teachers. Their aggregatec predictions are accurate despite the injection of large amounts of random noise to ensure privacy The aggregation mechanism output has an accuracy of 93.18% for MNIST and 87.79% for SVHN when evaluated on their respective test sets, while each query has a low privacy budget of e = 0.05\nPrediction accuracy: All other things being equal, the number n of teachers is limited by a trade off between the classification task's complexity and the available data. We train n teachers by partitioning the training data n-way. Larger values of n lead to larger absolute gaps, hence poten- tially allowing for a larger noise level and stronger privacy guarantees. At the same time, a larger n implies a smaller training dataset for each teacher, potentially reducing the teacher accuracy. We empirically find appropriate values of n for the MNIST and SVHN datasets by measuring the test\nThe model is adapted from https://www.tensorflow.org/tutorials/deep_cnn https://github.com/tensorflow/models/tree/master/differential_privacy/multiple_teach\nTo conclude, we note that our analysis is rather conservative in that it pessimistically assumes that, even if just one example in the training set for one teacher changes, the classifier produced by that teacher may change arbitrarily. One advantage of our approach, which enables its wide applica bility, is that our analysis does not require any assumptions about the workings of the teachers Nevertheless, we expect that stronger privacy guarantees may perhaps be established in specific settings-when assumptions can be made on the learning algorithm used to train the teachers\n100 (%) aeeeee eee aaee aeeetaete er eeeaneee 90 80 70 60 50 mNIST (n=10) X MNIST (n=100) 40 MNIST (n=250) K 30 SVHN (n=10) - SVHN (n=100) 20 SVHN (n=250) 10 0.0 0.1 0.2 0.3 0.4 0.5 Y per label query\nFigure 2: How much noise can be injected to a query? Accuracy of the noisy aggrega- tion for three MNIST and SVHN teacher en- sembles and varying y value per query. 
The noise introduced to achieve a given y scales inversely proportionally to the value of y: small values of y on the left of the axis corre- spond to large noise amplitudes and large values on the right to small noise\nPrediction confidence: As outlined in Section[3] the privacy of predictions made by an ensembl of teachers intuitively requires that a quorum of teachers generalizing well agree on identical labels. This observation is reflected by our data-dependent privacy analysis, which provides stricter privac. bounds when the quorum is strong. We study the disparity of labels assigned by teachers. In othe. words, we count the number of votes for each possible label, and measure the difference in vote between the most popular label and the second most popular label, i.e., the gap. If the gap is smal. introducing noise during aggregation might change the label assigned from the first to the seconc. Figure 3 shows the gap normalized by the total number of teachers n. As n increases, the ga. remains larger than 60% of the teachers, allowing for aggregation mechanisms to output the correc. label in the presence of noise..\nNoisy aggregation: For MNIST and SVHN, we consider three ensembles of teachers with varying. number of teachers n E {10, 100, 250}. For each of them, we perturb the vote counts with Laplacian noise of inversed scale y ranging between O.01 and 1. This choice is justified below in Section|4.2 We report in Figure|2|the accuracy of test set labels inferred by the noisy aggregation mechanism for. these values of e. Notice that the number of teachers needs to be large to compensate for the impact of noise injection on the accuracy.\nThe noisy aggregation mechanism labels the student's unlabeled training set in a privacy-preservin. fashion. To reduce the privacy budget spent on student training, we are interested in making as fev. label queries to the teachers as possible. We therefore use the semi-supervised training approach de. scribed previously. Our MNIST and SVHN students with (E, ) differential privacy of (2.04, 10-5 and (8.19, 10-6) achieve accuracies of 98.00% and 90.66%. These results improve the differentia privacy state-of-the-art for these datasets. Abadi et al.(2016) previously obtained 97% accuracy. with a (8, 10-5) bound on MNIST, starting from an inferior baseline model without privacy.Shokr. & Shmatikov[(2015) reported about 92% accuracy on SVHN with e > 2 per model parameter and model with over 300,000 parameters. Naively, this corresponds to a total e > 600,000.\n100 (%) danennne eo anarne aq pnnnnrnne den 80 60 40 20 MNIST SVHN 1 2 3 4 51025 50 100 250 Number of teachers\nFigure 3: How certain is the aggregation of. teacher predictions? Gap between the num- ber of votes assigned to the most and second. most frequent labels normalized by the num-. ber of teachers in an ensemble. Larger gaps. indicate that the ensemble is confident in as-. signing the labels, and will be robust to more. noise injection. Gaps were computed by av-. eraging over the test data..\nset accuracy of each teacher trained on one of the n partitions of the training data. We find that even for n = 250, the average test accuracy of individual teachers is 83.86% for MNIST and 83.18% for SVHN. The larger size of SVHN compensates its increased task complexity..\nFigure 4: Utility and privacy of the semi-supervised students: each row is a variant of the stu. 
dent model trained with generative adversarial networks in a semi-supervised way, with a differen number of label queries made to the teachers through the noisy aggregation mechanism. The last. column reports the accuracy of the student and the second and third column the bound e and failure probability & of the (c, o) differential privacy guarantee..\nWe apply semi-supervised learning with GANs to our problem using the following setup for eac. dataset. In the case of MNIST, the student has access to 9,000 samples, among which a subse of either 100, 500, or 1,000 samples are labeled using the noisy aggregation mechanism discussec. in Section2.1 Its performance is evaluated on the 1,000 remaining samples of the test set. Not that this may increase the variance of our test set accuracy measurements, when compared to thos. computed over the entire test data. For the MNIST dataset, we randomly shuffle the test set to ensur that the different classes are balanced when selecting the (small) subset labeled to train the studen1 For SVHN, the student has access to 10,000 training inputs, among which it labels 500 or 1,000. samples using the noisy aggregation mechanism. Its performance is evaluated on the remaining. 16,032 samples. For both datasets, the ensemble is made up of 250 teachers. We use Laplacian scal of 20 to guarantee an individual query privacy bound of e = 0.05. These parameter choices ar motivated by the results from Section|4.1\nIn Figure4] we report the values of the (c, d) differential privacy guarantees provided and the cor- responding student accuracy, as well as the number of queries made by each student. The MNIST student is able to learn a 98% accurate model, which is shy of 1% when compared to the accuracy of a model learned with the entire training set, with only 100 label queries. This results in a strict differentially private bound of e = 2.04 for a failure probability fixed at 10-5. The SVHN stu- dent achieves 90.66% accuracy, which is also comparable to the 92.80% accuracy of one teacher learned with the entire training set. The corresponding privacy bound is e = 8.19, which is higher than for the MNIST dataset, likely because of the larger number of queries made to the aggregation mechanism.\nSeveral privacy definitions are found in the literature. For instance, k-anonymity requires information about an individual to be indistinguishable from at least k - 1 other individuals in the dataset (L. Sweeney[2002). However, its lack of randomization gives rise to caveats (Dwork & Roth|[2014), and attackers can infer properties of the dataset (Aggarwal||2005). An alternative definition, differential privacy, established itself as a rigorous standard for providing privacy guarantees (Dwork et al.. 2006b). In contrast to k-anonymity, differential privacy is a property of the randomized algorithm. and not the dataset itself.\nA variety of approaches and mechanisms can guarantee differential privacy.Erlingsson et al.(2014 showed that randomized response, introduced by Warner(1965), can protect crowd-sourced data collected from software users to compute statistics about user behaviors. 
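As a brief illustration of that primitive (it is not part of PATE itself), the sketch below implements the basic binary randomized-response scheme and its unbiased frequency estimator. The truth-telling probability p is a hypothetical parameter chosen here for illustration, and the ln(p/(1-p)) privacy level stated in the comment is the standard analysis of this scheme rather than a result of this paper.

```python
import random

def randomized_response(true_bit, p, rng=random):
    # Report the true bit with probability p, the opposite bit otherwise.
    return true_bit if rng.random() < p else 1 - true_bit

def estimate_frequency(reports, p):
    # Unbias the observed frequency: E[report] = p*q + (1 - p)*(1 - q),
    # where q is the true fraction of ones, so q = (obs - (1 - p)) / (2p - 1).
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

p = 0.75  # each report is ln(p / (1 - p)) = ln(3)-differentially private
reports = [randomized_response(bit, p) for bit in [1, 0, 1, 1, 0, 0, 1, 0]]
print(estimate_frequency(reports, p))
```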
Attempts to provide dif- ferential privacy for machine learning models led to a series of efforts on shallow machine learning models, including work by Bassily et al.(2014); Chaudhuri & Monteleoni](2009); Pathak et al (2011); Song et al.(2013), and|Wainwright et al.(2012).\nDataset E 8 Queries Non-Private Baseline Student Accuracy MNIST 2.04 10-5 100 99.18% 98.00% MNIST 8.03 10- - 5 1000 99.18% 98.10% SVHN 5.04 10-6 500 92.80% 82.72% SVHN 8.19 10-6 1000 92.80% 90.66%\nWe observe that our private student outperforms the aggregation's output in terms of accuracy, with or without the injection of Laplacian noise. While this shows the power of semi-supervised learning the student may not learn as well on different kinds of data (e.g., medical data), where categories are not explicitly designed by humans to be salient in the input space. Encouragingly, as Appendix|C illustrates, the PATE approach can be successfully applied to at least some examples of such data.\nA privacy-preserving distributed SGD algorithm was introduced byShokri & Shmatikov(2015). I applies to non-convex models. However, its privacy bounds are given per-parameter, and the large number of parameters prevents the technique from providing a meaningful privacy guarantee. Abad et al.(2016) provided stricter bounds on the privacy loss induced by a noisy SGD by introducing th moments accountant. In comparison with these efforts, our work increases the accuracy of a privat MNIST model from 97% to 98% while improving the privacy bound e from 8 to 1.9. Furthermore the PATE approach is independent of the learning algorithm, unlike this previous work. Suppor for a wide range of architecture and training algorithms allows us to obtain good privacy bound on an accurate and private SVHN model. However, this comes at the cost of assuming that non private unlabeled data is available, an assumption that is not shared by (Abadi et al.]2016) Shokri & Shmatikov2015).\nPathak et al. (2010) first discussed secure multi-party aggregation of locally trained classifiers for a. global classifier hosted by a trusted third-party. Hamm et al.[(2016) proposed the use of knowledge transfer between a collection of models trained on individual devices into a single model guaran- teeing differential privacy. Their work studied linear student models with convex and continuously differentiable losses, bounded and c-Lipschitz derivatives, and bounded features. The PATE ap-. proach of this paper is not constrained to such applications, but is more generally applicable..\nPrevious work also studied semi-supervised knowledge transfer from private models. For instance. Jagannathan et al.(2013) learned privacy-preserving random forests. A key difference is that thei. approach is tailored to decision trees. PATE works well for the specific case of decision trees, as. demonstrated in Appendix [C] and is also applicable to other machine learning algorithms, includin more complex ones. Another key difference is that Jagannathan et al.[(2013) modified the classi model of a decision tree to include the Laplacian mechanism. Thus, the privacy guarantee doe. not come from the disjoint sets of training data analyzed by different decision trees in the randon forest, but rather from the modified architecture. In contrast, partitioning is essential to the privac. guarantees of the PATE approach.\nTo protect the privacy of sensitive training data, this paper has advanced a learning strategy and a corresponding privacy analysis. 
The PATE approach is based on knowledge aggregation and transfej from \"teacher'' models, trained on disjoint data, to a \"student\"' model whose attributes may be mad. public. In combination, the paper's techniques demonstrably achieve excellent utility on the MNIST and SVHN benchmark tasks, while simultaneously providing a formal, state-of-the-art bound or users' privacy loss. While our results are not without limits- e.g., they require disjoint training data for a large number of teachers (whose number is likely to increase for tasks with many outpu classes)-they are encouraging, and highlight the advantages of combining semi-supervised learn ing with precise, data-dependent privacy analysis, which will hopefully trigger further work. Ir particular, such future work may further investigate whether or not our semi-supervised approach will also reduce teacher queries for tasks other than MNIST and SVHN, for example when the discrete output categories are not as distinctly defined by the salient input space features.\nA key advantage is that this paper's techniques establish a precise guarantee of training data pri vacy in a manner that is both intuitive and rigorous. Therefore, they can be appealing, and easily. explained, to both an expert and non-expert audience. However, perhaps equally compelling are the. techniques' wide applicability. Both our learning approach and our analysis methods are \"black-. box,' i.e., independent of the learning algorithm for either teachers or students, and therefore apply,. in general, to non-convex, deep learning, and other learning methods. Also, because our techniques. do not constrain the selection or partitioning of training data, they apply when training data is natu-. rally and non-randomly partitioned--e.g., because of privacy, regulatory, or competitive concerns-. or when each teacher is trained in isolation, with a different method. We look forward to such further applications, for example on RNNs and other sequence-based models.."}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "Nicolas Papernot is supported by a Google PhD Fellowship in Security. The authors would like t. thank Ilya Mironov and Li Zhang for insightful discussions about early drafts of this document."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Dana Angluin. Queries and concept learning. Machine learning, 2(4):319-342, 1988\nRaef Bassily, Adam Smith, and Abhradeep Thakurta. Differentially private empirical risk minimiza tion: efficient algorithms and tight error bounds. arXiv preprint arXiv:1405.7085, 2014\nEric B Baum. Neural net algorithms that learn in polynomial time from examples and queries. IEE! Transactions on Neural Networks, 2(1):5-19. 1991.\nLeo Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1994\nKamalika Chaudhuri and Claire Monteleoni. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems, pp. 289-296, 2009\nKamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(Mar):1069-1109, 2011\nThomas G Dietterich. Ensemble methods in machine learning. In International workshop on multi ple classifier systems, pp. 1-15. Springer, 2000\nple classifier systems, pp. 1-15. Springer, 2000 Cynthia Dwork. A firm foundation for private data analysis. Communications of the ACM, 54(1): 86-95, 2011. Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. 
Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014. Cynthia Dwork and Guy N Rothblum.. Concentrated differential privacy. arXiv preprint arXiv:1603.01887, 2016. Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data. ourselves: privacy via distributed noise generation. In Advances in Cryptology-EUROCRYP7 2006, pp. 486-503. Springer, 2006a. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity. in private data analysis. In Theory of Cryptography, pp. 265-284. Springer, 2006b.. Cynthia Dwork, Guy N Rothblum, and Salil Vadhan. Boosting and differential privacy. In Pro-. ceedings of the 51st IEEE Symposium on Foundations of Computer Science, pp. 51-60. IEEE, 2010. Ulfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable. privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on. Computer and Communications Security, pp. 1054-1067. ACM, 2014.\nJane Bromley, James W Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a \"Siamese\"' time delay neural network International Journal. l of Pattern Recognition and Artificial Intelligence. 7(04):669-688. 1993\nGeoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXi preprint arXiv:1503.02531, 2015.\nIgor Kononenko. Machine learning for medical diagnosis: history, state of the art and perspective Artificial Intelligence in medicine, 23(1):89-109, 2001.\nIlya Mironov. Renyi differential privacy. manuscript, 2016\nJason Poulos and Rafael Valle. Missing data imputation for supervised learning. arXiv preprini arXiv:1610.09075. 2016\nLatanya Sweeney. Weaving technology and policy together to maintain confidentiality. The Journa of Law, Medicine & Ethics, 25(2-3):98-110, 1997.\nStanley L Warner. Randomized response: A survey technique for eliminating evasive answer bias Journal of the American Statistical Association, 60(309):63-69, 1965.\nihun Hamm, Paul Cao, and Mikhail Belkin. Learning privately from multiparty data. arXiv preprin arXiv:1602.03552. 2016\nTim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.\nReza Shokri and Vitaly Shmatikov. Privacy-preserving deep learning. In Proceedings of the 22nd\nx(l; aux, d, d) < log((1 + q exp(2~l))\nfz)=(1- V\nWe next argue that this function is non-decreasing in (0, e4? -1) under the conditions of the lemma Towards this goal, define\nW 2yl gz,w)=(1-\nmma4 Let n be the label score vector for a database d with nj* n; for all j. Ther.\nProof. The probability that ny* + Lap() < n; + Lap() is equal to the probability that the sum. of two independent Lap(1) random variables exceeds y(n;* - n;). The sum of two independent. Lap(1) variables has the same distribution as the difference of two Gamma(2, 1) random variables Recalling that the Gamma(2, 1) distribution has pdf xe-x, we can compute the pdf of the difference. via convolution as\n1 1 + (y+|x|)e-y-|x| (y2 + y|x|)e-2y dy = 4e|x| -\nThe probability mass in the tail can then be computed by integration as 2+y(nj*-nj) Taking S 4exp(y(n;*-nj) union bound over the various candidate j's gives the claimed bound.\nPr[M(d) = o] exp(a(l;aux,d,d')) = >`Pr[M(d) = o Pr[M(d') = o] PrM(a Pr[M(d) =o]( Pr[M(d) Pr[M(d) Pr[M(d') = o*] \\Pr[M(d') = Pr[M(d) = o](e2~)4\nand observe that f(z) = g(z, z). 
We can easily verify by differentiation that g(z, w) is increasing individually in z and in w in the range of interest. This implies that f(q') < f(q) completing the proof.\n2+y(nj*-r Pr[M(d) j*]< 4exp(y(nj* - nj) j#j*\nIn this appendix, we describe approaches that were considered to reduce the number of queries made to the teacher ensemble by the student during its training. As pointed out in Sections3]and4] this effort is motivated by the direct impact of querying on the total privacy cost associated with student training. The first approach is based on distillation, a technique used for knowledge transfer and model compression (Hinton et al.2015). The three other techniques considered were proposed in the context of active learning, with the intent of identifying training examples most useful for learning. In Sections2land4] we described semi-supervised learning, which yielded the best results.. The student models in this appendix differ from those in Sections2land4] which were trained using. GANs. In contrast, all students in this appendix were learned in a fully supervised fashion from. a subset of public, labeled examples. Thus, the learning goal was to identify the subset of labels. yielding the best learning performance.."}, {"section_index": "6", "section_name": "B.1 TRAINING STUDENTS USING DISTILLATION", "section_text": "Distillation is a knowledge transfer technique introduced as a means of compressing large model. into smaller ones, while retaining their accuracy (Bucilua et al.||2006f|Hinton et al.||2015). This is fo. instance useful to train models in data centers before deploying compressed variants in phones. Th. transfer is accomplished by training the smaller model on data that is labeled with probability vector produced by the first model, which encode the knowledge extracted from training data. Distillatioi . is parameterized by a temperature parameter T, which controls the smoothness of probabilitie. output by the larger model: when produced at small temperatures, the vectors are discrete, wherea. at high temperature, all classes are assigned non-negligible values. Distillation is a natural candidat. to compress the knowledge acquired by the ensemble of teachers, acting as the large model, into . student, which is much smaller with n times less trainable parameters compared to the n teachers..\nTo evaluate the applicability of distillation, we consider the ensemble of n = 50 teachers for SVHN In this experiment, we do not add noise to the vote counts when aggregating the teacher predictions We compare the accuracy of three student models: the first is a baseline trained with labels obtained by plurality, the second and third are trained with distillation at T E {1, 5}. We use the first 10,000 samples from the test set as unlabeled data. Figure|5 reports the accuracy of the student model on the last 16,032 samples from the test set, which were not accessible to the model during training. It is plotted with respect to the number of samples used to train the student (and hence the number oi queries made to the teacher ensemble). Although applying distillation yields classifiers that perform more accurately, the increase in accuracy is too limited to justify the increased privacy cost of re vealing the entire probability vector output by the ensemble instead of simply the class assigned the largest number of votes. 
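For reference, a minimal sketch of how such distillation targets could be formed from the ensemble, assuming each teacher exposes a per-class probability vector for an input. Following the aggregation described in the Figure 5 caption, the teachers' probability vectors are summed and normalized; applying the temperature to the log of the aggregated probabilities is a common convention and is an assumption here, not a detail taken from the paper, as are the helper names.

```python
import numpy as np

def distillation_targets(teachers, x, temperature):
    # Sum the teachers' probability vectors and normalize (Figure 5 caption).
    # Each teacher is assumed to return a 1-D probability vector for input x.
    probs = np.sum([teacher.predict_proba(x) for teacher in teachers], axis=0)
    probs = probs / probs.sum()
    # Soften with a temperature T; at T = 1 the targets are the normalized sum,
    # at larger T all classes receive non-negligible mass.
    logits = np.log(probs + 1e-12)
    scaled = np.exp(logits / temperature)
    return scaled / scaled.sum()

# The student is then trained with a cross-entropy loss against these soft
# targets instead of the one-hot plurality label.
```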
Thus, we turn to an investigation of active learning."}, {"section_index": "7", "section_name": "B.2 ACTIVE LEARNING OF THE STUDENT", "section_text": "Active learning is a class of techniques that aims to identify and prioritize points in the student' training set that have a high potential to contribute to learning (Angluin]1988] Baum]1991). If the label of an input in the student's training set can be predicted confidently from what we have learne so far by querying the teachers, it is intuitive that querying it is not worth the privacy budget spent In our experiments, we made several attempts before converging to a simpler final formulation\nSiamese networks: Our first attempt was to train a pair of siamese networks, introduced byBrom. ey et al.[(1993) in the context of one-shot learning and later improved byKoch[(2015). The siamese. networks take two images as input and return 1 if the images are equal and 0 otherwise. They are. two identical networks trained with shared parameters to force them to produce similar represen tations of the inputs, which are then compared using a distance metric to determine if the images. are identical or not. Once the siamese models are trained, we feed them a pair of images where the first is unlabeled and the second labeled. If the unlabeled image is confidently matched with a. known labeled image, we can infer the class of the unknown image from the labeled image. In our. experiments, the siamese networks were able to say whether two images are identical or not, but did. not generalize well: two images of the same class did not receive sufficiently confident matches. We. also tried a variant of this approach where we trained the siamese networks to output 1 when the twc.\n90 85 80 7 5 X 7 0 65 xDistilled Vectors x x Labels only XX Distilled Vectors at T=5. 60 0 2000 4000 6000 8000 1000 Student share of samples in SVHN test set (out of 26032)\nFigure 5: Influence of distillation on the accuracy of the SVHN student trained with respect to the. initial number of training samples available to the student. The student is learning from n = 50. teachers, whose predictions are aggregated without noise: in case where only the label is returned,. we use plurality, and in case a probability vector is returned, we sum the probability vectors output by each teacher before normalizing the resulting vector..\nimages are of the same class and 0 otherwise, but the learning task proved too complicated to be al effective means for reducing the number of queries made to teachers..\nCollection of binary experts: Our second attempt was to train a collection of binary experts, one. per class. An expert for class j is trained to output 1 if the sample is in class j and O otherwise. We first trained the binary experts by making an initial batch of queries to the teachers. Using. the experts, we then selected available unlabeled student training points that had a candidate labe. score below 0.9 and at least 4 other experts assigning a score above 0.1. This gave us about 500. unconfident points for 1700 initial label queries. After labeling these unconfident points using the. ensemble of teachers, we trained the student. Using binary experts improved the student's accuracy. when compared to the student trained on arbitrary data with the same number of teacher queries The absolute increases in accuracy were however too limited-between 1.5% and 2.5%..\nIdentifying unconfident points using the student: This last attempt was the simplest yet the mos effective. 
Instead of using binary experts to identify student training points that should be labeled b the teachers, we used the student itself. We asked the student to make predictions on each unlabele training point available. We then sorted these samples by increasing values of the maximum proba bility assigned to a class for each sample. We queried the teachers to label these unconfident input first and trained the student again on this larger labeled training set. This improved the accuracy o the student when compared to the student trained on arbitrary data. For the same number of teache queries, the absolute increases in accuracy of the student trained on unconfident inputs first whe compared to the student trained on arbitrary data were in the order of 4% - 10%.\nAPPENDIX: ADDITIONAL EXPERIMENTS ON THE UCI ADULT AND DIABETES DATASETS\nUCI Adult dataset: The UCI Adult dataset is made up of census data, and the task is to predic when individuals make over $50k per year. Each input consists of 13 features (which include the age workplace, education, occupation---see the UCI website for a full list). The only pre-processing we apply to these features is to map all categorical features to numerical values by assigning an integer value to each possible category. The model is a random forest provided by the scikit-learr Python package. When training both our teachers and student, we keep all the default parameter values, except for the number of estimators, which we set to 100. The data is split between a training set of 32,562 examples, and a test set of 16,282 inputs.\nUCI Diabetes dataset: The UCI Diabetes dataset includes de-identified records of diabetic patients. and corresponding hospital outcomes, which we use to predict whether diabetic patients were read mitted less than 30 days after their hospital release. To the best of our knowledge, no particulai. classification task is considered to be a standard benchmark for this dataset. Even so, it is valuable. to consider whether our approach is applicable to the likely classification tasks, such as readmission. since this dataset is collected in a medical environment-a setting where privacy concerns arise. frequently. We select a subset of 18 input features from the 55 available in the dataset (to avoic. features with missing values) and form a dataset balanced between the two output classes (see the. UCI website for more details4). In class 0, we include all patients that were readmitted in a 30-day. window, while class 1 includes all patients that were readmitted after 30 days or never readmitted a. all. Our balanced dataset contains 34,104 training samples and 12,702 evaluation samples. We use. a random forest model identical to the one described above in the presentation of the Adult dataset\nExperimental results: We apply our approach described in Section2 For both datasets, we trair ensembles of n = 250 random forests on partitions of the training data. We then use the noisy aggregation mechanism, where vote counts are perturbed with Laplacian noise of scale 0.05 tc privately label the first 500 test set inputs. We train the student random forest on these 500 test se inputs and evaluate it on the last 11,282 test set inputs for the Adult dataset, and 6,352 test set input for the Diabetes dataset. 
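A minimal scikit-learn sketch of this random-forest variant is given below. It follows the setup just described (250 teachers, 100 estimators each, and Laplacian noise with parameter γ = 0.05, i.e., noise scale 1/γ on the vote counts, consistent with Section 4.1), but the data arrays are placeholders for the UCI preprocessing described above rather than the exact experimental code, and labels are assumed to be integer-encoded.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_teachers, gamma = 250, 0.05  # Laplacian noise of scale 1/gamma on vote counts

# X_train, y_train: the sensitive training set; X_student: public/unlabeled inputs.
# These arrays are placeholders standing in for the UCI Adult or Diabetes data.
teachers = []
for X_part, y_part in zip(np.array_split(X_train, n_teachers),
                          np.array_split(y_train, n_teachers)):
    teachers.append(RandomForestClassifier(n_estimators=100).fit(X_part, y_part))

def noisy_label(x, num_classes=2):
    votes = np.bincount([t.predict(x.reshape(1, -1))[0] for t in teachers],
                        minlength=num_classes)
    return int(np.argmax(votes + rng.laplace(scale=1.0 / gamma, size=num_classes)))

# Privately label the first 500 student inputs, then train the student on them.
student_labels = np.array([noisy_label(x) for x in X_student[:500]])
student = RandomForestClassifier(n_estimators=100).fit(X_student[:500], student_labels)
```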
These numbers deliberately leave out some of the test set, which allowe us to observe how the student performance-privacy trade-off was impacted by varying the numbe of private labels, as well as the Laplacian scale used when computing these labels.\nFor the Adult dataset, we find that our student model achieves an 83% accuracy for an (e, ) : (2.66, 10-5) differential privacy bound. Our non-private model on the dataset achieves 85% accu racy, which is comparable to the state-of-the-art accuracy of 86% on this dataset (Poulos & Valle 2016). For the Diabetes dataset, we find that our privacy-preserving student model achieves a. 93.94% accuracy for a (e, ) = (1.44, 10-5) differential privacy bound. Our non-private mode. on the dataset achieves 93.81% accuracy.\nIn order to further demonstrate the general applicability of our approach, we performed experiments on two additional datasets. While our experiments on MNIST and SVHN in Section|used con volutional neural networks and GANs, here we use random forests to train our teacher and student models for both of the datasets. Our new results on these datasets show that, despite the differing data types and architectures. we are able to provide meaningful privacy guarantees."}]
SyOvg6jxx
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) studies an agent acting in an initially unknown environment, learning through trial and error to maximize rewards. It is impossible for the agent to act near-optimally unti it has sufficiently explored the environment and identified all of the opportunities for high reward, in all scenarios. A core challenge in RL is how to balance exploration--actively seeking out novel states and actions that might yield high rewards and lead to long-term gains; and exploitation-maximizing short-term rewards using the agent's current knowledge. While there are exploration techniques for finite MDPs that enjoy theoretical guarantees, there are no fully satisfying techniques for high dimensional state spaces; therefore, developing more general and robust exploration techniques is an active area of research.\n*These authors contributed equally\nCount-based exploration algorithms are known to perform near-optimally when. used in conjunction with tabular reinforcement learning (RL) methods for solving. small discrete Markov decision processes (MDPs). It is generally thought that. count-based methods cannot be applied in high-dimensional state spaces, since. most states will only occur once. Recent deep RL exploration strategies are able to. deal with high-dimensional continuous state spaces through complex heuristics. often relying on optimism in the face of uncertainty or intrinsic motivation. In. this work, we describe a surprising finding: a simple generalization of the classic. count-based approach can reach near state-of-the-art performance on various high. dimensional and/or continuous deep RL benchmarks. States are mapped to hash. codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based. exploration theory. We find that simple hash functions can achieve surprisingly good. results on many challenging tasks. Furthermore, we show that a domain-dependent. learned hash code may further improve these results. Detailed analysis reveals. important aspects of a good hash function: 1) having appropriate granularity and. 2) encoding information relevant to solving the MDP. This exploration strategy. achieves near state-of-the-art performance on both continuous control tasks and. Atari 2600 games, hence providing a simple yet powerful baseline for solving. MDPs that require considerable exploration.\nMost of the recent state-of-the-art RL results have been obtained using simple exploration strategies such as uniform sampling (Mnih et al.f2015) and i.i.d./correlated Gaussian noise (Schulman et al. 2015} Lillicrap et al. 2015). Although these heuristics are sufficient in tasks with well-shaped. rewards, the sample complexity can grow exponentially (with state space size) in tasks with sparse rewards (Osband et al.] 2016b). Recently developed exploration strategies for deep RL have led. to significantly improved performance on environments with sparse rewards. Bootstrapped DQN\n(Osband et al.]2016a) led to faster learning in a range of Atari 2600 games by training an ensemble of Q-functions. Intrinsic motivation methods using pseudo-counts achieve state-of-the-art performance. on Montezuma's Revenge, an extremely challenging Atari 2600 game (Bellemare et al.]2016). Variational Information Maximizing Exploration (VIME, Houthooft et al.(2016)) encourages the. 
agent to explore by acquiring information about environment dynamics, and performs well on various robotic locomotion problems with sparse rewards. However, we have not seen a very simple and fast method that can work across different domains.

This paper presents a simple approach for exploration, which extends classic counting-based methods to high-dimensional, continuous state spaces. We discretize the state space with a hash function and apply a bonus based on the state-visitation count. The hash function can be chosen to appropriately balance generalization across states and distinguishing between states. We select problems from rllab (Duan et al., 2016) and Atari 2600 (Bellemare et al., 2012) featuring sparse rewards, and demonstrate near state-of-the-art performance on several games known to be hard for naive exploration strategies. The main strength of the presented approach is that it is fast, flexible and complementary to most existing RL algorithms.

In summary, this paper proposes a generalization of classic count-based exploration to high-dimensional spaces through hashing (Section 2); it demonstrates its effectiveness on challenging deep RL benchmark problems and analyzes key components of well-designed hash functions (Section 3)."}, {"section_index": "1", "section_name": "2.1 NOTATION", "section_text": "This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S, A, P, r, ρ0, γ, T), in which S is the state space, A the action space, P a transition probability distribution, r : S × A → R≥0 a reward function, ρ0 an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. The goal of RL is to maximize the total expected discounted reward E_{π,P}[Σ_{t=0}^{T} γ^t r(s_t, a_t)] over a policy π, which outputs a distribution over actions given a state.

Some of the classic, theoretically-justified exploration methods are based on counting state-action visitations, and turning this count into a bonus reward. In the bandit setting, the well-known UCB algorithm of Lai & Robbins (1985) chooses the action a_t at time t that maximizes r̂(a_t) + √(2 log t / n(a_t)), where r̂(a_t) is the estimated reward and n(a_t) is the number of times action a_t was previously chosen. In the MDP setting, some of the algorithms have a similar structure; for example, Model Based Interval Estimation-Exploration Bonus (MBIE-EB) of Strehl & Littman (2008) counts state-action pairs with a table n(s, a) and adds a bonus reward of the form β/√(n(s, a)) to encourage exploring less visited pairs. Kolter & Ng (2009) show that the inverse-square-root dependence is optimal. MBIE and related algorithms assume that the augmented MDP is solved analytically at each timestep, which is only practical for small finite state spaces.

Our approach discretizes the state space with a hash function φ : S → Z. An exploration bonus is added to the reward function, defined as r+(s) = β/√(n(φ(s))), where β ∈ R≥0 is the bonus coefficient. Initially the counts n(·) are set to zero for the whole range of φ. For every state s_t encountered at time step t, n(φ(s_t)) is increased by one. The agent is trained with rewards (r + r+), while performance is evaluated as the sum of rewards without bonuses.

Note that our approach is a departure from count-based exploration methods such as MBIE-EB since we use a state-space count n(s) rather than a state-action count n(s, a).
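As a minimal illustration of the counting bonus just defined (a sketch with our own naming, not the authors' implementation), the following keeps hash-table counts n(φ(s)) and returns r+(s) = β/√(n(φ(s))) for any hash function φ:

```python
import math
from collections import defaultdict

class CountExplorationBonus:
    """Hash-table state counts and the bonus r+(s) = beta / sqrt(n(phi(s))).
    `phi` can be any mapping from states to hashable codes; SimHash is one
    concrete choice introduced below."""
    def __init__(self, phi, beta=0.01):
        self.phi = phi
        self.beta = beta
        self.counts = defaultdict(int)  # n(.) starts at zero for the whole range of phi

    def bonus(self, state):
        code = self.phi(state)
        self.counts[code] += 1          # n(phi(s_t)) <- n(phi(s_t)) + 1
        return self.beta / math.sqrt(self.counts[code])

# toy usage: repeated states receive a decaying bonus, novel states a larger one
cb = CountExplorationBonus(phi=lambda s: s, beta=0.01)
print([round(cb.bonus(s), 4) for s in [0, 0, 0, 1]])
```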
State-action counts n(s, a) are investigated in Appendix A.6, but no significant performance gains over state counting could be witnessed.

Algorithm 1: Count-based exploration through static hashing
1. Define state preprocessor g : S → R^K
2. (In case of SimHash) Initialize A ∈ R^{k×K} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3. Initialize a hash table with values n(·) ≡ 0
4. for each iteration j do
5.     Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6.     Compute hash codes through any LSH method, e.g., for SimHash, φ(s_m) = sgn(A g(s_m))
7.     Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
8.     Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm

Algorithm 1 summarizes our method. The main idea is to use locality-sensitive hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH is a popular class of hash functions for querying nearest neighbors based on certain similarity metrics (Andoni & Indyk, 2006). A computationally efficient type of LSH is SimHash (Charikar, 2002), which measures similarity by angular distance. SimHash retrieves a binary code of state s ∈ S as

φ(s) = sgn(A g(s)) ∈ {−1, 1}^k,    (2)

where g : S → R^K is an optional preprocessing function and A is a k × K matrix with i.i.d. entries drawn from a standard Gaussian distribution N(0, 1). The value for k controls the granularity: higher values lead to fewer collisions and are thus more likely to distinguish states.

Clearly the performance of this method will strongly depend on the choice of hash function φ. One important choice we can make regards the granularity of the discretization: we would like "distant" states to be counted separately while "similar" states are merged. If desired, we can incorporate prior knowledge into the choice of φ, if there is a set of salient state features which are known to be relevant.

When the MDP states have a complex structure, as is the case with image observations, measuring their similarity directly in pixel space fails to provide the semantic similarity measure one would desire. Previous work in computer vision (Lowe, 1999; Dalal & Triggs, 2005; Tola et al., 2010) introduces manually designed feature representations of images that are suitable for semantic tasks including detection and classification. More recent methods learn complex features directly from data by training convolutional neural networks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015). Considering these results, it may be difficult for SimHash to cluster states appropriately using only raw pixels.
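A one-line realization of Eq. (2) can be sketched as follows; the identity preprocessor g and the dimensions are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def simhash(state, A, g=lambda s: s):
    """phi(s) = sgn(A g(s)) in {-1, 1}^k, with g an optional preprocessor
    (here the identity on a flattened observation) and A a k x K Gaussian
    projection matrix. Returned as a tuple so it can key a hash table."""
    return tuple(np.sign(A @ g(state)).astype(np.int8))

rng = np.random.default_rng(0)
k, K = 32, 52 * 52                  # code length and flattened frame size (illustrative)
A = rng.normal(size=(k, K))
frame = rng.random(K)               # stand-in for a preprocessed observation g(s)
print(len(simhash(frame, A)), simhash(frame, A)[:8])
```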
Figure 1: The autoencoder (AE) architecture; the solid block represents the dense sigmoidal binary code layer, after which noise U(−a, a) is injected.

Therefore, we propose to use an autoencoder (AE) consisting of convolutional, dense, and transposed convolutional layers to learn meaningful hash codes in one of its hidden layers. This AE takes as input states s and contains one special dense layer comprised of K saturating activation functions, more specifically sigmoid functions. By rounding the sigmoid output b(s) of this layer to the closest binary number ⌊b(s)⌉, any state s can be binarized.

Since gradients cannot be back-propagated through a rounding function, an alternative method must be used to ensure that distinct states are mapped to distinct binary codes. Therefore, uniform noise U(−a, a) is added to the sigmoid output. By choosing uniform noise with a sufficiently high variance, the AE is only capable of reconstructing distinct inputs s if its hidden dense layer outputs values b(s) that are sufficiently far apart from each other (Gregor et al., 2016). Feeding a state s to the AE input, extracting b(s) and rounding it to ⌊b(s)⌉ yields a learned binary code. As such, the loss function L(·) over a set of collected states {s_n}_{n=1}^{N} is defined as

L({s_n}_{n=1}^{N}) = −(1/N) Σ_{n=1}^{N} [ log p(s_n) − (λ/K) Σ_{i=1}^{K} min{ (1 − b_i(s_n))², b_i(s_n)² } ].    (3)

This objective function consists of a cross-entropy term and a term that pressures the binary code layer to take on binary values, scaled by λ ∈ R≥0. The reasoning behind this is that uniform noise U(−a, a) alone is insufficient, in case the AE does not use a particular sigmoid unit. This term ensures that an unused binary code output is assigned an arbitrary binary value. When omitting this term, the code is more prone to oscillations, causing unwanted bit flips, and destabilizing the counting process.

In order to make the AE train sufficiently fast (which is required since it is updated during the agent's training), we make use of a pixel-wise softmax output layer (van den Oord et al., 2016) that shares weights between all pixels. The different softmax outputs merge together pixel intensities into discrete bins. The architectural details are described in Appendix A.1 and are depicted in Figure 1. Because the code dimension often needs to be large in order to correctly reconstruct the input, we apply a downsampling procedure to the resulting binary code ⌊b(s)⌉, which can be done through random projection to a lower-dimensional space via SimHash as in Eq. (2).

Algorithm 2: Count-based exploration using learned hash codes
1. Define state preprocessor g : S → B^K as the binary code resulting from the autoencoder (AE)
2. Initialize A ∈ R^{k×K} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3. Initialize a hash table with values n(·) ≡ 0
4. for each iteration j do
5.     Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6.     Add the states {s_m} to the replay pool
7.     if j mod j_update = 0 then
8.         Update the AE loss function in Eq. (3) using samples drawn from the replay pool
9.     Compute g(s_m) = ⌊b(s_m)⌉, the K-dim rounded hash code for s_m learned by the AE
10.    Project g(s_m) to a lower dimension k via SimHash as φ(s_m) = sgn(A g(s_m))
11.    Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
12.    Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm

On the one hand, it is important that the mapping from state to code remains relatively consistent over time, which is nontrivial as the AE is constantly updated according to the latest data (Algorithm 2, line 8). An obvious solution would be to significantly downsample the binary code to a very low dimension, or to slow down the training process. But on the other hand, the code has to remain relatively unique for states that are both distinct and close together on the image manifold. This is tackled both by the second term in Eq.
(3) and by the saturating behavior of the sigmoid units As such, states that are already well represented in the AE hidden layers tend to saturate the sigmoic units, causing the resulting loss gradients to be close to zero and making the code less prone to change\nTo answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that feature sparse rewards, and compare it to other state-of-the-art algorithms. Question 2 is answered by trying out different image preprocessors on Atari 2600 games. Finally, we investigate question 3 in Section3.3|and[3.4] Trust Region Policy Optimization (TRPO, Schulman et al.(2015)) is chosen as the RL algorithm for all experiments, because it can handle both discrete and continuous action spaces, it can conveniently ensure stable improvement in the policy performance, and is relatively insensitive to hyperparameter changes. The hyperparameters settings are reported in Appendix|A.1"}, {"section_index": "2", "section_name": "3.1 CONTINUOUS CONTROL", "section_text": "The rllab benchmark (Duan et al.]2016) consists of various control tasks to test deep RL algorithms We selected several variants of the basic and locomotion tasks that use sparse rewards, as shown in Figure[2] and adopt the experimental setup as defined in (Houthooft et al.]2016)--a description can be found in Appendix[A.2] These tasks are all highly difficult to solve with naive exploration strategies, such as adding Gaussian noise to the actions.\nFigure 2: Illustrations of the rllab tasks used in the continuous control experiments, namely MountainCar, CartPoleSwingup, SimmerGather, and HalfCheetah; taken from (Duan et al.]. 2016)\n350 baseline 1.0 VIME SimHash 0.8 25 0.6 0.4 0.2 50 7 200 400 600 800 200 600 800 1000 200 400 600 800 100 (a) MountainCar (b) CartPoleSwingup (c) SwimmerGather (d) HalfCheetah\nFigure 3: Mean average return of different algorithms on rllab tasks with sparse rewards; the solid line represents the mean average return, while the shaded area represents one standard deviation, over 5 seeds for the baseline and SimHash.\nFigure3 shows the results of TRPO (baseline), TRPO-SimHash, and VIME (Houthooft et al.]2016 on the classic tasks MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and th hierarchical task SwimmerGather. Using count-based exploration with hashing is capable of reachin the goal in all environments (which corresponds to a nonzero return), while baseline TRPO witl Gaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward oj HalfCheetah, it does not perform as well as VIME. In contrast, the performance of SimHash i comparable with VIME on MountainCar, while it outperforms VIME on SwimmerGather."}, {"section_index": "3", "section_name": "3.2 ARCADE LEARNING ENVIRONMENT", "section_text": "The Arcade Learning Environment (ALE,Bellemare et al.(2012), which consists of Atari 2600. video games. is an important benchmark for deep RL due to its high-dimensional state space and wide\n1. Can count-based exploration through hashing improve performance significantly across different domains? How does the proposed method compare to the current state of the art in exploration for deep RL? 2. What is the impact of learned or static state preprocessing on the overall performance when image observations are used? 3. What factors contribute to good performance, e.g., what is the appropriate level of granularity of the hash function?\nvariety of games. 
In order to demonstrate the effectiveness of the proposed exploration strategy, six games are selected featuring long horizons while requiring significant exploration: Freeway, Frostbite, Gravitar, Montezuma's Revenge, Solaris, and Venture. The agent is trained for 500 iterations in all experiments, with each iteration consisting of 0.1 M steps (the TRPO batch size, corresponding to 0.4 M frames). Policies and value functions are neural networks with identical architectures to (Mnih et al., 2016). Although the policy and baseline take into account the previous four frames, the counting algorithm only looks at the latest frame.

BASS  To compare with the autoencoder-based learned hash code, we propose using Basic Abstraction of the ScreenShots (BASS, also called Basic; see Bellemare et al. (2012)) as a static preprocessing function g. BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS builds on the following observations specific to Atari: 1) the game screen has a low resolution, 2) most objects are large and monochrome, and 3) winning depends mostly on knowing object locations and motions. We designed an adapted version of BASS that divides the RGB screen into square cells, computes the average intensity of each color channel inside a cell, and assigns the resulting values to bins that uniformly partition the intensity range [0, 255]. Mathematically, let C be the cell size (width and height), B the number of bins, (i, j) the cell location, (x, y) the pixel location, and z the channel. Then

feature(i, j, z) = ⌊ (B / (255 C²)) Σ_{(x,y) ∈ cell(i,j)} I(x, y, z) ⌋.

Afterwards, the resulting integer-valued feature tensor is converted to an integer hash code (φ(s_t) in Line 6 of Algorithm 1). A BASS feature can be regarded as a miniature that efficiently encodes object locations, but remains invariant to negligible object motions. It is easy to implement and introduces little computation overhead. However, it is designed for generic Atari game images and may not capture the structure of each specific game very well.

Table 1: Atari 2600: average total reward after training for 50 M time steps. Boldface numbers indicate best results. Italic numbers are the best among our methods.

                      Freeway  Frostbite¹  Gravitar  Montezuma  Solaris  Venture
TRPO (baseline)          16.5        2869       486          0     2758      121
TRPO-pixel-SimHash       31.6        4683       468          0     2897      263
TRPO-BASS-SimHash        28.4        3150       604        238     1201      616
TRPO-AE-SimHash          33.5        5214       482         75     4467      445
Double-DQN               33.3        1683       412          0     3068     98.0
Dueling network           0.0        4672       588          0     2251      497
Gorila                   11.7         605      1054          4      N/A     1245
DQN Pop-Art              33.4        3469       483          0     4544     1172
A3C+                     27.3         507       246        142     2175        0
pseudo-count²            29.2        1450       N/A       3439      N/A      369

¹ While Vezhnevets et al. (2016) reported best score 8108, their evaluation was based on top 5 agents trained with 500 M time steps, hence not comparable.
² Results reported only for 25 M time steps (100 M frames).

We compare our results to double DQN (van Hasselt et al., 2016b), dueling network (Wang et al., 2016), A3C+ (Bellemare et al., 2016), double DQN with pseudo-counts (Bellemare et al., 2016), Gorila (Nair et al., 2015), and DQN Pop-Art (van Hasselt et al., 2016a) on the "null op" metric². We show training curves in Figure 4 and summarize all results in Table 1. Surprisingly, TRPO-pixel-SimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on
¹ The original BASS exploits the fact that at most 128 colors can appear on the screen.
Our adapted version does not make this assumption. ² The agent takes no action for a random number of frames at the beginning of each episode.

Montezuma's Revenge and Venture, where it captures object locations better than other methods³. TRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris⁴.

Figure 4: Atari 2600 games: the solid line is the mean average undiscounted return per iteration, while the shaded areas represent the one standard deviation, over 5 seeds for the baseline, TRPO-pixel-SimHash, and TRPO-BASS-SimHash, and over 3 seeds for TRPO-AE-SimHash. Panels: (a) Freeway, (b) Frostbite, (c) Gravitar, (d) Montezuma's Revenge, (e) Solaris, (f) Venture.

As observed in Table 1, preprocessing images with BASS or using a learned hash code through the AE leads to much better performance on Gravitar, Montezuma's Revenge and Venture. Therefore, a static or adaptive preprocessing step can be important for a good hash function.

In conclusion, our count-based exploration method is able to achieve remarkable performance gains even with simple hash functions like SimHash on the raw pixel space. If coupled with domain-dependent state preprocessing techniques, it can sometimes achieve far better results."}, {"section_index": "4", "section_name": "3.3 GRANULARITY", "section_text": "While our proposed method is able to achieve remarkable results without requiring much tuning, the granularity of the hash function should be chosen wisely. Granularity plays a critical role in count-based exploration, where the hash function should cluster states without under-generalizing or over-generalizing. Table 2 summarizes granularity parameters for our hash functions. In Table 3 we summarize the performance of TRPO-pixel-SimHash under different granularities. We choose Frostbite and Venture, on which TRPO-pixel-SimHash outperforms the baseline, and choose the reward bonus coefficient as a function of k (β = 0.01 at the default k = 256) to keep average bonus rewards at approximately the same scale. k = 16 only corresponds to 65536 distinct hash codes, which is insufficient to distinguish between semantically distinct states and hence leads to worse performance. We observed that k = 512 tends to capture trivial image details in Frostbite, leading the agent to believe that every state is new and equally worth exploring. Similar results are observed while tuning the granularity parameters for TRPO-BASS-SimHash and TRPO-AE-SimHash.

The best granularity depends on both the hash function and the MDP. While adjusting the granularity parameters, we observed that it is important to lower the bonus coefficient as granularity is increased. This is because a higher granularity is likely to cause lower state counts, leading to higher bonus rewards that may overwhelm the true rewards.

Table 2: Granularity parameters of various hash functions

Table 3: Average score at 50 M time steps achieved by TRPO-pixel-SimHash
k           16    64    128   256   512
Frostbite   3326  4029  3932  4683  1117
Venture     0     218   142   263   306
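To make the granularity discussion concrete, the following illustrative sketch (ours, not taken from the paper's code; the cluster construction and dimensions are arbitrary) counts distinct SimHash codes for clustered synthetic states at several code lengths k. Small k merges nearby states into shared codes, while large k assigns almost every observation its own code.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.normal(size=(50, 1024))          # 50 "semantically distinct" states
states = np.repeat(centers, 100, axis=0)       # 100 noisy observations of each
states += 0.05 * rng.normal(size=states.shape)

for k in (16, 64, 256, 512):
    A = rng.normal(size=(k, states.shape[1]))
    codes = {tuple(np.sign(A @ s).astype(np.int8)) for s in states}
    # Larger k -> fewer collisions -> more distinct codes for the same states.
    print(f"k = {k:3d}: {len(codes)} distinct SimHash codes for {len(states)} states")
```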
Montezuma's Revenge is widely known for its extremely sparse rewards and difficult exploration (Bellemare et al., 2016). While our method does not outperform Bellemare et al. (2016) on this game, we investigate the reasons behind this through various experiments. The experiment process below again demonstrates the importance of a hash function having the correct granularity and encoding relevant information for solving the MDP.

Our first attempt is to use game RAM states instead of image observations as inputs to the policy (details in Appendix A.1), which leads to a game score of 2500 with TRPO-BASS-SimHash. Our second attempt is to manually design a hash function that incorporates domain knowledge, called SmartHash, which uses an integer-valued vector consisting of the agent's (x, y) location, room number and other useful RAM information as the hash code (details in Appendix A.3). The best SmartHash agent is able to obtain a score of 3500. Still the performance is not optimal. We observe that a slight change in the agent's coordinates does not always result in a semantically distinct state, and thus the hash code may remain unchanged. Therefore we choose a grid size s and replace the x coordinate by ⌊(x − x_min)/s⌋ (similarly for y). The bonus coefficient is chosen as β = 0.01s to maintain the scale relative to the true reward⁵ (see Table 4). Finally, the best agent is able to obtain 6600 total rewards after training for 1000 iterations (1000 M time steps), with a grid size s = 10.

Table 4: Average score at 50 M time steps achieved by TRPO-SmartHash on Montezuma's Revenge (RAM observations)

Figure 5: SmartHash results on Montezuma's Revenge (RAM observations): the solid line is the mean average undiscounted return per iteration, while the shaded areas represent the one standard deviation, over 5 seeds. Curves are shown for exact enemy locations, ignored enemies, and random enemy locations.

⁵ The bonus scaling is chosen by assuming all states are visited uniformly and the average bonus reward should remain the same for any grid size.

During our pursuit, we had another interesting discovery: the ideal hash function should not simply cluster states by their visual similarity, but instead by their relevance to solving the MDP. We experimented with including enemy locations in the first two rooms into SmartHash (s = 10), and observed that the average score dropped to 1672 (at iteration 1000). Though it is important for the agent to dodge enemies, the agent also erroneously "enjoys" watching enemy motions at a distance (since new states are constantly observed) and "forgets" that its main objective is to enter other rooms. An alternative hash function keeps the same entry "enemy locations", but instead only puts randomly sampled values in it, which surprisingly achieves better performance (3112). However, by ignoring enemy locations altogether, the agent achieves a much higher score (5661) (see Figure 5). In retrospect, we examine the hash codes generated by BASS-SimHash and find that the codes clearly distinguish between visually different states (including various enemy locations), but fail to emphasize that the agent needs to explore different rooms.
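As an illustration of the kind of domain-informed hash discussed above: the RAM indices below follow Table 5 in the appendix, but the exact SmartHash construction beyond those indices (and the optional skull entry) is our own assumption.

```python
import numpy as np

def smart_hash(ram, grid_size=10, include_skull=False):
    """Illustrative SmartHash-style code from selected RAM entries: room
    number, agent x/y (coarsened to a grid of size grid_size), beam walls,
    and object flags."""
    code = (int(ram[3]),                   # room number
            int(ram[42]) // grid_size,     # agent x coordinate, coarsened
            int(ram[43]) // grid_size,     # agent y coordinate, coarsened
            int(ram[27]),                  # beam walls on/off
            int(ram[67]))                  # object existence flags (1st room)
    if include_skull:                      # adding enemy info hurt exploration above
        code += (int(ram[47]),)            # skull x coordinate
    return code

ram = np.random.randint(0, 256, size=128)  # Atari 2600 RAM is 128 bytes
print(smart_hash(ram))
```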
Again this example showcases the importance of encoding relevant information in designing hash functions."}, {"section_index": "5", "section_name": "4 RELATED WORK", "section_text": "Classic count-based methods such as MBIE (Streh1 & Littman]2005), MBIE-EB and (Kolter & Ng2009) solve an approximate Bellman equation as an inner loop before the agent takes an action (Strehl & Littman!2008). As such, bonus rewards are propagated immediately throughout the state-action space. In contrast, contemporary deep RL algorithms propagate the bonus signal based on rollouts collected from interacting with environments, with value-based (Mnih et al.[2015) or policy gradient-based (Schulman et al.]2015] Mnih et al.]2016) methods, at limited speed. In addition, our proposed method is intended to work with contemporary deep RL algorithms, it differs from classical count-based method in that our method relies on visiting unseen states first, before the bonus reward can be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling the gaps between our method and classic theories is an important direction of future research.\nAnother type of exploration is curiosity-based exploration. These methods try to capture the agent's surprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers novel states. We refer the reader to Schmidhuber (2010) and Oudeyer & Kaplan(2007) for an extensive review on curiosity and intrinsic rewards.\nThe most related exploration strategy is proposed byBellemare et al. (2016), in which an exploration bonus is added inversely proportional to the square root of a pseudo-count quantity. A state pseudo count is derived from its log-probability improvement according to a density model over the state space, which in the limit converges to the empirical count. Our method is similar to pseudo-count approach in the sense that both methods are performing approximate counting to have the necessary generalization over unseen states. The difference is that a density model has to be designed and learned to achieve good generalization for pseudo-count whereas in our case generalization is obtained by a wide range of simple hash functions (not necessarily SimHash). Another interesting connection total number of states visited. Another method similar to hashing is proposed by[Abel et al.[(2016) which clusters states and counts cluster centers instead of the true states, but this method has yet to be tested on standard exploration benchmark problems.\nA related line of classical exploration methods is based on the idea of optimism in the face of uncertainty Brafman & Tennenholtz 2002) but not restricted to using counting to implement \"optimism\"', e.g. R-Max (Brafman & Tennenholtz 2002), UCRL (Jaksch et al.]2010), and E3 (Kearns & Singh]2002) These methods, similar to MBIE and MBIE-EB, have theoretical guarantees in tabular settings..\nBayesian RL methods (Kolter & Ng2009] Guez et al.]2014] Sun et al.]2011Ghavamzadeh et al. 2015), which keep track of a distribution over MDPs, are an alternative to optimism-based methods. Extensions to continuous state space have been proposed by|Pazis & Parr (2013) and|Osband et al. (2016b).\nSeveral exploration strategies for deep RL have been proposed to handle high-dimensional state space. recently. Houthooft et al. (2016) propose VIME, in which information gain is measured in Bayesian neural networks modeling the MDP dynamics, which is used an exploration bonus.Stadie et al.. 
(2015) propose to use the prediction error of a learned dynamics model as an exploration bonus.. Thompson sampling through bootstrapping is proposed by|Osband et al.(2016a), using bootstrapped Q-functions."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully acknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FwO)."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Marc G Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, 2016.\nThis paper demonstrates that a generalization of classical counting techniques through hashing is able to provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs using function approximators, resulting in near state-of-the-art performance across benchmarks. It. provides a simple yet powerful baseline for solving MDPs that require informed exploration..\nMarc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment:. An evaluation platform for general agents. Journal of Artificial Intelligence Research. 2012\nRonen I Brafman and Moshe Tennenholtz. R-max-a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213-231, 2002.\nMoses S Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the\nGraham Cormode and S Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. Journal of Algorithms, 55(1):58-75, 2005.\nThomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning Journal of Machine Learning Research. 11:1563-1600. 2010\nMichael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002\nDavid G Lowe. Object recognition from local scale-invariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on. volume 2. pp. 1150-1157. Ieee. 1999\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp 448-456, 2015.\nVolodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. 
arXiv preprint arXiv:1602.01783, 2016.\nArun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.\nBradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.\nAlexander L Strehl and Michael L Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.\nYi Sun, Faustino Gomez, and Jurgen Schmidhuber. Planning to be surprised: Optimal Bayesiar exploration in dynamic environments. In Artificial General Intelligence, pp. 41-51. 2011.\nAaron van den Oord. Nal Kalchbrenner. and Koray Kavukcuoglu. Pixel recurrent neural networks. Ir International Conference on Machine Learning (ICML), 2016.\nAlexander Vezhnevets, Volodymyr Mnih, John Agapiou, Simon Osindero, Alex Graves, Oriol Vinyals. and Koray Kavukcuoglu. Strategic attentive writer for learning macro-actions. In Advances in Neural Information Processing Systems (NIPS), 2016..\nZiyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.\nJohn Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region\nAlexander L Strehl and Michael L Littman. A theoretical analysis of model-based interval estimation In International Conference on Machine I earnino. (CML). pp. 856-863. 2005\nHado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning functions across many orders of magnitudes. arXiv preprint arXiv:1602.07714, 2016a."}, {"section_index": "8", "section_name": "A.1 HYPERPARAMETER SETTINGS", "section_text": "For the rllab experiments, we used batch size 5ooo for all tasks except SwimmerGather, for which we. used batch size 50ooo. CartpoleSwingup makes use of a neural network policy with one layer of 32. tanh units. The other tasks make use of a two layer neural network policy of 32 tanh units each for. MountainCar and HalfCheetah, and of 64 and 32 tanh units for SwimmerGather. The outputs are. modeled by a fully factorized Gaussian distribution N(u. -D). in which u is modeled as the network output, while is a parameter. CartPoleSwingup makes use of a neural network baseline with one. layer of 32 ReLU units, while all other tasks make use of a linear baseline function. For all tasks, we. used TRPO step size 0.01 and discount factor y = 0.99. We choose SimHash parameter k = 32 and. bonus coefficient = 0.01, found through a coarse grid search..\nFor Atari experiments, a batch size of 1ooooo is used, while the KL divergence step size is set tc 0.01. The policy and baseline both have the following architecture: 2 convolutional layers with respectively 16 and 32 filters, sizes 8 8 and 4 4, strides 4 and 2, using no padding, feeding intc a single hidden layer of 256 units. The nonlinearities are rectified linear units (ReLUs). The inpu frames are downsampled to 52 52. The input to policy and baseline consists of the 4 previous frames, corresponding to the frame skip of 4. The discount factor was set to y = 0.995. All input. 
are rescaled to [-1, 1] element-wise. All experiments used 5 different training seeds, except the experiments with the learned hash code, which uses 3 different training seeds. Batch normalizatior (Ioffe & Szegedy|2015) is used at each policy and baseline layer. TRPO-pixel-SimHash uses binary codes of size k = 256; BASS (TRPO-BASS-SimHash) extracts features using cell size C = 20 anc B = 20 bins. The autoencoder for the learned embedding (TRPO-AE-SimHash) uses a binary hidder layer of 512 bit, which are projected to 64 bit.\nRAM states in Atari 2600 games are integer-valued vectors over length 128 in the range [0, 255] Experiments on Montezuma's Revenge with RAM observations use a policy consisting of 2 hidden. layers, each of size 32. RAM states are rescaled to a range [-1, 1]. Unlike images, only the current RAM is shown to the agent. Experiment results are averaged over 10 random seeds..\nThe autoencoder used for the learned hash code has a 512 bit binary code layer, using sigmoid units. to which uniform noise U(-a, a) with a = 0.3 is added. The loss function Eq. (3, using = 10. is updated every Jupdate = 3 iterations. The architecture looks as follows: an input layer of size. 52 52, representing the image luminance is followed by 3 consecutive 6 6 convolutional layers. with stride 2 and 96 filters feed into a fully connected layer of size 1024, which connects to the binary. code layer. This binary code layer feeds into a fully-connected layer of 1024 units, connecting to a fully-connected layer of 2400 units. This layer feeds into 3 consecutive 6 6 transposed convolutional. layers of which the final one connects to a pixel-wise softmax layer with 64 bins, representing the. pixel intensities. Moreover, label smoothing is applied to the different softmax bins, in which the. log-probability of each of the bins is increased by O.003, before normalizing. The softmax weights. are shared among each pixel. All output nonlinearities are ReLUs; Adam (Kingma & Ba] 2015) is. used as an optimization scheme; batch normalization (Ioffe & Szegedyl 2015) is applied to each layer. The architecture was shown in Figure[1of Section|2.3"}, {"section_index": "9", "section_name": "A.2 DESCRIPTION OF THE ADAPTED RLLAB TASKS", "section_text": "This section describes the continuous control environments used in the experiments. The tasks are. implemented as described inDuan et al.(2016), following the sparse reward adaptation of|Houthooft. et al.(2016). The tasks have the following state and action dimensions: CartPoleSwingup, S R4. A c R; MountainCar S c R3, A R1; HalfCheetah, S c R20, A R6; SwimmerGather,. S R33, A R2. For the sparse reward experiments, the tasks have been modified as follows. In. CartPoleSwingup, the agent receives a reward of +1 when cos() > 0.8, with the pole angle. In. MountainCar, the agent receives a reward of +1 when the goal state is reached, namely escaping. the valley from the right side. Therefore, the agent has to figure out how to swing up the pole in. the absence of any initial external rewards. In HalfCheetah, the agent receives a reward of +1 when\nxbody > 5. As such, it has to figure out how to move forward without any initial external reward. 
The time horizon is set to T = 500 for all tasks."}, {"section_index": "10", "section_name": "A.3 EXAMPLES OF ATARI 2600 RAM ENTRIES", "section_text": "Table 5: Interpretation of particular RAM entries in Montezuma's Revenge

RAM index  Group       Meaning
3          room        room number
42         agent       x coordinate
43         agent       y coordinate
52         agent       orientation (left/right)
27         beam walls  on/off
83         beam walls  beam wall countdown (on: 0, off: 36 -> 0)
0          counter     counts from 0 to 255 and repeats
55         counter     death scene countdown
67         objects     existence of objects (doors, skull and key) in the 1st room
47         skull       x coordinate (both 1st and 2nd rooms)"}, {"section_index": "11", "section_name": "A.4 ANALYSIS OF LEARNED BINARY REPRESENTATION", "section_text": "Figure 6 shows the downsampled codes learned by the autoencoder for several Atari 2600 games (Frostbite, Freeway, and Montezuma's Revenge). Each row depicts 50 consecutive frames (from 0 to 49, going from left to right, top to bottom). The pictures in the right column depict the binary codes that correspond with each of these frames (one frame per row). Figure 7 shows the reconstructions of several subsequent images according to the autoencoder.

Table 5 lists the semantic interpretation of certain RAM entries in Montezuma's Revenge. SmartHash, as described in Section 3.4, makes use of RAM indices 3, 42, 43, 27, and 67. "Beam walls" are deadly barriers that occur periodically in some rooms.

Figure 6: Frostbite, Freeway, and Montezuma's Revenge: subsequent frames (left) and corresponding code (right); the frames are ordered from left (starting with frame number 0) to right, top to bottom; the vertical axis in the right images corresponds to the frame number.

Figure 7: Freeway: subsequent frames and corresponding code (top); the frames are ordered from left (starting with frame number 0) to right, top to bottom; the vertical axis in the right images corresponds to the frame number. Within each image, the left picture is the input frame, the middle picture the reconstruction, and the right picture the reconstruction error.

We experimented with directly building a hashing dictionary with keys φ(s) and values the state counts, but observed an unnecessary increase in computation time. Our implementation converts the integer hash codes into binary numbers and then into the "bytes" type in Python. The hash table is a dictionary using those bytes as keys.

However, an alternative technique called Count-Min Sketch (Cormode & Muthukrishnan, 2005), with a data structure identical to counting Bloom filters (Fan et al., 2000), can count with a fixed integer array and thus reduce computation time. Specifically, let p^1, ..., p^l be distinct large prime numbers and define φ^j(s) = φ(s) mod p^j. The count of state s is returned as min_{1≤j≤l} n^j(φ^j(s)). To increase the count of s, we increment n^j(φ^j(s)) by 1 for all j. Intuitively, the method replaces φ by weaker hash functions, while it reduces the probability of over-counting by reporting counts agreed by all such weaker hash functions. The final hash code is represented as (φ^1(s), ..., φ^l(s)).

Throughout all experiments above, the prime numbers for the counting Bloom filter are 999931, 999953, 999959, 999961, 999979, and 999983, which we abbreviate as "6M".
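The counting scheme just described can be sketched as follows (an illustrative implementation of ours; only the listed primes are taken from the text): increments update every row, queries return the minimum, so counts can be over- but never under-estimated.

```python
import numpy as np

class CountMinSketch:
    """Counting with weaker hashes phi_j(s) = phi(s) mod p_j, as in the
    counting-Bloom-filter discussion above."""
    def __init__(self, primes=(999931, 999953, 999959, 999961, 999979, 999983)):
        self.primes = primes
        self.tables = [np.zeros(p, dtype=np.int64) for p in primes]

    def increment(self, code):        # code: integer hash phi(s)
        for table, p in zip(self.tables, self.primes):
            table[code % p] += 1

    def count(self, code):
        return min(int(t[code % p]) for t, p in zip(self.tables, self.primes))

cms = CountMinSketch()
for code in (123456789, 123456789, 987654321):
    cms.increment(code)
print(cms.count(123456789), cms.count(987654321), cms.count(555))  # 2 1 0
```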
In addition, we experimented with 6 other prime numbers, each approximately 15 M, which we abbreviate as \"90 M' As we can see in Figure[8l counting states with a dictionary or with Bloom filters lead to similar performance, but the computation time of latter is lower. Moreover, there is little difference between direct counting and using a very larger table for Bloom filters, as the average bonus rewards are almost the same, indicating the same degree of exploration-exploitation trade-off. On the other hand, Bloom filters require a fixed table size, which may not be known beforehand.\nand define (s) = (s) mod p'. The count of state s is returned as min1<i<1 n ( .To increase S\n8000 0.012 direct count direct count 7000 Bloom 6M Bloom 6M 0.010 6000 Bloom 90M Bloom 90M 5000 0.008 4000 0.006 3000 2000 0.004 1000 0.002 0 1000 0.000! 0 100 200 300 400 500 0 100 200 300 400 500 (a) Mean average undiscounted return (b) Average bonus reward.\nFigure 8: Statistics of TRPO-pixel-SimHash (k = 256) on Frostbite. Solid lines are the mean, while the shaded areas represent the one standard deviation. Results are derived from 10 random seeds. Direct counting with a dictionary uses 2.7 times more computations than counting Bloom filters (6 M. Or 90 M).\nTheory of Bloom Filters Bloom filters (Bloom[1970) are popular for determining whether a data. sample s' belongs to a dataset D. Suppose we have l functions $' that independently assign each. data sample to an integer between 1 and p uniformly at random. Initially 1, 2, ...,p are marked as O.. Then every s E D is \"inserted\"' through marking $' (s) as 1 for all j. A new sample s' is reported as a member of D only if $' (s) are marked as 1 for all j. A bloom filter has zero false negative rate (any. s e D is reported a member), while the false positive rate (probability of reporting a nonmember as a member) decays exponentially in l..\nThough Bloom filters support data insertion, it does not allow data deletion. Counting Bloom filter (Fan et al.2000) maintain a counter n(.) for each number between 1 and p. Inserting/deleting corresponds to incrementing/decrementing n(' (s)) by 1 for all j. Similarly, s is considered a\nWe now derive the probability of over-counting. Let s be a fixed data sample (not necessarily inserted yet) and suppose a dataset D of N samples are inserted. We assume that p' > N. Let n := min1j<i n' ($' (s)) be the count returned by the Bloom filter. We are interested in computing. Prob(n > O|s D). Due to assumptions about $', we know n' ((s)) ~ Binomial (N, 1). Therefore,.\nProb(n > Ols e D). Due to assumptions about '. we know n((s)) ~ Binomial(N, - .Therefore\nCount-Min sketch is designed to support memory-efficient counting without introducing too many over-counts. It maintains a separate count n' for each hash function $' defined as $' (s) = $(s) mod p', where p' is a large prime number. For simplicity, we may assume that p' ~ p Vj and J. assigns s to any of 1, . . ., p with uniform probability..\nbe the count returned by the Bloom filter. We are interested in computing := min1<i<1 n\nProb(n > 0,s D) Prob(n > 0|s D) = Prob(s D) Prob(n > 0) - Prob(s E D) Prob(s D) Prob(n > 0) Prob(s D) 1 Prob(nJ(oJ(s)) > O) (1 - 1/pl)N (1-(1-1/p)N) (1 -1/pl)N 1- 0-\nApart from the experimental results shown in Table[1and Table[3] additional experiments have been performed to study several properties of our algorithm.\nHyperparameter sensitivity To study the performance sensitivity to hyperparameter changes, w. 
e focus on evaluating TRPO-RAM-SimHash on the Atari 2600 game Frostbite, where the method has a clear advantage over the baseline. Because the final scores can vary between different random seeds, we evaluated each set of hyperparameters with 30 seeds. To reduce computation time and cost, RAM states are used instead of image observations.

Table 6: TRPO-RAM-SimHash performance robustness to hyperparameter changes on Frostbite
β           0     0.01  0.05  0.1   0.2   0.4   0.8   1.6
k = 64      397   879   2464  2243  2489  1587  1107  441
k = 128     397   1475  4248  2801  3239  3621  1543  395
k = 256     397   2583  4497  4437  7849  3516  2260  374
(β = 0 is the baseline without exploration bonus; its score of 397 does not depend on k.)

The results are summarized in Table 6. Herein, k refers to the length of the binary code for hashing, while β is the multiplicative coefficient for the reward bonus, as defined in Section 2.2. This table demonstrates that most hyperparameter settings outperform the baseline (β = 0) significantly. Moreover, the final scores show a clear pattern in response to changing hyperparameters. Small β-values lead to insufficient exploration, while large β-values cause the bonus rewards to overwhelm the true rewards. With a fixed k, the scores are roughly concave in β, peaking at around β = 0.2. Higher granularity k leads to better performance. Therefore, it can be concluded that the proposed exploration method is robust to hyperparameter changes in comparison to the baseline, and that the best parameter settings can be obtained from a relatively coarse-grained grid search.

State and state-action counting  Continuing the results in Table 6, the performance of state-action counting is studied using the same experimental setup, summarized in Table 7. In particular, a bonus reward r+ = β/√(n(s, a)) instead of r+ = β/√(n(s)) is assigned. These results show that the relative performance of state counting compared to state-action counting depends highly on the selected hyperparameter settings. However, we notice that the best performance is achieved using state counting with k = 256 and β = 0.2.

Table 7: Performance comparison between state counting (left of the slash) and state-action counting (right of the slash) using TRPO-RAM-SimHash on Frostbite
β           0.01         0.05         0.1          0.2          0.4          0.8          1.6
k = 64      879 / 976    2464 / 1491  2243 / 3954  2489 / 5523  1587 / 5985  1107 / 2052  441 / 742
k = 128     1475 / 808   4248 / 4302  2801 / 4802  3239 / 7291  3621 / 4243  1543 / 1941  395 / 362
k = 256     2583 / 1584  4497 / 5402  4437 / 5431  7849 / 4872  3516 / 3175  2260 / 1238  374 / 96"}]
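For completeness, a small sketch (ours, not the paper's code) of the two bonus variants compared in Table 7, i.e., state counting versus state-action counting over the same hashed states:

```python
import math
from collections import defaultdict

state_counts = defaultdict(int)          # n(phi(s))
state_action_counts = defaultdict(int)   # n(phi(s), a)

def bonuses(phi_s, a, beta=0.2):
    """Return (state-count bonus, state-action-count bonus) for one visit."""
    state_counts[phi_s] += 1
    state_action_counts[(phi_s, a)] += 1
    return (beta / math.sqrt(state_counts[phi_s]),
            beta / math.sqrt(state_action_counts[(phi_s, a)]))

for phi_s, a in [(0, 0), (0, 1), (0, 0)]:
    print(bonuses(phi_s, a))
```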
Sk8csP5ex
[{"section_index": "0", "section_name": "THE LOSS SURFACE OF RESIDUAL NETWORKS: ENSEMBLES & THE ROLE OF BATCH NORMALIZATION", "section_text": "Etai Littwin & Lior Wolf"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Residual Networks (He et al.]2015) (ResNets) are neural networks with skip connections. Thes. networks, which are a specific case of Highway Networks (Srivastava et al.]2015), present state. of the art results in the most competitive computer vision tasks including image classification anc object detection.\nOur analysis reveals the mechanism for this dynamic behavior and explains the driving force behind it. This mechanism remarkably takes place within the parameters of Batch Normalization (Ioffe & Szegedy2015), which is mostly considered as a normalization and a fine-grained whitening mechanism that addresses the problem of internal covariate shift and allows for faster learning rates\nWe show that the scaling introduced by batch normalization determines the depth distribution in the virtual ensemble of the ResNet. These scales dynamically grow as training progresses, shifting the. effective ensemble distribution to bigger depths.\nThe main tool we employ in our analysis is spin glass models.Choromanska et al.(2015a) have created a link between conventional networks and such models, which leads to a comprehensive study of the critical points of neural networks based on the spin glass analysis of|Auffinger et al. (2013). In our work, we generalize these results and link ResNets to generalized spin glass models. These models allow us to analyze the dynamic behavior presented above. Finally, we apply the results of Auffinger & Arous (2013) in order to study the loss surface of ResNets."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep Residual Networks present a premium in performance in comparison to con- ventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensembles are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network's depth, as training progresses, it becomes deeper and deeper. The main mechanism that con- trols the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models. which we also use in order to study the number of critical points in the optimization of Residual Networks.\nThe success of residual networks was attributed to the ability to train very deep networks when employing skip connections (He et al.| 2016). A complementary view is presented byVeit et al. (2016), who attribute it to the power of ensembles and present an unraveled view of ResNets that depicts ResNets as an ensemble of networks that share weights, with a binomial depth distribution around half depth. They also present experimental evidence that short paths of lengths shorter than half-depth dominate the ResNet gradient during training\nThe analysis presented here shows that ResNets are ensembles with a dynamic depth behavior When starting the training process, the ensemble is dominated by shallow networks, with depths. lower than half-depth. As training progresses, the effective depth of the ensemble increases. This. 
Increase in depth allows the ResNet to increase its effective capacity as the network becomes more and more accurate."}, {"section_index": "3", "section_name": "2 A RECAP OF CHOROMANSKA ET AL. (2015A", "section_text": "A simple feed forward fully connected network N, with p layers and a single output unit is consid ered. Let n; be the number of units in layer i, such that no is the dimension of the input, and n, = 1 It is further assumed that the ReLU activation functions denoted by R( are used. The output Y of the network given an input vector x E Rd can be expressed as\nd p Y= (k) W. i=1 j=1 k=1\nDefinition 1. The mass o. f the network N is defined as i Y\nd Y p EA[Y]= k Xij P i=1 j=1 k=1\n(w) =EA[max(0,1-YxY)] La(w) =EA[[Yx-Y]]\nwhere Y, is a random variable corresponding to the true label of sample x. In order to equate either loss with the hamiltonian of the p-spherical spin glass model, a few key approximations are made:\nA4 Spherical constraint - The following is assumed:\nThese assumptions are made for the sake of analysis, and do not necessarily hold. The validity of these assumption was posed as an open problem in[Choromanska et al.[(2015b), where a different. degree of plausibility was assigned to each. Specifically, A1, as well as the independence assumption.. of Aj, were deemed unrealistic, and A2 - A4 as plausible. For example, A1 does not hold since. each input x; is associated with many different paths and x1 = x2 = ...xi. SeeChoromanska. et al.(2015a) for further justification of these approximations.\nWe briefly summarize [Choromanska et al.(2015a), which connects the loss function of multilayer networks with the hamiltonian of the p spherical spin glass model, and state their main contributions and results. The notations of our paper are summarized in Appendix|A|and slightly differ from those inChoromanska et al.(2015a).\nwhere the first summation is over the network inputs x1...xd, and the second is over all paths from input to output. There are = I=1n such paths and Vi, xi1 = x2 = ...xiy. The variable Aij E {0,1} denotes whether the path is active, i.e., whether all of the ReLU units along this path are producing positive activations, and the product II%=1 wf' represents the specific weight .(k) confi guration w1, ..w?, multiplying x, given path j. It is assumed throughout the paper that the input variables are sampled i.i.d from a normal Gaussian distribution.\nThe variables A,; are modeled as independent Bernoulli random variables with a success probability p, i.e., each path is equally likely to be active. Therefore,\nThe task of binary classification using the network V with parameters w is considered, using either the hinge loss Lh. r or the absolute loss L:\nA2 Redundancy in network parameterization - It is assumed the set of all the network weights. [w1, w2...w contains only A unique weights such that A < N.. A3 Uniformity - It is assumed that all unique weights are close to being evenly distributed on the graph of connections defining the network N. Practically, this means that we assume every. node is adjacent to an edge with any one of the A unique weights..\nA 1 < w: Y i=1\nUnder A1-A4, the loss takes the form of a centered Gaussian process on the sphere SA-1(/A) Specifically, it is shown to resemble the hamiltonian of the a spherical p-spin glass model given by:\nA 1 r 11 Hp,A(w) = Xi1 Wik A p- 2 i1...ip k=1\nwhere xi1... 
are independent normal Gaussian variables\nIn Auffinger et al.(2013), the asymptotic complexity of spherical p spin glass model is analyzed based on random matrix theory. In Choromanska et al.[(2015a) these results are used in order to shed light on the optimization process of neural networks. For example, the asymptotic complexity of spherical spin glasses reveals a layered structure of low-index critical points near the global op timum. These findings are then given as a possible explanation to several central phenomena found in neural networks optimization, such as similar performance of large nets, and the improbability of getting stuck in a \"bad' local minima.\nAs part of our work, we follow a similar path. First, a link is formed between residual networks and the hamiltonian of a general multi-interaction spherical spin glass model as given by..\np A Hp,(w)= Er II Xi1,i2...ir Wik A 2 r= i1,i2...ir=1 k=1\nwhere e1...ep are positive constants. Then, usingAuffinger & Arous(2013), we obtain insights or residual networks. The other part of our work studies the dynamic behavior of residual networks where we relax the assumptions made for the spin glass model.\nWe begin by establishing a connection between the loss function of deep residual networks and the hamiltonian of the general spherical spin glass model. We consider a simple feed forward fully connected network N, with ReLU activation functions and residual connections. For simplicity oi notations without the loss of generality, we assume n1 = ... = np = n. no = d as before. In our ResNet model, there exist p -- 1 identity connections skipping a single layer each, starting from the first hidden layer. The output of layer l > 1 is given by:\nNi(x) =R(W'Ni-1(x))+Ni-1(x\np d Yr r y=LL r)(k W r=1 i=1 j=1 k=1\nDefinition 2. The mass of a depth r subnetwork in N is defined as wr =dY\nThe properties of redundancy in network parameters and their uniform distribution, as described ir Sec.2] allow us to re-index Eq.9\nA 1 w?=1 i=1\nwhere W, denotes the weight matrix connecting layer l - 1 with layer l. Notice that the first hidden layer has no parallel skip connection, and so N1(x) = R(W' x). Without loss of generality, the scalar output of the network is the sum of the outputs of the output layer p and is expressed as\nwhereA.?) E {0,1} denotes whether path j of length r is open, and Vj, j', r, r' x, = x. The i3/. residual connections in W imply that the output Y is now the sum of products of different Iengths indexed by r. Since our ResNet model attaches a skip connection to every layer except the first.. 1 < r < p. See Sec.6[regarding models with less frequent skip connections..\nEach path of length r includes r 1 non-skip connections (those involving the first term in Eq.8 and not the second, identity term) out of layers l = 2.p. Therefore, ~r = (-1)nr. We define the following measure on the network:\nLemma 1. Assuming assumptions A2 - A4 hold, and E Z, then the output can be expressed after reindexing as:\nyr p ^ AT Y = i) Wik 12... r=1i1,i2...ir=1 j=1 k=1\nXr p EA[Y] = II Wik r=1i1,i2...ir=1j=1 k=1\nIn order to connect ResNets to generalized spherical spin glass models, we denote the variables\nA Si1,i2...ir Xi1,i2...ir I1,i2...ix En[?,2. j=1\nLemma 2. Assuming A2 - A3 hold, and n E N then V he following holds..\nThe independence assumption A1 was not assumed yet, and[14|holds regardless. 
Assuming A4 and denoting the scaled weights w, = w;, we can link the distribution of Y to the distribution on x:\nA I Xi1,i2...ir Wik /d A i1,i2...ir=1 k=1 A > I Xi1,i?...ir W i1,i2...ir=1 k=1\nwhere C1, C2 are positiye constants that do not. ffect the optimization process\nNote that since the input variables x1...xd are sampled from a centered Gaussian distribution (de pendent or not), then the set of variables x1,i2.... are dependent normal Gaussian variables.\nWe approximate the expected output EA(Y) with Y by assuming the minimal value in|13|holds. all weight configurations of a particular length in Eq. [10|will appear the same number of times. When A n, the uniformity assumption dictates that each configuration of weights would appear approximately equally regardless of the inputs, and the expectation values would be very close to\np A L I = I1 Wik r=1 i1,2...iz=1 k=1\nThe following lemma gives a generalized expression for the binary and hinge losses of the network\nLN(x) = C1 + CY\nWe denote the important quantities:\nn\nTheorem 1. Assuming p E N, we have that.. 1\n1 lim -arg max( p->0o\nTheorem 2. For any Q1 Q2, and assuming Q1p, Q2p p E N, it holds that. 1+B\nQ2P lim -1 p->0 r=Q1P\nThm.2 implies that for deep residual networks, the contribution of weight products of order far. an ensemble of potentially shallow conventional nets. The next Lemma shows that we can shift the effective depth to any value by simply controlling C..\nLemma 4. For any integer 1 < k < p there exists a global scaling parameter C such tha arg max,(er(C)) = k.\nThe expression for the output of a residual net in Eq.15 provides valuable insights into the machinery at work when optimizing such models. Thm.|1|and|2Jimply that the loss surface resembles that of ar ensemble of shallow nets (although not a real ensemble due to obvious dependencies), with variou depths concentrated in a narrow band. As noticed inVeit et al.(2016), viewing ResNets as ensembles of relatively shallow networks helps in explaining some of the apparent advantages of these models particularly the apparent ease of optimization of extremely deep models, since deep paths barely affect the overall loss of the network. However, this alone does not explain the increase in accuracy of deep residual nets over actual ensembles of standard networks. In order to explain the improvec performance of ResNets, we make the following claims:\nThe model in Eq.16 has the form of a spin glass model, except for the dependency between the variables i1,i2...tr. We later use an assumption similar to A1 of independence between these vari- ables in order to link the two binary classification losses and the general spherical spin glass model However, for the results in this section, this is not necessary.\nThe series (er)P-1 determines the weight of interactions of a specific length in the loss surface. No- tice that for constant depth p and large enough , arg max. (er) = p. Therefore, for wide networks, where n and, therefore, are large, interactions of order p dominate the loss surface, and the effect of the residual connections diminishes. Conversely, for constant and a large enough p (deep net- works), we have that arg max,(er) < p, and can expect interactions of order r < p to dominate the loss. The asymptotic behavior of e is captured by the following lemma:\nAs the next theorem shows. 
the epsilons are concentrated in a narrow band near the maximal value\nA simple global scaling of the weights is, therefore, enough to change the loss surface, from an ensemble of shallow conventional nets, to an ensemble of deep nets. This is illustrated in Fig.1(a-c) for various values of . In a common weight initialization scheme for neural networks, C = - (Orr & Muller2003f[Glorot & Bengio|2010). With this initialization and A = n, = p and the maximal weight is obtained at less than half the network's depth limp->oo arg max,(er) < . Therefore, at the initialization, the loss function is primarily influenced by interactions of considerably lower order than the depth p, which facilitates easier optimization.\n1. The distribution of the depths of the networks within the ensemble is controlled by th scaling parameter C.\np d Yr LLL LN(x,w) =C1 +C2 r)k W r=1 i=1 j=1 k=1\nNotice that the addition of a multiplier r indicates that the derivative is increasingly influenced by deeper networks."}, {"section_index": "4", "section_name": "4.1 BATCH NORMALIZATION", "section_text": "Batch normalization has shown to be a crucial factor in the successful training of deep residua networks. As we will show, batch normalization layers offer an easy starting condition for the. network, such that the gradients from early in the training process will originate from extremely. shallow paths.\nWe consider a simple batch normalization procedure, which ignores the additive terms, has the out- put of each ReLU unit in layer l normalized by a factor oj and then is multiplied by some parameter A. The output of layer l > 1 is therefore:\nR(WNi-1(x))+Ni-1(x) Ni(x) = 0\nwhere oj is the mean of the estimated standard deviations of various elements in the vector R(W,' Ni-1(x)). Furthermore, a typical initialization of batch normalization parameters is to set. Vi, i = 1. In this case, providing that units in the same layer have equal variance ot, the recursive relation E[Wi+1(x)?] = 1 + E[W(x)?] holds for any unit j in layer l. This, in turn, implies that the. output of the ReLU units should have increasing variance o? as a function of depth. Multiplying the weight parameters in deep layers with an increasingly small scaling factor , effectively reduces the influence of deeper paths, so that extremely short paths will dominate the early stages of opti-. mization. We next analyze how the weight scaling, as introduced by batch normalization, provides. a driving force for the effective ensemble to become deeper as training progresses..\nWe consider a simple network of depth p, with a single residual connection skipping p - m layers. We further assume that batch normalization is applied at the output of each ReLU unit as described in Eq.22 We denote by l1...lm the indices of layers that are not skipped by the residual connection.\n2. During training, C changes and causes a shift of focus from a shallow ensemble to deeper and deeper ensembles, which leads to an additional capacity. 3. In networks that employ batch normalization, C is directly embodied as the scale parameter X. The starting condition of X = 1 offers a good starting condition that involves extremely shallow nets.\nFor the remainder of Sec.4, we relax all assumptions, and assume that at some point in time the loss can be expressed:\nwhere C1, C2 are some constants that do not affect the optimization process. In order to gain addi tional insight into this dynamic mechanism, we investigate the derivative of the loss with respect to the scale parameter C. 
Using Eq.[9[for the output, we obtain:\np d 2r 0LN(x,w) rx(?A(?) II r)(k W ac r=1 i=1 j=1 k=1\n0.45 0.35 0.35 0.4 0.3 0.3 0.35 0.25 0.25 0.3 0.25 0.2 0.2 0.2 0.15 0.15 0.15 0.1 0.1 0.1 0.05 0.05 0.05 0 20 40 60 80 100 20 40 60 80 100 20 40 60 80 100 (a) (b) (c) 1.2 0.8 0.6 0.4 0.20 5000 10000 15000 20000 500 1000 1500 2000 (d) (e) (f)\nFigure 1: (a) A histogram of er(), r = 1..p, for = 0.1 and p = 100 . (b) Same for = 0.5. (c) Same for = 2. (d) Values (y-axis) of the batch normalization parameters X, (x-axis) for. 10 layers ResNet trained to discriminate between 50 multivariate Gaussians (see Appendix |C|for. more details). Higher plot lines indicate later stages of training. (e) The norm of the weights of a residual network, which does not employ batch normalization, as a function of the iteration. (f) The. asymptotic of the mean number of critical points of a finite index as a function of 3..\ndYm d Yp p N(x,w) =Xm m (m) I (m)(k) (m) s(p) ,(p)(k) wij W xij i=1 j=1 k=1 i=1 j=1 k=1 Lm(x,w) + Lp (x.u\nWe denote by w, the derivative operator with respect to the parameters w, and the gradient g = VwL(x, w) = gm + gp evaluated at point w..\n0Ln(x,w - g)\naLn(x,w- g) aai\nThm.3 suggests that || will increase for layers l that do not have skip-connections. Conversely, if layer l has a parallel skip connection, then || will increase if ||gp||2 > l|gm|[2, where the later condition implies that shallow paths are nearing a local minima. Notice that an increase in |Aigl...lm results in an increase in [p], while [m] remains unchanged, therefore shifting the balance into deeper ensembles.\nThis steady increase of |], as predicted in our theoretical analysis, is also backed in experimen. tal results, as depicted in Fig.1[d). Note that the first layer, which cannot be skipped, behaves differently than the other layers. More experiments can be found in Appendix|C.\nIt is worth noting that the mechanism for this dynamic property of residual networks can also be. observed without the use of batch normalization, as a steady increase in the L2 norm of the weights as shown in Fig.1[e). In order to model this, consider the residual network as discussed above. without batch normalization layers. Recalling, ||w||2 = CA, w = w, the loss of this network is. expressed as:\nd Ym d Yp p LN(x,w) =Cm m) (m) (m)(k LL m) 1(p) I1 37(p)(k) xij wij in i=1 j=1 k=1 i=1 j=1 k=1 Lm(x,w) + Lp(x,W\n0LN(x,w - g (m|gm|l2 + pl|gp|l2 + (m + p)gp gm) dc\nThm.4|indicates that if either l|gpl|2 or l|gml|2 is dominant (for example, near local minimas of the shallow network, or at the start of training), the scaling of the weights C will increase. This expansion will, in turn, emphasize the contribution of deeper paths over shallow paths, and in- crease the overall capacity of the residual network. This dynamic behavior of the effective depth of residual networks is of key importance in understanding the effectiveness of these models. While optimization starts off rather easily with gradients largely originating from shallow paths, the overall advantage of depth is still maintained by the dynamic increase of the effective depth.\nWe now present the results of[Auffinger & Arous(2013) regarding the asymptotic complexity in the case of limA->oo of the multi-spherical spin glass model given by:.\nA He,^=- Er A r- 2 r=2 i1,...ir=1\nA 8 1 e=1 w=1, ^ i=1 r=2\ner(r- 1) a2 = r=2 r=2\nNote that for the single interaction spherical spin model a2 = 0. 
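To visualize how the ensemble distribution (epsilon_r) shifts towards higher-order interactions as beta grows, the sketch below uses weights of the simplified form epsilon_r proportional to (p-1 choose r-1) beta^r, as in the proof of Lemma 4, normalized to unit l2 norm, and compares the location of their maximum with the beta/(1+beta) limit of Theorem 1 for the same beta values as in Fig. 1(a-c); the normalization and the concrete numbers are our assumptions, used only for illustration.

```python
import numpy as np
from math import comb

def epsilon(p, beta):
    """Interaction weights eps_r ~ C(p-1, r-1) * beta**r for r = 1..p,
    normalized so that sum_r eps_r**2 = 1 (unit l2 norm). This simplified
    form follows the proof sketch of Lemma 4 (constants ignored)."""
    e = np.array([comb(p - 1, r - 1) * beta ** r for r in range(1, p + 1)],
                 dtype=float)
    return e / np.linalg.norm(e)

p = 100
for beta in (0.1, 0.5, 2.0):          # same beta values as in Fig. 1(a-c)
    e = epsilon(p, beta)
    r_star = int(np.argmax(e)) + 1    # dominant interaction order
    print(f"beta = {beta:4.1f}: argmax_r eps_r / p = {r_star / p:.2f}, "
          f"beta / (1 + beta) = {beta / (1 + beta):.2f}")
```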
The index of a critical point of He,A is defined as the number of negative eigenvalues in the hessian V2 He.A evaluated at the critical. point w.\nDefinition 4. For any O < k < A and u E R, we denote the random number Crtx.k(u, e) as the number of critical points of the hamiltonian in the set BX = {AX|X E (-oo, u)} with index k\nCrtA.k(u, e) = 1{He,A E Au}1{i(V2He,A)=k w:VHe,A=0\nwhere J,... are independent centered standard Gaussian variables, and e = (er)r>2 are positive. real numbers such that r=2 er2r < oo. A configuration w of the spin spherical spin-glass model is a vector in RA satisfying the spherical constraint:.\n1. (29) =1 A =1 r=2 Note that the variance of the process is independent of e: OX E[H?.A] =A1-re? 2 = ^ =A (30) Definition 3. We define the following:. O 8 U' =) er, v\" =er(r-1), Q =v\" + v' (31)\n8 A 8 E[H?,A]= A1-r r e? w?)=^ e=A r=2 i=1 r=1\nEq.33|provides the asymptotic mean total number of critical points with non-diverging index k. It is presumed that the SGD algorithm will easily avoid critical points with a high index that have many descent directions, and maneuver towards low index critical points. We, therefore, investigate how the mean total number of low index critical points vary as the ensemble distribution embodied in er Jr>2 changes its shape by a steady increase in 3.\nFig.1(f) shows that as the ensemble progresses towards deeper networks, the mean amount of low index critical points increases, which might cause the SGD optimizer to get stuck in local minima This is, however, resolved by the the fact that by the time the ensemble becomes deep enough the loss function has already reached a point of low energy as shallower ensembles were more dominant earlier in the training. In the following theorem, we assume a finite ensemble such tha 1 Er2r ~ 0.\nTheorem 5. For any k E N, p > 1, we denote the solution to the following constrained optimization nroblems.\np e = 1 e* = argmax0g(R,e) s.t E r=2\nr = p otherwise\nThm.5|implies that any heterogeneous mixture of spin glasses contains fewer critical points of a. finite index, than a mixture in which only p interactions are considered. Therefore, for any distribu tion of e that is attainable during the training of a ResNet of depth p, the number of critical points is. lower than the number of critical points for a conventional network of depth p.."}, {"section_index": "5", "section_name": "6 DISCUSSION", "section_text": "In this work, we use spin glass analysis in order to understand the dynamic behavior ResNets dis. play during training and to study their loss surface. In particular, we use at one point or another the. assumptions of redundancy in network parameters, near uniform distribution of network weights, in. dependence between the inputs and the paths and independence between the different copies of the. nput as described in Choromanska et al.[(2015a). The last two assumptions, i.e., the two indepen dence assumptions, are deemed in Choromanska et al.[(2015b) as unrealistic, while the remaining. are considered plausible\nOur analysis of critical points in ensembles (Sec. 5) requires all of the above assumptions. However, Thm. 1 and 2, as well as Lemma. 4, do not assume the last assumption, i.e., the independence between the different copies of the input. Moreover, the analysis of the dynamic behavior of residual nets (Sec. 
4) does not assume any of the above assumptions.\nOur results are well aligned with some of the results shown in Larsson et al.(2016), where it is noted empirically that the deepest column trains last. This is reminiscent of our claim that the deeper networks of the ensemble become more prominent as training progresses. The authors of Larsson et al.(2016) hypothesize that this is a result of the shallower columns being stabilized at a certain point of the training process. In our work, we discover the exact driving force that comes into play.\nIn addition, our work offers an insight into the mechanics of the recently proposed densely connecte. networks (Huang et al.[2016). Following the analysis we provide in Sec. 3, the additional shortcu paths decrease the initial capacity of the network by offering many more short paths from inpu to output, thereby contributing to the ease of optimization when training starts. The driving forc mechanism described in Sec. 4.2 will then cause the effective capacity of the network to increase\nNote that the analysis presented in Sec. 3 can be generalized to architectures with arbitrary skip connections, including dense nets. This is done directly by including all of the induced sub networks in Eq.9] The reformulation of Eq.[10|would still holds, given that I, is modified accordingly.\n0k(R,e) W +w"}, {"section_index": "6", "section_name": "7 CONCLUSION", "section_text": "Ensembles are a powerful model for ResNets, which unravels some of the key questions that have. surrounded ResNets since their introduction. Here, we show that ResNets display a dynamic en semble behavior, which explains the ease of training such networks even at very large depths, while. still maintaining the advantage of depth. As far as we know, the dynamic behavior of the effective. capacity is unlike anything documented in the deep learning literature. Surprisingly, the dynamic mechanism typically takes place within the outer multiplicative factor of the batch normalization. module."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Antonio Auffinger and Gerard Ben Arous. Complexity of random smooth functions on the high dimensional sphere. Annals of Probability, 41(6):4214-4247, 11 2013.\nAnna Choromanska, Yann LeCun, and Gerard Ben Arous. Open problem: The landscape of the los surfaces of multilayer networks. In COLT, pp. 1756-1760, 2015b\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. arXiv preprint arXiv:1512.03385, 2015.\nGao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks arXiv preprint arXiv:1608.06993, 2016\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015\nGustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net works without residuals. arXiv preprint arXiv:1605.07648, 2016.\nGenevieve B Orr and Klaus-Robert Muller. Neural networks: tricks of the trade. Springer, 2003"}, {"section_index": "8", "section_name": "A SUMMARY OF NOTATIONS", "section_text": "Table[1presents the various symbols used throughout this work and their meaning\nAnna Choromanska. Mikael Henaff. Michael Mathieu, Gerard Ben Arous. and Yann LeCun. The loss surfaces of multilayer networks. In A1STATS, 2015a..\nAndreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of relatively shallow networks. 
In NIPS, 2016."}, {"section_index": "9", "section_name": "SYMBOL", "section_text": "The dimensionality of the input x The output of layer i of network given input x The final output of the network V True label of input x Loss function of network V Hinge loss Absolute loss The depth of network V Weights of the network w E RA A positive scale factor such that ||w||2 = C Scaled weights such that w = w The number of units in layers l > 0 The number of unique weights in the network The total number of weights in the network V The weight matrix connecting layer l - 1 to layer l in V. The hamiltonian of the p interaction spherical spin glass model. The hamiltonian of the general spherical spin glass model. A Total number of paths from input to output in network V yd Total number of paths from input to output in network N of length r Yr d ReLU activation function Bernoulli random variable associated with the ReLU activation functio Parameter of the Bernoulli distribution associated with the ReLU unit 3) multiplier associated with paths of length r in V. pnC VA Normalization factor Batch normalization multiplicative factor in layer l. The mean of the estimated standard deviation various elements in R(W\nProof of Lemma[1] There are a total of r paths of length r from input to output, and a total of Ar unique r length configurations of weights. The uniformity assumption then implies that each. configuration of weights is repeated Ir times. By summing over the unique configurations, and re. indexing the input we arrive at Eq.10.\nProof of Lemma[] From[12 we have that S1,2.., is defined as a sum of r inputs. Since there are only p distinct inputs, it holds that for each 1,i2..., there exists a sequence Q = (at)i=1 E N such that -1 Q; = Xr, and Si1,2.i, = 1 Q,x. We, therefore, have that E[?,...,] = |||l3 Note that the minimum value of E[&? ?, 2..r] is a solution to the following:\nmin(E[?,...]) = mina(||a|2) s.ta1 -1 E N."}, {"section_index": "10", "section_name": "DESCRIPTION", "section_text": "lim Bap) = H() + alog() Og p->0\nProof of Thm.2 For brevity, we provide a sketch of the proof. It is enough to show that limp->00 O17 = 0 for < 1. Ignoring the constants in the binomial terms. we have\nQ1P Q1p Q1 1 lim lim 9.- lim p->0o p->0o p->0o r=1\n/here z2 which can be expressed using the Legendre polynomial of order p:\nProof of Lemma|4 For simplicity, we ignore the constants in the binomial coefficient, and assume er = () r. Notice that for * = (), we have that arg max,(er(B*)) = p, arg max,(er(*)) = 1 and arg max,(er(1)) = . From the monotonicity and continuity of r, any value 1 k p can be attained. The linear dependency (C) = pnC completes the proof. A\nOLN(x,w- g) dLN(x,w) dLN(x,w) aai 9 aai aai\nOLN(x,w - g) gp +lgp dai\n-\nJsing taylor series expansion:. dLn(x, w- g). dLN(x,w) dLN(x,w) (40) aLN(x,w) Substituting Vw - (gm + gp) in40|we have: dLN(x,w- gw) I < 0 (41) 9m a 9p 9m. + And hence: dLN(x,w - gw) (42 Finally: (43) 1 OLN(x,w) 2. Since paths of length m skip layer l, we have that .. I, 9p. Therefore: dLv(x,w - g) (44) 9m9p - n? The condition ||gpl|2 > ||gm||2 implies that gmgp + l|gpll2 > 0, completing the proof.\ndLN(x,w- gw) 9m + gp)'(gm + gp) = lgm +gplI2 < 0\ngm+gp|l2)]=|i|(1+\ndLN(x,w) g = (mLm(x,w) +pLp(x,w)) ngm + pgp)' dc = (mLm(x,w) +pLp(x, lgm + pgp) mgm + pgp)\n0LN(x,w - gw mgm + pgp)'(gm + gp) dC -(m||gp|l2 + p||gp|l2 + (m +p)gp gm\neT(V\"_ V)e eT(V+ V)\nnaxe0k(R,e) < max\nFig. 
1(d) and 1(e) report the experimental results of a straightforward setting, in which the task is to classify a mixture of 10 multivariate Gaussians in 50D. The input is therefore of size 50. The loss employed is the cross entropy loss of ten classes. The network has 10 blocks, each containing. 20 hidden neurons, a batch normalization layer, and a skip connection. Training was performed on. 10,000 samples, using SGD with minibatches of 50 samples..\nAs noted in Sec. 4.2, the dynamic behavior can be present in the Batch Normalization multiplica. tive coefficient or in the weight matrices themselves. In the following experiments, it seems that\nis orthogonal to the weights. We have that L(x,w) (mLm(x, w) +pLp(x, w)). Using taylor ac series expansion we have: dLv(x, w - g) dLN(x,w) dLN(x,w) uVw (45) ac ac ac For the last term we have: dLN(x,w) V w g = (mLm(x,w) + pLp(x, w ac =(mLm(x,w) + pLp(x, W mgm + pgp)'g,(46) d n45 we have: dLN(x,w- gw) 0-(mgm+pgp)(gm+ gp) aC -(m|gp|l2+p|gp|l2+(m+p)ggm) (47) Proof of Thm[5]Inserting Eq.31|into Eq.[33|we have that: qr=2er(r-1) _=2r(r-2) (48) r=2 e?r r=2 e?r2 We denote the matrices V' and V\" such that Vf, = ro, and V/f = r(r -- 1)oj. We then have: eT(V\" _V')e (49) eT(V\"+ V')e maxe0k(R,e) < max min V! - V) nax =0k(R,e*) (50)\nOLN(x,w- g) dLN(x,w) dLN(x,w) 9 ac ac ac\n2r(r-2\nFig.2|depicts the results. There are two types of plots: Fig. 2(a,c) presents for CIFAR-10 and CIFAR-100 respectively the magnitude of the various convolutional layers for multiple epochs (sim ilar in type to Fig. 1(d) in the paper). Fig.2(b,d) depict for the two datasets the mean of these norms over all convolutional layers as a function of epoch (similar to Fig. 1(e))\nAs can be seen, the dynamic phenomenon we describe is very prominent in the public ResNe implementation when applied to these conventional datasets: the dominance of paths with fewe. skip connections increases over time. Moreover, once the learning rate is reduced in epoch 81 the phenomenon we describe speeds up\nIn Fig. 3|we present the multiplicative coefficient of the Batch Normalization when not absorbed As future work, we would like to better understand why these coefficients start to decrease once the learning rate is reduced. As shown above, taking the magnitude of the convolutions into account the dynamic phenomenon we study becomes even more prominent at this point. The change oi location from the multiplicative coefficient of the Batch Normalization layers to the convolutions themselves might indicate that Batch Normalization is no longer required at this point. Indeed Batch Normalization enables larger training rates and this shift happens exactly when the training rate is reduced. A complete analysis is left for future work\nuntil the learning rate is reduced, the dynamic behavior is manifested in the Batch Normaliza- tion multiplicative coefficients and then it moves to the convolution layers themselves. We there- fore absorb the BN coefficients into the convolutional layer using the public code of https: //github.com/e-lab/torch-toolbox/tree/master/BN-absorber Note that the multiplicative coefficient of Batch Normalization is typically refereed to as y. However, throughout our paper, since we follow the notation of|Choromanska et al.[(2015a), y refers to the number of paths. The multiplicative factor of Batch normalization appears as A in Sec. 
4.

[Figure 2 panels: per-layer norms of the convolutional weights (Batch Normalization factors absorbed) for multiple epochs, and their mean per epoch, for CIFAR-10 and CIFAR-100.]

Figure 2: (a,c) The norm of the convolutional layers once the factors of the subsequent Batch Normalization layers are absorbed, shown for CIFAR-10 and CIFAR-100 respectively. Each graph is a different epoch, see legend. Waving is due to the interleaving architecture of the convolutional layers. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the convolutional layers' weights per epoch.

Figure 3: The norms of the multiplicative Batch Normalization coefficient vectors. (a,c) The norm of the coefficients, shown for CIFAR-10 and CIFAR-100 respectively. Each graph is a different epoch (see legend). Since there is no monotonic increase between the epochs in this graph, it is harder to interpret. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the multiplicative factors per epoch.

[Figure 3 panels: per-layer norms of the Batch Normalization gamma vectors for multiple epochs, and their mean per epoch, for CIFAR-10 and CIFAR-100.]"}]
BJxhLAuxg
[{"section_index": "0", "section_name": "A DEEP LEARNING APPROACH FOR JOINT VIDEC FRAME AND REWARD PREDICTION IN ATARI GAMES", "section_text": "Felix Leibfried\nfelix.leibfried@qmail.com\nabout the environment dynamics or the reward structure. In this paper we take a step towards using model-based techniques in environments with high-dimensional visual state space when system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demon- strate accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex. initially unknown environments."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "When humans or animals receive reward for taking a particular action in a given situation, the prob- ability is increased that they will act similarly in similar situations in the future. This is described by principles such as the law of effect (Thorndike1898), operant conditioning (Skinner1938) and trial-and-error learning (Thorpe 1979) in behaviorist psychology, and has inspired a discipline of artificial intelligence called reinforcement learning (RL,Sutton & Barto[(1998)). RL is concerned with finding optimal behavior policies in order to maximize agents' cumulative future reward.\nApproaches to RL can be divided into model-free and model-based approaches. In model-free ap. proaches, agents learn by trial and error but do not aim to explicitly capture the dynamics of the envi. ronment or the structure of the reward function underlying the environment. State-of-the-art model. free approaches, such as DQN (Mnih et al.]2015), effectively approximate so-called Q-values, i.e the value of taking specific actions in a given state, using deep neural networks. The impressive. effectiveness of these approaches comes from their ability to learn complex policies directly fror. high-dimensional input (e.g., video frames). Despite their effectiveness, model-free approaches re. quire large amounts of training data that have to be collected through direct interactions with the environment, which makes them expensive to apply in settings where interactions are costly (sucl as most real-world applications). Additionally, model-free RL requires access to reward observa tions during training, which is problematic in environments with sparse reward structure unles. coupled with an explicit exploration mechanism..\nRL approaches that explicitly learn statistics about the environment or the reward are generally. referred to as model-based-in a more narrow definition these statistics comprise environment dy. namics and the reward function. In recent work. model-based techniques were successfully use to learn statistics about cumulative future reward (Veness et al.|2015) and to improve exploratior. by favoring actions that are likely to lead to novel states (Bellemare et al.. 2016] Oh et al. 2015\n*Research conducted while interning at Microsoft\nNate Kushman & Katia Hofimann\nnkushman@microsoft.com katja.hofmann@microsoft.cor"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Reinforcement learning is concerned with learning to interact with environments that are initially unknown. 
State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model- based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure.\nresulting in substantially more data efficient learning compared to model-free approaches. When ar. accurate model of the true environment dynamics and the true reward function is available, model based approaches, such as planning via Monte-Carlo tree search (Browne et al.|2. 2012) outperfor model-free state-of-the-art approaches (Guo et al.]2014).\nOur empirical results on five Atari games demonstrate that our approach can successfully predic. cumulative reward up to roughly 200 frames. We complement our quantitative results with a de tailed error analysis by visualizing example predictions. Our results are the first to demonstrate the. feasibility of using a learned dynamics and reward model for accurate planning. We see this as a sig. nificant step towards data efficient RL in high-dimensional environments without prior knowledge.."}, {"section_index": "3", "section_name": "RELATED WORK AND MOTIVATION", "section_text": "Two lines of research are related to the work presented in this paper: model-based RL and optima. control theory. Model-based RL utilizes a given or learned model of some aspect of a task to, e.g.. reduce data or exploration requirements (Bellemare et al.2016f[Oh et al.[[2015f[Veness et al.[2015) Optimal control theory describes mathematical principles for deriving control policies in continuous action spaces that maximize cumulative future reward in scenarios with known system dynamics and known reward structure (Bertsekas20072005).\nThere has been recent interest in combining principles from optimal control theory and model-basec. learning in settings where no information on system dynamics is available a priori and instead has. to be acquired from visual data (Finn et al.]2016) |Wahlstrom et al.2015)Watter et al.[2015). The general idea behind these approaches is to learn a compressed latent representation of the visual. state space from raw images through autoencoder networks (Bengio2009) and to utilize the ac. quired latent representation to infer system dynamics. System dynamics are then used to specify a planning problem which can be solved by optimization techniques to derive optimal policies. Watte. et al.(2015) introduce an approach for learning system dynamics from raw visual data by jointly. training a variational autoencoder (Kingma & Welling2014) Rezende et al.]2014) and a state pre diction model that operates in the autoencoder's compressed latent state representation. A similai. approach for jointly learning a compressed state representation and a predictive model is pursued by. Wahlstrom et al.(2015).|Finn et al.(2016) devise a sequential approach that first learns a latent state. representation from visual data and that subsequently exploits this latent representation to augment. a robot's initial state space describing joint angles and end-effector positions. The augmented state. space is then used to improve estimates of local system dynamics for planning..\nThe approaches presented above assume knowledge of the functional form of the true reward signa and are hence not directly applicable in settings like ALE (and many real-world settings) where the eward function is initially unknown. 
Planning in such settings therefore necessitates learning botl ystem dynamics and reward function in order to infer optimal behavioral policies. Recent worl y Oh et al.(2015) introduced an approach for learning environment dynamics from pixel image and demonstrated that this enabled successful video frame prediction over up to 400 frames. Ii our current paper, we extend this recent work to enable reward prediction as well by modifying th network's architecture and training objective accordingly. The modification of the training objectiv ears a positive side effect: since our network must optimize a compound loss consisting of th ideo frame reconstruction loss and the reward loss, reward-relevant aspects in the video frames tc which the reconstruction loss alone might be insensitive are explicitly captured by the optimizatioi objective. In the subsequent section, we elucidate the approach fromOh et al.(2015) as well as ou extensions for reward prediction in more detail.\nA key open question is whether effective model-based RL is possible in complex settings where the. environment dynamics and the reward function are initially unknown, and the agent has to acquire. such knowledge through experience. In this paper, we take a step towards addressing this question by extending recent work on video frame prediction (Oh et al.|2015), which has been demonstrated to effectively learn system dynamics, to enable joint prediction of future states and rewards using a single latent representation. We propose a network architecture and training procedure for joint. state and reward prediction, and evaluate our approach in the Arcade Learning Environment (ALE,. Bellemare et al.(2013)).\nAction At most 18 Predicted reward Fc Lin Input Predicted frames 2048 next frame Softmax Conv Conv Conv Deconv Deconv Deconv 64,6x6 64,6x6 64.6x6 64,6x6 64,6x6 1, 6x6 pad 0,0 pad 2,2 pad 2,2 pad 2,2 pad 2,2 pad 0,0 stride 2 stride 2 stride 2 -( Stride 2 stride 2 stride 2 ReLU ReLU ReLU ReLL ReLL ReL ReL 4x84x84 64x40x40 64x20x20 64x10x10 1024 2048 2048 1024 64x10x10 64x20x20 64x40x40 1x84x84 Encoding Transformation Decoding and reward prediction\nAt most 18 reward -C Lin Input Predicted frames 2048 next frame Softmax Conv Conv Conv Deconv Deconv Deconv 64, 6x6 64, 6x6 64, 6x6 64, 6x6 64, 6x6 1,6x6 pad 0,0 pad 2,2 pad 2,2 pad 2,2 pad 2,2 pad 0,0 stride 2 stride 2 stride 2 stride 2 stride 2 stride 2 ReLU ReLU ReLU ReLU ReLU ReLU ReLU Lin 4x84x84 64x40x40 64x20x20 64x10x10 1024 2048 2048 1024 64x10x10 64x20x20 64x40x40 1x84x84 Encoding Transformation Decoding and reward prediction\nFigure 1: Network architecture for joint video frame and reward prediction. The architecture com prises three stages: an encoding stage mapping current input frames to some compressed latent. representation, a transformation stage integrating the current action into the latent representation. through element-wise vector multiplication denoted by ' ', and a final predictive stage for recon- structing the frame of the next time step and the current reward. The network uses three different types of neuron layers ('Conv' for convolutional, 'Deconv' for deconvolutional and 'Fc' for forward. connection) in combination with three different types of activation functions ('ReLU', 'Softmax' and. 'Lin' for linear activations). The dimensional extend of individual layers is either depicted beneath. or within layers. 
The network part coloured in red highlights the extension for reward prediction."}, {"section_index": "4", "section_name": "3.1 VIDEO FRAME PREDICTION", "section_text": "The deep network proposed by[Oh et al. (2015) for video frame prediction in Atari games aims at learning a function that predicts the video frame St+1 at the next time step t + 1, given the current history of frames st-h+1:t with time horizon h and the current action at taken by the agent-see Section[3.1 Here, we extend this work to enable joint video frame and reward prediction such that the network anticipates the current reward r as well-see Sections|3.2|and|3.3\nThe video-frame-predictive architecture fromOh et al. (2015) comprisesthreeinformation processing stages: an encoding stage that maps input frames to some compressed latent represen- tation, a transformation stage that integrates the current action into the compressed latent represen- tation, and a decoding stage that maps the compressed latent representation to the predicted next frame- --see Figure [1 The initial encoding stage is a sequence of convolutional and forward oper- ations that map the current frame history St-h+1:t-a three-dimensional tensor-to a compressed feature vector henc. The transformation stage converts this compressed feature vector henc into an action-conditional representation hdec in vectorized form by integrating the current action at. The current action a is represented as a one-hot vector with length varying from game to game since there are at least 3 and at most 18 actions in ALE. The integration of the current action into the compressed feature vector includes an element-wise vector multiplication-depicted as ' ' in Fig- ure|1-with the particularity that the two neuron layers involved in this element-wise multiplication are the only layers in the entire network without bias parameters, see Section 3.2 in|Oh et al. (2015). Finally, the decoding stage performs a series of forward and deconvolutional operations (Dosovit- skiy et al.| 2015Zeiler et al.]2010) by mapping the action-conditional representation hdec of the current frame history St-h+1:t and the current action at to the predicted video frame St+1 of the next time step t + 1. Note that this necessitates a reshape operation at the beginning of the decoding cascade in order to transform the vectorized hidden representation into a three-dimensional tensor The whole network uses linear and rectified linear units (Glorot et al.[[2011) only. In all our experi- ments, following DQN (Mnih et al.|2015), the video frames processed by the network are 84 84 grey-scale images down-sampled from the full-resolution 210 160 Atari RGB images from ALE. Following Mnih et al.(2015) and[Oh et al.[(2015). the history frame time horizon h is set to 4."}, {"section_index": "5", "section_name": "3.2 REWARD PREDICTION", "section_text": "In this section we detail our proposed network architecture for joint state and reward prediction. Our model assumes ternary rewards which result from reward clipping in line with Mnih et al.(2015) Original game scores in ALE are integers that can vary significantly between different Atari games and the corresponding original rewards are clipped to assume one of three values: -1 for negative rewards, 0 for no reward and 1 for positive rewards. 
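For illustration, the following PyTorch sketch mirrors the joint architecture of Figure 1, including the 3-way softmax head over the clipped reward values just described. The original implementation uses Chainer, so the class name, the exact placement of activations and the way the action-conditional multiplication is factored are our assumptions read off Figure 1 and Section 3.1, not the authors' code.

```python
import torch
import torch.nn as nn

class JointFrameRewardModel(nn.Module):
    """Sketch of the encoding / transformation / decoding network of Figure 1,
    extended with a 3-way softmax head for the clipped reward (-1, 0, +1)."""

    def __init__(self, num_actions):
        super().__init__()
        # Encoding: 4 stacked 84x84 grey-scale frames -> 2048-dim vector h_enc.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, 6, stride=2, padding=0), nn.ReLU(),   # -> 64x40x40
            nn.Conv2d(64, 64, 6, stride=2, padding=2), nn.ReLU(),  # -> 64x20x20
            nn.Conv2d(64, 64, 6, stride=2, padding=2), nn.ReLU(),  # -> 64x10x10
            nn.Flatten(),
            nn.Linear(64 * 10 * 10, 1024), nn.ReLU(),
            nn.Linear(1024, 2048), nn.ReLU(),
        )
        # Transformation: element-wise product with an action embedding;
        # both factors come from linear layers without bias terms.
        self.frame_factor = nn.Linear(2048, 2048, bias=False)
        self.action_factor = nn.Linear(num_actions, 2048, bias=False)
        # Decoding: h_dec -> predicted next frame (1x84x84).
        self.decode_fc = nn.Sequential(
            nn.Linear(2048, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 10 * 10), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 6, stride=2, padding=2), nn.ReLU(),  # -> 64x20x20
            nn.ConvTranspose2d(64, 64, 6, stride=2, padding=2), nn.ReLU(),  # -> 64x40x40
            nn.ConvTranspose2d(64, 1, 6, stride=2, padding=0),              # -> 1x84x84
        )
        # Reward head: 3-way logits (softmax applied inside the loss).
        self.reward_head = nn.Linear(2048, 3)

    def forward(self, frames, action_onehot):
        h_enc = self.encoder(frames)                                 # (B, 2048)
        h_dec = self.frame_factor(h_enc) * self.action_factor(action_onehot)
        next_frame = self.decoder(self.decode_fc(h_dec).view(-1, 64, 10, 10))
        reward_logits = self.reward_head(h_dec)                      # (B, 3)
        return next_frame, reward_logits
```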
Because of reward clipping, rewards can be represented as vectors rt in one-hot encoding of size 3.\nIn Figure[1 our extension of the video-frame-predictive architecture from[Oh et al.(2015) to enab] reward prediction is highlighted in red. We add an additional softmax layer to predict the currer reward rt with information contained in the action-conditional encoding hdec. The motivation be hind this extension is twofold. First, our extension makes it possible to jointly train the networ with a compound objective that emphasizes both video frame reconstruction and reward predictior and thus encourages the network to not abstract away reward-relevant features to which the recor struction loss alone might be insensitive. Second, this formulation facilitates the future use of th model for reward prediction through virtual roll-outs in the compressed latent space, without th computational expensive necessity of reconstructing video frames explicitly-note that this require\nI T-1 K 3 1 (i) l] . ln Pt+k t+k A t+k It+k[' 2:I.T.K i=1 t=0 k=1 l=1 video frame reconstruction loss reward prediction loss\nreward prediction loss where s(i), .(i) (i) t+ k denotes the k-step look ahead probability values of the reward-predicting softmax layer--depicted tween video frame reconstruction and reward loss. The parameter T is a time horizon parameter that. determines how often a single trajectory sample i is unrolled into the future, and K determines the. look ahead prediction horizon dictating how far the network predicts into the future by using its own. video frame predicted output as input for the next time step. FollowingOh et al.(2015) and Michal-. ski et al.(2014), we apply a curriculum learning (Bengio et al.2009) scheme by successively in- creasing K in the course of training such that the network initially learns to predict over a short time horizon and becomes fine-tuned on longer-term predictions as training advances (see Section|A.1. for details). The network parameters 0 are updated by stochastic gradient descent, derivatives of the. training objective w.r.t. 0 are computed with backpropagation through time (Werbos1988).\nFollowing previous work (Oh et al.]2015, Mnih et al.]2015), actions are chosen by the agent on. every fourth frame and are repeated on frames that were skipped. Skipped frames and repeated actions are hence not part of the data sets used to train and test the predictive network on, and. original reward values are accumulated over four frames before clipping.\nThe original training objective in Oh et al.(2015) consists of a video frame reconstruction loss in. terms of a squared loss function aimed at minimizing the quadratic l2-norm of the difference vector between the ground truth image and its action-conditional reconstruction. We extend this training objective to enable joint reward prediction. This results in a compound training loss consisting of the original video frame reconstruction loss and a reward prediction loss given by the cross entropy. Simard et al.| 2003) between the ground truth reward and the corresponding prediction:."}, {"section_index": "6", "section_name": "4 RESULTS", "section_text": "Our quantitative evaluation examines whether our joint model of system dynamics and reward func. tion results in a shared latent representation that enables accurate cumulative reward prediction. W assess cumulative reward prediction on test sets consisting of approximately 50,000 video frames pe game, including actions and rewards. 
Each network is evaluated on 1,o00 trajectories- suitable tc analyze up to 100-step ahead prediction- drawn randomly from the test set. Look ahead predictior is measured in terms of the cumulative reward error which is the difference between ground trutl. cumulative reward and predicted cumulative reward. For each game, this results in 100 empirical dis tributions over the cumulative reward error--one distribution for each look ahead step- consisting of 1,000 samples each (one for each trajectory). We compare our model predictions to a baseline. model that samples rewards from the marginal reward distribution observed on the test set for eacl game. Note that negative reward values are absent in the games investigated for this study..\nFigure 2|illustrates 20 of the 100 empirical cumulative reward error distributions in all games for. our network model in blue and for the baseline model in red (histograms, bottom), together with. the median and the 5 to 95 percentiles of the cumulative reward error over look ahead steps (top). Across all games, we observe that our joint state and reward prediction model accurately predicts fu-. ture cumulative rewards at least 20 look ahead steps, and that it predicts future rewards substantially more accurately than the baseline model. This is evidenced by cumulative reward error distributions that maintain a unimodal form with mode zero and do not flatten out as quickly as the distributions. for the random-prediction baseline model. Best results are achieved in Freeway and Q*bert where. the probability of zero cumulative reward error at 51 look ahead steps is still around 80% and 60%. respectively-see Figure2Note that 51 look ahead steps correspond to 204 frames because the underlying DQN agent, collecting trajectory samples for training and testing our model, skipped. every fourth frame when choosing an action-see Section|3.2 Lowest performance is obtained in Seaquest where the probability of zero cumulative reward error at 26 steps (104 frames) is around 40% and begins to flatten out soon thereafter-see Figure[2 Running the ALE emulator at a fre-. quency of 60fps, 26 steps correspond to more than 1 second real-time game play because of frame. skipping. Since our model is capable of predicting 26 steps ahead in less than 1 second, our model. enables real-time planning and could be therefore utilized in an online fashion..\nWe now turn our attention to error analysis. While the look ahead step at which errors become. prominent differs substantially from game to game, we find that overall our model underestimates. cumulative reward. This can be seen in the asymmetry towards positive cumulative reward error. values when inspecting the 5 to 95 percentile intervals in the first plot per each game in Figure2 We identify a likely cause in (pseudo-)stochastic transitions inherent in these games. Considering. Seaquest as our running example, objects such as divers and submarines can enter the scene ran-. domly from the right and from the left and at the same time have an essential impact on which rewards the agent can potentially collect. In the ground truth trajectories, the agent's actions are. reactions to these objects. If the predicted future trajectory deviates from the ground truth, targeted. actions such as shooting will miss their target, leading to underestimating true reward. We analyze. this effect in more detail in Section4.2\nAll our experiments were conducted in triplicate with different initial random seeds. 
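For concreteness, the cumulative reward error and the marginal-distribution baseline used in this evaluation can be computed as in the sketch below; the array shapes and the random stand-in data are ours, and in the actual evaluation the predicted rewards come from the network's reward softmax on 1,000 test trajectories.

```python
import numpy as np

def cumulative_reward_errors(true_rewards, pred_rewards):
    """true_rewards, pred_rewards: arrays of shape (num_trajectories, num_steps)
    holding clipped rewards per look ahead step. Returns an array of the same
    shape with the cumulative reward error at every look ahead step."""
    return np.cumsum(true_rewards, axis=1) - np.cumsum(pred_rewards, axis=1)

def marginal_baseline(true_rewards, test_set_rewards, rng):
    """Baseline that samples rewards i.i.d. from the marginal reward
    distribution observed on the test set."""
    values, counts = np.unique(test_set_rewards, return_counts=True)
    sampled = rng.choice(values, size=true_rewards.shape, p=counts / counts.sum())
    return cumulative_reward_errors(true_rewards, sampled)

# Hypothetical usage: 1,000 test trajectories, 100 look ahead steps.
rng = np.random.default_rng(0)
true_r = rng.integers(0, 2, size=(1000, 100))      # stand-in for ground truth rewards
pred_r = rng.integers(0, 2, size=(1000, 100))      # stand-in for model predictions
errors = cumulative_reward_errors(true_r, pred_r)  # one error distribution per step
print(np.median(errors, axis=0)[:5])
print(np.percentile(errors, [5, 95], axis=0)[:, :5])
```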
Different initial random seeds did not have a significant impact on cumulative reward prediction in all games except Freeway---see Section[A.5|for a detailed analysis. So far, we discussed results concerning reward prediction only. In the appendix, we also evaluate the joint performance of reward and video frame. prediction on the test set in terms of the optimization objective as in Oh et al.[(2015), where the authors report successful video frame reconstruction up to approximately 100 steps (400 frames). and observe similar results-see SectionA.6\nIn our evaluations, we investigate cumulative reward predictions quantitatively and qualitatively on five different Atari games (Q*bert, Seaquest, Freeway, Ms Pacman and Space Invaders). The quan- titative analysis comprises evaluating the cumulative reward prediction error---see Section4.1 The qualitative analysis comprises visualizations of example predictions in Seaquest-see Section4.2\nIn the previous section, we identified stochasticity in state transitions as a likely cause for relativel. low performance in long-term cumulative reward prediction in games such as Seaquest. In Seaques. objects may randomly enter a scene in a non-deterministic fashion. Errors in predicting these event result in predicted possible futures that do not match actually observed future states, resulting i. inaccurate reward predictions. Here, we support this hypothesis by visualizations in Seaquest illus. trating joint video frame and reward prediction for a single network over 20 steps (80 frames)--se. Figure 3|where ground truth video frames are compared to predicted video frames in terms of er. ror maps. Error maps emphasize the difference between ground truth and predicted frames througl. squared error values between pixels in black or white depending on whether objects are absent o. present by mistake in the network's prediction. Actions, ground truth rewards and model-predictec. rewards are shown between state transitions. Peculiarities in the prediction process are shown in red.\nIn step 2, the model predicts reward by mistake because the agent barely misses its target. Steps 4 to 6 report how the model predicts reward correctly but is off by one time step. Steps 7 to 14 depic problems caused by objects randomly entering the scene from the right which the model cannol predict. Steps 26 to 30 show how the model has problems to predict rewards at steps 26 and 28 as. these rewards are attached to objects the model failed to notice entering the scene earlier.."}, {"section_index": "7", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "Our positive results open up intriguing directions for future work. Our long-term goal is the inte. gration of model-based and model-free approaches for effective interactive learning and planning. in complex environments. Directions for achieving this long-standing challenge include the Dyna. method (Sutton1990), which uses a predictive model to artificially augment expensive training data,. and has been shown to lead to substantial reductions in data requirements in tabular RL approaches.. Alternatively, the model could be could be utilized for planning via Monte-Carlo tree search (Guo. et al. 2014 , Browne et al.f 2012).We hypothesize that such an approach would be particularly. beneficial in multi-task or life-long learning scenarios where the reward function changes but the. environment dynamics are stationary. Testing this hypothesis requires a flexible learning framework. 
where the reward function and the artificial environment can be changed by the experimenter in an arbitrary fashion, which is not possible in ALE where the environment and the reward function are fixed per game. A learning environment providing such flexibility is the recently released Malmo platform for Minecraft (Johnson et al., 2016), where researchers can create user-defined environments and tasks in order to evaluate the performance of artificial agents. In the shorter term, we envision improving the prediction performance of our network by regularization methods such as dropout and max-norm regularization (Srivastava et al., 2014), a state-of-the-art regularizer in supervised learning, and by modifying the optimization objective to enforce similarity between hidden encodings in multi-step ahead prediction and one-step ahead prediction, see Watter et al. (2015). Finally, extending our model to non-deterministic state transitions through dropout and variational autoencoder schemes (Kingma & Welling, 2014; Rezende et al., 2014) is a promising direction to alleviate the limitations highlighted in Section 4.2, paving the way for models that adequately predict and reason over alternative possible future trajectories.

[Figure 2 panels: cumulative reward error over look ahead steps (median and 5 to 95 percentiles) and the corresponding empirical error distributions for Q*bert, Seaquest, Freeway, Ms Pacman and Space Invaders, for our model and the random baseline.]

Figure 2: Cumulative reward error over look ahead steps in five different Atari games. There are two plots for each game. 
The top plot per game shows how the median and the 5 to 95 percentiles of the cumulative reward error evolve over look ahead steps for both our model (in blue) and a baseline model that samples rewards from the marginal reward distribution of the test set (in red). Each vertical slice of this concise representation corresponds to a single empirical distribution over the cumulative reward error. We depict these for every fifth look ahead step in the compound plots below for both models. These empirical error distributions demonstrate successful cumulative reward prediction over at least 20 steps (80 frames) in all five games, as evidenced by their zero-centered and unimodal shape in the first column of each compound plot per game.

Figure 3: Example predictions in Seaquest. Ground truth video frames, model predictions and error maps emphasizing differences between ground truth and predicted frames, in form of the squared error between pixel values, are compared column-wise. Error maps highlight objects in black or white respectively, depending on whether these objects are absent by mistake or present by mistake in the model's prediction. Actions taken by the agent as well as ground truth rewards ('rew') and reward predictions ('pred') are shown below video and error frames. Peculiarities in the prediction process are marked in red. The figure demonstrates how our predictive model fails to anticipate objects that randomly enter the scene from the right and rewards associated with these objects.

[Figure 3 panels: ground truth frames, predicted frames and error maps for prediction steps 1 to 14 and 26 to 30, each annotated with the action taken, the ground truth reward ('rew') and the predicted reward ('pred').]

D P Bertsekas. Dynamic programming & optimal control, volume 1. Athena Scientific, 2005.

D P Bertsekas. Dynamic programming & optimal control, volume 2. Athena Scientific, 2007.

C Browne, E Powley, D Whitehouse, S Lucas, P I Cowling, P Rohlfshagen, S Tavener, D Perez, S Samothrakis, and S Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1-49, 2012.

A Dosovitskiy, J T Springenberg, and T Brox. Learning to generate chairs with convolutional neural networks. 
C Finn, X Y Tan, Y Duan, T Darrell, S Levine, and P Abbeel. Deep spatial autoencoders for visuomotor learning. In Proceedings of the IEEE International Conference on Robotics and Automation, 2016.

X Glorot and Y Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2010.

X Glorot, A Bordes, and Y Bengio. Deep sparse rectifier neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2011.

R Goroshin, M Mathieu, and Y LeCun. Learning to linearize under uncertainty. In Advances in Neural Information Processing Systems, 2015.

K Gregor, I Danihelka, A Graves, D J Rezende, and D Wierstra. DRAW: a recurrent neural network for image generation. In Proceedings of the International Conference on Machine Learning, 2015.

X Guo, S Singh, H Lee, R Lewis, and X Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems, 2014.

G E Hinton, A Krizhevsky, and S D Wang. Transforming auto-encoders. In Proceedings of the International Conference on Artificial Neural Networks, 2011.

S Lange, M Riedmiller, and A Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In Proceedings of the International Joint Conference on Neural Networks, 2012.

V Michalski, R Memisevic, and K Konda. Modeling deep temporal dependencies with recurrent grammar cells. In Advances in Neural Information Processing Systems, 2014.

V Mnih, K Kavukcuoglu, D Silver, A A Rusu, J Veness, M G Bellemare, A Graves, M Riedmiller, A K Fidjeland, G Ostrovski, S Petersen, C Beattie, A Sadik, I Antonoglou, H King, D Kumaran, D Wierstra, S Legg, and D Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

J Oh, X Guo, H Lee, R Lewis, and S Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, 2015.

R Pascanu, T Mikolov, and Y Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, 2013.

B F Skinner. The behavior of organisms: an experimental analysis. Appleton-Century-Crofts, 1938.

N Srivastava, G E Hinton, A Krizhevsky, I Sutskever, and R Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.

N Srivastava, E Mansimov, and R Salakhutdinov. Unsupervised learning of video representations using LSTMs. In Proceedings of the International Conference on Machine Learning, 2015.
R S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the International Conference on Machine Learning, 1990.

R S Sutton and A G Barto. Reinforcement learning: an introduction. MIT Press, 1998.

E L Thorndike. Animal intelligence: an experimental study of the associative processes in animals. The Psychological Review: Monograph Supplements, 2(4):1-107, 1898.

W H Thorpe. The origins and rise of ethology. Heinemann Educational Books, 1979.

J Veness, M G Bellemare, M Hutter, A Chua, and G Desjardins. Compress and control. In Proceedings of the AAAI Conference on Artificial Intelligence, 2015.

A.1 TRAINING DETAILS

In our experiments, we modified the reward prediction loss slightly in order to prevent exploding gradient values by replacing the term -ln p with a first-order Taylor approximation for p-values smaller than e^-10; a similar technique is used in DQN (Mnih et al., 2015) to improve the stability of the optimization algorithm. To identify optimal values for the reward weight λ, we performed initial experiments on Ms Pacman without applying the aforementioned curriculum learning scheme, instead using a fixed look ahead parameter K = 1. We evaluated the effect of different λ-values ∈ {0.1, 1, 10, 100} on the training objective and identified λ = 1 for conducting further experiments, see Section A.2. After identifying an optimal reward weight, we conducted additional initial experiments without curriculum learning and with fixed look ahead parameter K = 1 on all of the five different Atari games used in this paper. We observed periodic oscillations in the reward prediction loss of the training objective in Seaquest, which were fixed by adding gradient clipping (Pascanu et al., 2013) with threshold parameter 1 to our optimization procedure; experiments investigating the effect of gradient clipping in Seaquest are reported in Section A.3. The fine-tuning effect of curriculum learning on the training objective in our final experiments is shown in Section A.4 for all of the five analysed Atari games.

We performed all our experiments in Python with Chainer and adhered to the instructions in Oh et al. (2015) as closely as possible. Trajectory samples for learning the network parameters were obtained from a previously trained DQN agent according to Mnih et al. (2015). The dataset for training comprised around 500,000 video frames per game, in addition to actions chosen by the DQN agent and rewards collected during game play. Video frames used as network input were 84 x 84 grey-scale images with pixel values between 0 and 255, down-sampled from the full-resolution 210 x 160 ALE RGB images. We applied a further preprocessing step by dividing each pixel by 255 and subtracting mean pixel values from each image, leading to final pixel values in [-1; 1]. A detailed network architecture is shown in Figure 1 in the main paper.

All weights in the network were initialized according to Glorot & Bengio (2010), except for the two layers that participate in the element-wise multiplication in Figure 1: the weights of the action-processing layer were initialized uniformly in the range [-0.1; 0.1], and the weights of the layer receiving the latent encoding of the input video frames were initialized uniformly in the range [-1; 1]. Training was performed for 1,500,000 minibatch iterations with a curriculum learning scheme increasing the look ahead parameter K every 500,000 iterations from 1 to 3 to 5. When increasing the look ahead parameter K for the first time after 500,000 iterations, the minibatch size I was also altered from 32 to 8, as was the learning rate for parameter updates from 10^-4 to 10^-5. Throughout the entire curriculum scheme, the time horizon parameter determining the number of times a single trajectory is unrolled into the future was T = 4. The optimizer for updating weights was Adam (Kingma & Ba, 2015) with gradient momentum 0.9, squared gradient momentum 0.95 and epsilon parameter 10^-8. In evaluation mode, network outputs were clipped to [-1; 1] so that strong activations could not accumulate over roll-out time in the network.
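As a concrete illustration of the preprocessing and the special weight initialization described above, a minimal NumPy sketch is given below. The function names, the grey-scale conversion by channel averaging, and the use of OpenCV for down-sampling are our own assumptions; the paper only specifies the target resolution, the division by 255, the mean subtraction and the uniform initialization ranges.

import numpy as np
import cv2  # assumption: any standard image-resizing routine would do here

def preprocess_frame(rgb_frame, mean_image):
    # rgb_frame: uint8 ALE frame of shape (210, 160, 3); mean_image: 84x84
    # mean image computed over the training frames, with values in [0, 1].
    grey = rgb_frame.astype(np.float32).mean(axis=2)   # RGB -> grey-scale (assumed conversion)
    small = cv2.resize(grey, (84, 84))                  # down-sample to 84 x 84
    scaled = small / 255.0                              # pixel values in [0, 1]
    return scaled - mean_image                          # final values roughly in [-1, 1]

def init_special_weights(shape_action, shape_encoding, rng=np.random):
    # The action-processing layer is initialized uniformly in [-0.1, 0.1], the
    # layer receiving the latent frame encoding uniformly in [-1, 1]; all other
    # layers use the Glorot & Bengio (2010) scheme (not shown here).
    w_action = rng.uniform(-0.1, 0.1, size=shape_action).astype(np.float32)
    w_encoding = rng.uniform(-1.0, 1.0, size=shape_encoding).astype(np.float32)
    return w_action, w_encoding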
A.2 EFFECT OF REWARD WEIGHT

To identify optimal values for the reward weight λ, we conducted initial experiments in Ms Pacman without curriculum learning and with a fixed look ahead horizon K = 1. We tested four different λ-values ∈ {0.1, 1, 10, 100} and investigated how the frame reconstruction loss and the reward loss of the training objective evolve over minibatch iterations, see Figure 4. Best results were obtained for λ = 1 and for λ = 10, whereas values of λ = 0.1 and λ = 100 led to significantly slower convergence and worse overall training performance respectively.

Figure 4: Effect of reward weight on training loss in Ms Pacman. Each of the four panels depicts one experiment with a different reward weight λ. Each panel shows how the training loss evolves over minibatch iterations in terms of two subplots reporting the video frame reconstruction and reward loss respectively. Each experiment was conducted three times with different initial random seeds, depicted in blue, green and red. Graphs were smoothed with an exponential window of size 1000.
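To make the loss construction from Sections A.1 and A.2 concrete, the sketch below combines a reconstruction term with a λ-weighted reward term whose -ln p factor is linearized below e^-10. Expanding the Taylor approximation around p0 = e^-10, the function names, and the squared-error stand-in for the reconstruction loss of Section 3.3 of the main paper are our assumptions; the paper only states that a first-order approximation is used below that threshold.

import numpy as np

P_MIN = np.exp(-10.0)  # threshold below which -ln p is linearized

def stable_neg_log(p):
    # -ln p, replaced by its first-order Taylor expansion around p0 = e^-10
    # for p < p0:  -ln p  ~  -ln p0 - (p - p0) / p0  =  11 - p * e^10.
    # This keeps the gradient bounded for vanishing probabilities.
    p = np.asarray(p, dtype=np.float64)
    taylor = 11.0 - p * np.exp(10.0)
    return np.where(p < P_MIN, taylor, -np.log(np.maximum(p, P_MIN)))

def compound_loss(frame_pred, frame_true, reward_probs, reward_targets=None, lam=1.0):
    # reward_probs: predicted probability assigned to the ground-truth reward
    # per sample (assumed to be gathered from a softmax output beforehand).
    diff = (frame_pred - frame_true).reshape(frame_pred.shape[0], -1)
    recon = 0.5 * np.mean(np.sum(diff ** 2, axis=1))     # stand-in reconstruction term
    reward = np.mean(stable_neg_log(reward_probs))       # stabilized reward term
    return recon + lam * reward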
A.3 EFFECT OF GRADIENT CLIPPING IN SEAQUEST

After identifying an optimal value for the reward weight (see Section A.2), we observed oscillations in the reward loss of the training objective in Seaquest (see the first column in Figure 5), which were resolved by adding gradient clipping to our optimization procedure (see the second and third columns in Figure 5). We tested two different values for the gradient clipping threshold (5 and 1), both of which worked, but for a value of 1 the oscillation vanished completely.

Figure 5: Effect of gradient clipping on training loss in Seaquest. The three panels compare experiments with no gradient clipping to those with gradient clipping using the threshold values 5 and 1 respectively. Subplots within each panel are similar to those in Figure 4, but display in the first row the evolution of the compound training loss in addition to the frame reconstruction and reward loss.

A.4 EFFECT OF CURRICULUM LEARNING

In our final experiments with curriculum learning, the networks were trained for 1,500,000 minibatch iterations in total, but the look ahead parameter K was gradually increased every 500,000 iterations from 1 to 3 to 5. The networks were hence initially trained on one-step ahead prediction only and later on fine-tuned on further-step ahead prediction. Figure 6 shows how the training objective evolves over iterations. The characteristic "bumps" in the training objective every 500,000 iterations as training evolves demonstrate improvements in long-term predictions in all games except Freeway, where the training objective assumed already very low values within the first 500,000 iterations and might therefore have been insensitive to further fine-tuning by curriculum learning.

Figure 6: Effect of curriculum learning on five different Atari games. Each panel corresponds to a different game; individual panels are structured in the same way as those in Figure 5.
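The curriculum described in Sections A.1 and A.4 amounts to a simple iteration-dependent schedule. The sketch below is one way to express it; the function name and dictionary layout are our own choices rather than anything prescribed by the paper.

def curriculum_settings(iteration):
    # Look ahead K grows 1 -> 3 -> 5 every 500,000 iterations; at the first
    # switch the minibatch size drops from 32 to 8 and the learning rate from
    # 1e-4 to 1e-5 (gradient clipping with threshold 1 is kept throughout).
    if iteration < 500_000:
        return {"K": 1, "minibatch_size": 32, "learning_rate": 1e-4}
    elif iteration < 1_000_000:
        return {"K": 3, "minibatch_size": 8, "learning_rate": 1e-5}
    else:
        return {"K": 5, "minibatch_size": 8, "learning_rate": 1e-5}

# Example: settings in effect at iteration 750,000 of the 1,500,000 total.
print(curriculum_settings(750_000))  # {'K': 3, 'minibatch_size': 8, 'learning_rate': 1e-05}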
A.5 EFFECT OF RANDOM SEEDS

We conducted three different experiments per game with different initial random seeds. The effect of different initial random seeds on the cumulative reward error is summarized in Figure 7, which reports how the median and the 5 to 95 percentiles of the cumulative reward error evolve over look ahead steps in the different experiments per game. Note that the results of the first column in Figure 7 are shown in Figure 2 from the main paper together with a more detailed analysis depicting empirical cumulative reward error distributions for some look ahead steps. The random initial seed does not seem to have a significant impact on the cumulative reward prediction, except for Freeway, where the network in the third experiment starts to considerably overestimate cumulative rewards at around 30 to 40 look ahead steps.

Figure 7: Effect of different initial random seeds on cumulative reward error. The plots show how the cumulative reward error evolves over look ahead steps in terms of the median and the 5 to 95 percentiles for our network model (blue) as well as the baseline model (red) in each experiment. Each row refers to a different game, each column refers to a different experiment per game initialized with a different random seed. The first column of this figure is presented in Figure 2 of the main paper, which explains the results in more detail by additionally illustrating empirical distributions over the cumulative reward error for some look ahead steps.
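Figures 2 and 7 compare against a baseline that samples rewards from the marginal reward distribution of the test set. A minimal sketch of how such a baseline and the cumulative reward error statistics could be computed is given below; the array shapes, function names and the use of NumPy percentiles are our assumptions about an implementation the paper does not spell out.

import numpy as np

def marginal_baseline(test_rewards, num_trajectories, horizon, rng=None):
    # Sample per-step rewards i.i.d. from the marginal reward distribution of the test set.
    rng = np.random.default_rng() if rng is None else rng
    flat = np.asarray(test_rewards).ravel()
    return rng.choice(flat, size=(num_trajectories, horizon))

def cumulative_reward_error(pred_rewards, true_rewards):
    # Cumulative reward error per trajectory and look ahead step,
    # plus the median and 5/95 percentile summaries shown in the plots.
    err = np.cumsum(pred_rewards, axis=1) - np.cumsum(true_rewards, axis=1)
    median = np.median(err, axis=0)
    p5, p95 = np.percentile(err, [5, 95], axis=0)
    return err, median, p5, p95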
In order to investigate this reward overestimation in Freeway further, we analyse visualizations of joint video frame and reward prediction for this particular seed (similar in style to Figure 3 from Section 4.2 in the main paper). The results are shown in Figure 8, where a peculiar situation occurs after 31 predicted look ahead steps. In Freeway, the agent's job is to cross a busy road from the bottom to the top without bumping into a car in order to receive reward. If the agent bumps into a car, the agent is propelled downwards, further away from the reward-yielding top. This propelled downwards movement happens even when the agent tries to move upwards. Exactly that kind of situation is depicted at the beginning of Figure 8 and occurs for this particular prediction after 31 steps. Our predictive model is, however, not able to correctly predict the aforementioned downwards movement caused by the agent hitting the car, which is highlighted in red throughout steps 31 to 35, documenting an increasing gap between ground truth and predicted agent position as the propelled downwards movement of the ground truth agent continues. In the course of further prediction, the network model assumes the agent reaches the reward-yielding top side of the road far too early, which results in a sequence of erroneous positive reward predictions throughout steps 41 to 50 and, as a side effect, seemingly causes the predictive model to lose track of other objects in the scene. In conclusion, this finding may serve as a possible explanation for the cumulative reward overestimation in that particular Freeway experiment.

Figure 8: Example predictions in Freeway over 20 steps. The figure is similar in nature to Figure 3 from the main paper, with the only difference that predictions are depicted from time step 31 onwards.

A.6 LOSS ON TEST SET

In the main paper, our analysis focuses on evaluating how well our model serves the purpose of cumulative reward prediction. Here, we evaluate network performance in terms of both the video frame reconstruction loss and the reward prediction loss on the test set, following the analysis conducted in Oh et al. (2015). For each game, we sample 300 minibatches of size I = 50 from the underlying test set and compute the test loss over K = 100 look ahead steps with the formula presented in Section 3.3 of the main paper for learning network parameters, but without averaging over look ahead steps, because we aim to illustrate the test loss as a function of look ahead steps. Statistics of this analysis are plotted in Figure 9.

Best overall test loss is achieved in Freeway and for initial look ahead steps (up to roughly between 40 and 60 steps) in Q*bert, which is in accordance with the results for cumulative reward prediction from the main paper. Also in line with the results from the main paper is the finding that the reward loss on the test set is worse in Seaquest, Ms Pacman and Space Invaders when compared to Q*bert (up to approximately 40 steps) and Freeway. The worst video frame reconstruction loss is observed for Space Invaders, in compliance with Oh et al. (2015), where the authors report that there are objects in the scene moving at a period of 9 time steps, which is hard to predict for a network only taking the last 4 frames from the last 4 steps as input for future predictions. At first sight, it might seem a bit surprising that the reward prediction loss in Space Invaders is significantly lower than in Seaquest and Ms Pacman for long-term ahead prediction despite the higher frame reconstruction loss in Space Invaders. A possible explanation for this paradox might be the frequency at which rewards are collected: this frequency is significantly higher in Seaquest and Ms Pacman than in Space Invaders. A reward prediction model with a bias towards zero rewards, as indicated by the main results in the paper, might therefore err less often in absolute terms when rewards are collected at a lower frequency, and may hence achieve a lower overall reward reconstruction loss.
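As a sketch of the per-step test statistics just described (and plotted in Figure 9), the following function aggregates per-minibatch losses that are kept separate for each look ahead step. The array layout and function name are assumptions; the loss values themselves would come from the model's forward passes over the 300 sampled minibatches.

import numpy as np

def test_loss_statistics(per_step_losses):
    # per_step_losses: array of shape (num_minibatches, K), e.g. (300, 100),
    # where losses are not averaged over look ahead steps, so each column
    # corresponds to one look ahead step.
    losses = np.asarray(per_step_losses)
    return {
        "mean": losses.mean(axis=0),
        "median": np.median(losses, axis=0),
        "p5": np.percentile(losses, 5, axis=0),
        "p95": np.percentile(losses, 95, axis=0),
        "min": losses.min(axis=0),
        "max": losses.max(axis=0),
    }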
Figure 9: Loss on test set over look ahead steps. Each row reports the loss on the test set over 100 look ahead steps for a different game. The first column illustrates the compound loss, consisting of the video frame reconstruction loss (second column) and the reward prediction loss (third column). The loss on the test set is computed according to Oh et al. (2015), similar to the training loss used for learning network parameters, however with a different look ahead parameter K = 100 and a different minibatch size I = 50, and without averaging over look ahead steps, since we aim to plot the test loss as a function of look ahead steps. For each game, the test loss is computed for 300 minibatches, resulting in an empirical distribution with 300 loss values per look ahead step. The figure shows the mean (in green), the median (in red), the 5 to 95 percentiles (in shaded blue) as well as minimum and maximum elements (in black dashed lines) of these empirical distributions.