forum_id
stringlengths 9
9
| sections
stringlengths 20.5k
121k
|
---|---|
HkwoSDPgg | [{"section_index": "0", "section_name": "SEMI-SUPERVISED KNOWLEDGE TRANSFER\nFOR DEEP LEARNING FROM PRIVATE TRAINING DAT\u00a24", "section_text": "Nicolas Papernot*\nMartin Abadi\nPennsylvania State University\ngoodfellow@google.com\nSome machine learning applications involve training data that is sensitive, such\nas the medical histories of patients in a clinical trial. A model may inadvertently\nand implicitly store some of its training data; careful analysis of the model may\ntherefore reveal sensitive information.\nTo address this problem, we demonstrate a generally applicable approach to pro-\nviding strong privacy guarantees for training data: Private Aggregation of Teacher\nEnsembles (PATE). The approach combines, in a black-box fashion, multiple\nmodels trained with disjoint datasets, such as records from different subsets of\nusers. Because they rely directly on sensitive data, these models are not pub-\nlished, but instead used as \u201cteachers\u201d for a \u201cstudent\u201d model. The student learns\nto predict an output chosen by noisy voting among all of the teachers, and cannot\ndirectly access an individual teacher or the underlying data or parameters. The\nstudent\u2019s privacy properties can be understood both intuitively (since no single\nteacher and thus no single dataset dictates the student\u2019s training) and formally, in\nterms of differential privacy. These properties hold even if an adversary can not\nonly query the student but also inspect its internal workings."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Some machine learning applications with great benefits are enabled only through the analysis of\nsensitive data, such as users\u2019 personal contacts, private photographs or correspondence, or even\nmedical records or genetic sequences\n[Sweeney|[1997). Ideally, in those cases, the learning algorithms would protect the privacy of users\u2019\ntraining data, e.g., by guaranteeing that the output model generalizes away from the specifics of any\nindividual user. Unfortunately, established machine learning algorithms make no such guarantee;\nindeed, though state-of-the-art algorithms generalize well to the test set, they continue to overfit on\nspecific training examples in the sense that some of these examples are implicitly memorized.\nRecent attacks exploiting this implicit memorization in machine learning have demonstrated that\nprivate, sensitive training data can be recovered from models. Such attacks can proceed directly, by\nanalyzing internal model parameters, but also indirectly, by repeatedly querying opaque models to\ngather data for the attack\u2019s analysis. For example, [Fredrikson et al.|{2015) used hill-climbing on the\noutput probabilities of a computer-vision classifier to reveal individual faces from the training data.\n*Work done while the author was at Google.\ntWork done both at Google Brain and at OpenA:\nUlfar Erlingsson\nabadi@google.com\nulfar@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Compared with previous work, the approach imposes only weak assumptions on\nhow teachers are trained: it applies to any model, including non-convex models\nlike DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and\nSVHN thanks to an improved privacy analysis and semi-supervised learning.\nBecause of those demonstrations\u2014and because privacy guarantees must apply to worst-case out-\nliers, not only the average\u2014any strategy for protecting the privacy of training data should prudently\nassume that attackers have unfettered access to internal model parameters.\nTo protect the privacy of training data, this paper improves upon a specific, structured application oi\nthe techniques of knowledge aggregation and transfer (Breiman|[1994}, previously explored by[Nis-\n(2007), (2010), and particularly (2016). In this strategy, first.\nan SE emt of teacher models is trained on disjoint subsets of the sensitive data.\nThen, using auxiliary, unlabeled non-sensitive data, a student model is trained on the aggregate out-\nput of the ensemble, such that the student learns to accurately mimic the ensemble. Intuitively, this\nstrategy ensures that the student does not depend on the details of any single sensitive training data\npoint (e.g., of any single user), and, thereby, the privacy of the training data is protected even ii\nattackers can observe the student\u2019s internal model parameters.\nTo establish strong privacy guarantees, it is important to limit the student\u2019s access to its teachers\n30 that the student\u2019s exposure to teachers\u2019 knowledge can be meaningfully quantified and bounded\nFortunately, there are many techniques for speeding up knowledge transfer that can reduce the rate\nof student/teacher consultation during learning. We describe several techniques in this paper, the\nmost effective of which makes use of generative adversarial networks (GANs) (Goodfellow et al.\n2014) applied to semi-supervised learning, using the implementation proposed by|Salimans et al\n(2016). For clarity, we use the term PATE-G when our approach is combined with generative, semi-\nsupervised methods. Like all semi-supervised learning methods, PATE-G assumes the student ha:\naccess to additional, unlabeled data, which, in this context, must be public or non-sensitive. This\nassumption should not greatly restrict our method\u2019s applicability: even when learning on sensitive\ndata, a non-overlapping, unlabeled set of data often exists, from which semi-supervised methods car\nextract distribution priors. For instance, public datasets exist for text and images, and for medica\ndata.\nIt seems intuitive, or even obvious, that a student machine learning model will provide good privacy\nwhen trained without access to sensitive training data, apart from a few, noisy votes from a teacher\nquorum. However, intuition is not sufficient because privacy properties can be surprisingly hard\nto reason about; for example, even a single data item can greatly impact machine learning models\ntrained ona large corpus (Chaudhuri et al.|[2011). Therefore, to limit the effect of any single sensitive\ndata item on the student\u2019s learning, precisely and formally, we apply the well-established, rigorous\nstandard of differential privacy (Dwork & Roth|[2074). Like all differentially private algorithms, our\nlearning strategy carefully adds noise, so that the privacy impact of each data item can be analyzed\nand bounded. In particular, we dynamically analyze the sensitivity of the teachers\u2019 noisy votes;\nfor this purpose, we use the state-of-the-art moments accountant technique from|Abadi et al. (2016),\nwhich tightens the privacy bound when the topmost vote has a large quorum. As a result, for MNIST\nand similar benchmark learning tasks, our methods allow students to provide excellent utility, while\nour analysis provides meaningful worst-case guarantees. In particular, we can bound the metric for\nprivacy loss (the differential-privacy \u00a2) to a range similar to that of existing, real-world privacy-\nprotection mechanisms, such as Google\u2019s RAPPOR {Exlingsson et al.|[2014}.\nThis paper shows how this strategy\u2019s privacy guarantees can be strengthened by restricting student\nraining to a limited number of teacher votes, and by revealing only the topmost vote after care-\n\u2018ully adding random noise. We call this strengthened strategy PATE, for Private Aggregation oj\nfeacher Ensembles, Furthermore, we introduce an improved privacy analysis that makes the strat-\nogy generally applicable to machine learning algorithms with high utility and meaningful privacy\nzuarantees\u2014in particular, when combined with semi-supervised learning.\nFinally, it is an important advantage that our learning strategy and our privacy analysis do not depend\non the details of the machine learning techniques used to train either the teachers or their student.\nTherefore, the techniques in this paper apply equally well for deep learning methods, or any such\nlearning methods with large numbers of parameters, as they do for shallow, simple techniques.\nIn comparison, guarantee privacy only conditionally, for a restricted class of\nstudent classifiers\u2014in effect, limiting applicability to logistic regression with convex loss. Also,\nunlike the methods of {2016}, which represent the state-of-the-art in differentially-\nprivate deep learning, our techniques make no assumptions about details such as batch selection, the\nloss function, or the choice of the optimization algorithm, Even so, as we show in experiments on\nFigure 1: Overview of the approach: (1) an ensemble of teachers is trained on disjoint subsets of the\nsensitive data, (2) a student model is trained on public data labeled using the ensemble.\nMNIST and SVHN, our techniques provide a privacy/utility tradeoff that equals or improves upor\nbespoke learning methods such as those of[Abadi et al.}{2016).\nSection[5] further discusses the related work. Building on this related work, our contributions are as\nfollows:\nOur results are encouraging, and highlight the benefits of combining a learning strategy based on\nsemi-supervised knowledge transfer with a precise, data-dependent privacy analysis. However, the\nmost appealing aspect of this work is probably that its guarantees can be compelling to both an expert\nand a non-expert audience. In combination, our techniques simultaneously provide both an intuitive\nand a rigorous guarantee of training data privacy, without sacrificing the utility of the targeted model.\nThis gives hope that users will increasingly be able to confidently and safely benefit from machine\nlearning models built from their sensitive data.\nIn this section, we introduce the specifics of the PATE approach, which is illustrated in Figure]\nWe describe how the data is partitioned to train an ensemble of teachers, and how the predictions\nmade by this ensemble are noisily aggregated. In addition, we discuss how GANs can be used in\ntraining the student, and distinguish PATE-G variants that improve our approach using generative.\nsemi-supervised methods.\nNot accessible by ativersary I Accessible by adversary\np\nSensitive * Aggregate .\n> tudent 4: + Queries\n\nData - p Teacher I \u00b0\n\\ 4 Predicted 4 Incomplete\nP completion [| Public Data\n\n\u2014_ Training sees \u00abBe Prediction \u2014-\u2014 > Data feeding\ne We demonstrate a general machine learning strategy, the PATE approach, that provides dif-\nferential privacy for training data in a \u201cblack-box\u201d manner, i.e., independent of the learning\nalgorithm, as demonstrated by Section[4]and Appendix[C]\n\ne We improve upon the strategy outlined in[Hamm et al,] (2016) for learning machine models\nthat protect training data privacy. In particular, our student only accesses the teachers\u2019 top\nvote and the model does not need to be trained with a restricted class of convex losses.\n\ne We explore four different approaches for reducing the student\u2019s dependence on its teachers,\nand show how the application of GANs to semi-supervised learning of\ncan greatly reduce the privacy loss by radically reducing the need for supervision.\n\nWe present a new application of the moments accountant technique from[Abadi et al. {2016}\nfor improving the differential-privacy analysis of knowledge transfer, which allows the\ntraining of students with meaningful privacy bounds.\n\ne We evaluate our framework on MNIST and SVHN, allowing for a comparison of our results\nwith previous differentially private machine learning methods. Our classifiers achieve an\n(e, 6) differential-privacy bound of (2.04, 10~*) for MNIST and (8.19, 10~\u00b0) for SVHN,\nrespectively with accuracy of 98.00% and 90.66%. In comparison, for MNIST,\not ci a looser (8, 1075) privacy bound and 97% accuracy. For et Ree\nreport approx. 92% accuracy with \u00a2 > 2 per each of 300,000 model pa-\nrameters, naively making the total \u00a2 > 600,000, which guarantees no meaningful privacy.\n\ne Finally, we show that the PATE approach can be successfully applied to other model struc-\ntures and to datasets with different characteristics. In particular, in Appendix |C] PATE\nprotects the privacy of medical data used to train a model based on random forests.\nData partitioning and teachers: Instead of training a single model to solve the task associated with\ndataset (X,Y), where X denotes the set of inputs, and Y the set of labels, we partition the data inn\ndisjoint sets (X,,, Y;,) and train a model separately on each set. As evaluated in SectionfA.1 assum-\ning that 7 is not too large with respect to the dataset size and task complexity, we obtain n classifiers\nf; called teachers. We then deploy them as an ensemble making predictions on unseen inputs x by\nquerying each teacher for a prediction f;(x) and aggregating these into a single prediction.\nAggregation: The privacy guarantees of this teacher ensemble stems from its aggregation. Let m\nbe the number of classes in our task. The label count for a given class \u00a5 \u20ac [rm] and an input 2 is\nthe number of teachers that assigned class j to input Z: nj;(#) = |{i: 4 \u20ac [n], f,(@) = j}|. If we\nsimply apply plurality\u2014use the label with the largest count\u2014the ensemble\u2019s decision may depend\non a single teacher\u2019s vote. Indeed, when two labels have a vote count differing by at most one, there\nis a tie: the aggregated output changes if one teacher makes a different prediction. We add random\nnoise to the vote counts 7, to introduce ambiguity:\nWhile we could use an f such as above to make predictions, the noise required would increase as we\nmake more predictions, making the model useless after a bounded number of queries. Furthermore.\nprivacy guarantees do not hold when an adversary has access to the model parameters. Indeed.\nas each teacher f; was trained without taking into account privacy, it is conceivable that they have\nsufficient capacity to retain details of the training data. To address these limitations, we train anothei\nmodel, the student, using a fixed number of labels predicted by the teacher ensemble.\nWe train a student on nonsensitive and unlabeled data, some of which we label using the aggregation\nmechanism. This student model is the one deployed, in lieu of the teacher ensemble, so as to fix the\nprivacy loss to a value that does not grow with the number of user queries made to the student model.\nIndeed, the privacy loss is now determined by the number of queries made to the teacher ensemble\nduring student training and does not increase as end-users query the deployed student model. Thus,\nthe privacy of users who contributed to the original training dataset is preserved even if the student\u2019s\narchitecture and parameters are public or reverse-engineered by an adversary.\nWe considered several techniques to trade-off the student model\u2019s quality with the number of labels\nit needs to access: distillation, active learning, semi-supervised learning (see Appendix[B). Here, we\nonly describe the most successful one, used in PATE-G: semi-supervised learning with GANs.\nTraining the student with GANs: The GAN framework involves two machine learning models,\na generator and a discriminator. They are trained in a competing fashion, in what can be viewed\nas a two-player game (2014. The generator produces samples from the data\ndistribution by transforming vectors sampled from a Gaussian distribution. The discriminator is\ntrained to distinguish samples artificially produced by the generator from samples part of the real\ndata distribution. Models are trained via simultaneous gradient descent steps on both players\u2019 costs.\nIn practice, these dynamics are often difficult to control when the strategy set is non-convex (e.g., a\nDNN). In their application of GANs to semi-supervised learning, [Salimans et al.| (2016) made the\nfollowing modifications. The discriminator is extended from a binary classifier (data vs. generator\nsample) to a multi-class classifier (one of k classes of data samples, plus a class for generated\nsamples). This classifier is then trained to classify labeled real samples in the correct class, unlabeled\nreal samples in any of the k classes, and the generated samples in the additional class.\nIn this equation, \u00a5 is a privacy parameter and Lap(b) the Laplacian distribution with location 0 and\nscale 6. The parameter + influences the privacy guarantee we can prove. Intuitively, a large -y leads\nto a strong privacy guarantee, but can degrade the accuracy of the labels, as the noisy maximum f\nabove can differ from the true plurality.\nAlthough no formal results currently explain why yet, the technique was empirically demonstrated\nto greatly improve semi-supervised learning of classifiers on several datasets, especially when the\nclassifier is trained with feature matching loss (Salimans et al.|[2016}.\nTraining the student in a semi-supervised fashion makes better use of the entire data available to the\nstudent, while still only labeling a subset of it. Unlabeled inputs are used in unsupervised learning\nto estimate a good prior for the distribution. Labeled inputs are then used for supervised learning.\nWe now analyze the differential privacy guarantees of our PATE approach. Namely, we keep track\nof the privacy budget throughout the student\u2019s training using the moments accountant qAbadi et al.\n2016). When teachers reach a strong quorum, this allows us to bound privacy costs more strictly.\nDifferential privacy has established itself as a strong standard\nIt provides privacy guarantees for algorithms analyzing databases, which in our case is a machin\nlearning training algorithm processing a training dataset. Differential privacy is defined using pair:\nof adjacent databases: in the present work, these are datasets that only differ by one training example\nRecall the following variant of differential privacy introduced in/Dwork et al.|(2006a).\nDefinition 1. A randomized mechanism M with domain D and range R satisfies (e, 6)-differential\nprivacy if for any two adjacent inputs d, d' \u20ac D and for any subset of outputs S C R it holds that:\nThe privacy loss random variable C(M, aux, d, d') is defined as e(M(d); M, aux, d, d\u2019), ie. the\nrandom variable defined by evaluating the privacy loss at an outcome sampled from M(d).\nA natural way to bound our approach\u2019s privacy loss is to first bound the privacy cost of each label\nqueried by the student, and then use the strong composition theorem to derive\nthe total cost of training the student. For neighboring databases d, d\u2019, each teacher gets the same\ntraining data partition (that is, the same for the teacher with d and with d\u2019, not the same across\nteachers), with the exception of one teacher whose corresponding training data partition differs.\nTherefore, the label counts n;(Z) for any example @, on d and d\u2019 differ by at most 1 in at most two\nlocations. In the next subsection, we show that this yields loose guarantees.\nTo better keep track of the privacy cost, we use recent advances in privacy cost accounting. The\nmoments accountant was introduced by |Abadi et al.| (2016), building on previous work (Bun &\n2016} |[Dwork & Rothblum| [2016 2016).\nDefinition 3. Let M: D \u2014 R be a randomized mechanism and d, d' a pair of adjacent database.\nLet aux denote an auxiliary input. The moments accountant is defined as:\nThe following properties of the moments accountant are proved in|Abadi et al.]{2016).\nPr[M(d) \u20ac $] < e\u00ae Pr[M(d\u2019) \u20ac S] +5.\nnd Pr[M (aux, d) = o] /\nclo; M, aux, d,d\u2019) = log Pri M(aux, @!) = of\nana(A) & max, am (A; aux, dd\u2019)\nwhere a:n(A; aux, d, d\u2019) 3 log E[exp(AC'(M, aux, d, d\u2019))] is the moment generating function of\nthe privacy loss random variable.\nTheorem 1. 1. [Composability] Suppose that a mechanism M consists of a sequence of adap-\ntive mechanisms Mj,..., Mz where Mj: Wa Ry x D \u2014 R,. Then, for any output sequence\nO1,-+-,0%\u20141 and any\nk\nam(A;d,d\u2019) = Sram 01,-.-, 04-1, 4,d')\ni=1\n\u00e96= min exp(aai() \u2014 Xe).\nWe write down two important properties of the aggregation mechanism from Section eB The first\nproperty is proved in[Dwork & Roth] {2014}, and the second follows from[Bun & Steinke|{2016}.\nThe following theorem, proved in Appendix [A] provides a data-dependent bound on the moments\nof any differentially private mechanism where some specific outcome is very likely.\n1- t\narts aux, dd\u2019) <log((1\u2014 a) (Gag) + aexP2y))-\nTo upper bound g for our aggregation mechanism, we use the following simple lemma, also proved\nin Appendix[A]\nLemma 4. Let n be the label score vector for a database d with nj\u00bb > n; for all j. Then\nThis allows us to upper bound g for a specific score vector n, and hence bound specific moments, We\ntake the smaller of the bounds we get from Theorems|2]and[3] We compute these moments for a few\nvalues of A (integers up to 8). Theorem[I|allows us to add these bounds over successive steps, and\nderive an (\u20ac, 5) guarantee from the final a. Interested readers are referred to the script that we used\nto empirically compute these bounds, which is released along with our code: [https://qithub.\na(l; aux, d,d\u2019) < 2771(1+ 1)\nAt each step, we use the aggregation mechanism with noise Lap(+) which is (27, 0)-DP. Thus over\nT steps, we get (4T'y? + 2y,/2T In 4, 6)-differential privacy. This can be rather large: plugging\nin values that correspond to our SVHN result, y = 0.05, 7 = 1000, 6 = le\u20146 gives us e ~ 26 or\nalternatively plugging in values that correspond to our MNIST result, y = 0.05, T = 100, 6 = 1le\u20145\ngives us \u20ac 5.80.\nOur data-dependent privacy analysis takes advantage of the fact that when the quorum among the\nteachers is very strong, the majority outcome has overwhelming likelihood, in which case the pri-\nvacy cost is small whenever this outcome occurs. The moments accountant allows us analyze the\ncomposition of such mechanisms in a unified framework.\n2+ ylnje \u2014 05)\nPM) APS D Fexpeyenys = 4))\nSince the privacy moments are themselves now data dependent, the final \u00a2 is itself data-dependen'\nand should not be revealed. To get around this, we bound the smooth sensitivity\nof the moments and add noise proportional to it to the moments themselves. This gives us <\ndifferentially private estimate of the privacy cost. Our evaluation in Section[JJignores this overheac\nand reports the un-noised values of \u00a2. Indeed, in our experiments on MNIST and SVHN, the scale\nof the noise one needs to add to the released \u00a2 is smaller than 0.5 and 1.0 respectively.\nHow does the number of teachers affect the privacy cost? Recall that the student uses a noisy label\ncomputed in (1) which has a parameter -y. To ensure that the noisy label is likely to be the correct\none, the noise scale ; should be small compared to the the additive gap between the two largest\nvales of n;. While the exact dependence of y on the privacy cost in TheoremBlis subtle, as a general\nprinciple, a smaller -y leads to a smaller privacy cost. Thus, a larger gap translates to a smaller\nprivacy cost. Since the gap itself increases with the number of teachers, having more teachers would\nlower the privacy cost. This is true up to a point. With n teachers, each teacher only trains on a 2\nfraction of the training data, For large enough n, each teachers will have too little training data to be\naccurate."}, {"section_index": "3", "section_name": "4 EVALUATION", "section_text": "Tn our evaluation of PATE and its generative variant PATE-G, we first train a teacher ensemble for\neach dataset. The trade-off between the accuracy and privacy of labels predicted by the ensemble\nis greatly dependent on the number of teachers in the ensemble: being able to train a large set of\nteachers is essential to support the injection of noise yielding strong privacy guarantees while having\na limited impact on accuracy. Second, we minimize the privacy budget spent on learning the student\nby training it with as few queries to the ensemble as possible.\nOur experiments use MNIST and the extended SVHN datasets. Our MNIST model stacks two\nconvolutional layers with max-pooling and one fully connected layer with ReLUs. When trained on\nthe entire dataset, the non-private model has a 99.18% test accuracy. For SVHN, we add two hidden\nlayers|]] The non-private model achieves a 92.8% test accuracy, which is shy of the state-of-the-art.\nHowever, we are primarily interested in comparing the private student\u2019s accuracy with the one of a\nnon-private model trained on the entire dataset, for different privacy guarantees. The source code\nfor reproducing the results in this section is available on GitHub]\nAs mentioned above, compensating the noise introduced by the Laplacian mechanism presented it\nEquation[]requires large ensembles. We evaluate the extent to which the two datasets considered cat\nbe partitioned with a reasonable impact on the performance of individual teachers, Specifically, we\nshow that for MNIST and SVHN, we are able to train ensembles of 250 teachers. Their aggregatec\npredictions are accurate despite the injection of large amounts of random noise to ensure privacy\nThe aggregation mechanism output has an accuracy of 93.18% for MNIST and 87.79% for SVHN\nwhen evaluated on their respective test sets, while each query has a low privacy budget of e = 0.05\nPrediction accuracy: All other things being equal, the number 7 of teachers is limited by a trade-\noff between the classification task\u2019s complexity and the available data. We train n teachers by\npartitioning the training data n-way. Larger values of n lead to larger absolute gaps, hence poten-\ntially allowing for a larger noise level and stronger privacy guarantees. At the same time, a larger\nn implies a smaller training dataset for each teacher, potentially reducing the teacher accuracy. We\nempirically find appropriate values of n for the MNIST and SVHN datasets by measuring the test\nTo conclude, we note that our analysis is rather conservative in that it pessimistically assumes that,\neven if just one example in the training set for one teacher changes, the classifier produced by that\nteacher may change arbitrarily. One advantage of our approach, which enables its wide applica-\nbility, is that our analysis does not require any assumptions about the workings of the teachers.\nNevertheless, we expect that stronger privacy guarantees may perhaps be established in specific\nsettings\u2014when assumptions can be made on the learning algorithm used to train the teachers.\n100,\ngs \u2014\u2014\u2014\u2014\nov fr fo\" .\n& 80} of of oe\n2 ot fy Ce\nBooty Cee\n& sold | Li-*) = MNIST (n=10)\nBale lm \u00bb\u2014\u00ab MNIST (n=100)\n$ \u201cry ie \u00bb\u2014* MNIST (n=250)\n> 30t] fo >\u00bb SVHN (n=10)\nSag i ho\u00bb SVHN (n=100}\ng ee poe SVHN (n=250}\n\n486 OL 0.2 03 0.4 Os\n\nYY per label query\nFigure 2: How much noise can be injected\nto a query? Accuracy of the noisy aggrega-\ntion for three MNIST and SVHN teacher en-\nsembles and varying value per query. The\nnoise introduced to achieve a given +y scales\ninversely proportionally to the value of +:\nsmall values of -y on the left of the axis corre-\nspond to large noise amplitudes and large -y\nvalues on the right to small noise.\nPrediction confidence: As outlined in Section[3} the privacy of predictions made by an ensembl\nof teachers intuitively requires that a quorum of teachers generalizing well agree on identical labels\nThis observation is reflected by our data-dependent privacy analysis, which provides stricter privac\nbounds when the quorum is strong. We study the disparity of labels assigned by teachers. In othe\nwords, we count the number of votes for each possible label, and measure the difference in vote\nbetween the most popular label and the second most popular label, i-e., the gap. If the gap is smal:\nintroducing noise during aggregation might change the label assigned from the first to the seconc\nFigure [3] shows the gap normalized by the total number of teachers n. As n increases, the ga\nremains larger than 60% of the teachers, allowing for aggregation mechanisms to output the correc\nlabel in the presence of noise.\nNoisy aggregation: For MNIST and SVHN, we consider three ensembles of teachers with varying\nnumber of teachers n \u20ac {10, 100, 250}. For each of them, we perturb the vote counts with Laplacian\nnoise of inversed scale -y ranging between 0.01 and 1. This choice is justified below in Section[.2|\nWe report in Figure[|the accuracy of test set labels inferred by the noisy aggregation mechanism for\nthese values of \u00a2. Notice that the number of teachers needs to be large to compensate for the impact\nof noise injection on the accuracy.\n100,\ng\n2\n3\nSs 80\n3\n\u00a3\n6\n5 60\n3\n=e\n5\n2\nB 40\n3\n3\nBX\n3\nE 20\n2 |{c nist\n& |[co sven\noboe oe\n1 2 3 4 5 10 25 50 100 250\nSlumhar af taachare\nFigure 3: How certain is the aggregation of\nteacher predictions? Gap between the num-\nber of votes assigned to the most and second\nmost frequent labels normalized by the num-\nber of teachers in an ensemble. Larger gaps\nindicate that the ensemble is confident in as-\nsigning the labels, and will be robust to more\nnoise injection. Gaps were computed by av-\neraging over the test data.\nset accuracy of each teacher trained on one of the n partitions of the training data. We find that even\nfor n = 250, the average test accuracy of individual teachers is 83.86% for MNIST and 83.18% for\nSVHN. The larger size of SVHN compensates its increased task complexity.\nThe noisy aggregation mechanism labels the student\u2019s unlabeled training set in a privacy-preserving\nfashion. To reduce the privacy budget spent on student training, we are interested in making as few\nlabel queries to the teachers as possible. We therefore use the semi-supervised training approach de-\nscribed previously. Our MNIST and SVHN students with (<, 6) differential privacy of (2.04, 10-5)\nand (8.19, 10\u2014\u00b0) achieve accuracies of 98.00% and 90.66%. These results improve the differential\nprivacy state-of-the-art for these datasets. previously obtained 97% accurac\nwith a (8, 10\u2014\u00b0) bound on MNIST, starting from an inferior baseline model without privacy. [Shokri\n& Shmatikov reported about 92% accuracy on SVHN with \u00a2 > 2 per model parameter and a\nmodel with over 300,000 parameters. Naively, this corresponds to a total e > 600,000.\nFigure 4: Utility and privacy of the semi-supervised students: each row is a variant of the stu\ndent model trained with generative adversarial networks in a semi-supervised way, with a differen\nnumber of label queries made to the teachers through the noisy aggregation mechanism. The las\ncolumn reports the accuracy of the student and the second and third column the bound \u00a2 and failure\nprobability 6 of the (\u00a2, 5) differential privacy guarantee.\nWe apply semi-supervised learning with GANs to our problem using the following setup for eacl\ndataset. In the case of MNIST, the student has access to 9,000 samples, among which a subse\nof either 100, 500, or 1,000 samples are labeled using the noisy aggregation mechanism discusse:\nin Section 2.1] Its performance is evaluated on the 1,000 remaining samples of the test set. Not\nthat this may increase the variance of our test set accuracy measurements, when compared to thos:\ncomputed over the entire test data. For the MNIST dataset, we randomly shuffle the test set to ensun\nthat the different classes are balanced when selecting the (small) subset labeled to train the student\nFor SVHN, the student has access to 10,000 training inputs, among which it labels 500 or 1,00!\nsamples using the noisy aggregation mechanism. Its performance is evaluated on the remainin;\n16,032 samples. For both datasets, the ensemble is made up of 250 teachers. We use Laplacian scal:\nof 20 to guarantee an individual query privacy bound of \u00a2 = 0.05. These parameter choices ar\nmotivated by the results from Section[4.1]\nIn Figure [A] we report the values of the (\u00a2, 6) differential privacy guarantees provided and the cor-\nresponding student accuracy, as well as the number of queries made by each student. The MNIST\nstudent is able to learn a 98% accurate model, which is shy of 1% when compared to the accuracy\nof a model learned with the entire training set, with only 100 label queries. This results in a strict\ndifferentially private bound of \u00a2 = 2.04 for a failure probability fixed at 10\u2014>. The SVHN stu-\ndent achieves 90.66% accuracy, which is also comparable to the 92.80% accuracy of one teacher\nlearned with the entire training set. The corresponding privacy bound is \u00a2 = 8.19, which is higher\nthan for the MNIST dataset, likely because of the larger number of queries made to the aggregation\nmechanism,\nSeveral privacy definitions are found in the literature. For instance, k-anonymity requires information\nabout an individual to be indistinguishable from at least k \u2014 1 other individuals in the dataset CC\n[Sweeney][2002}. However, its lack of randomization gives rise to caveats (Dwork & Roth] 2014}, and\nattackers can infer properties of the dataset (Aggarwal][2005). An alternative definition, differential\nprivacy, established itself as a rigorous standard for providing privacy guarantees\n[2006b). In contrast to k-anonymity, differential privacy is a property of the randomized algorithm\nand not the dataset itself.\nA variety of approaches and mechanisms can guarantee differential privacy.\nshowed that randomized response, introduced. by [Warner] {1965}, can protect crowd-sourced data\ncollected from software users to compute statistics about user behaviors. Attempts to provide dif-\nferential privacy for machine learning models led to a series of efforts on shallow machine learning\nmodels, including work by (2014); (2009); [Pathak et al.\n2011}; [Song et al.]{2013), and[Wainwright et al.](2012)-\nDataset |< 6 Queries | Non-Private Baseline | Student Accuracy\nMNIST 98.00%\nMNIST 98.10%\nSVHN eM 82.72%\nSVHN | 8.19 | 1076 1000 92.80% 90.66%\nWe observe that our private student outperforms the aggregation\u2019s output in terms of accuracy, with\nor without the injection of Laplacian noise. While this shows the power of semi-supervised learning,\nthe student may not learn as well on different kinds of data (e.g., medical data), where categories are\nnot explicitly designed by humans to be salient in the input space. Encouragingly, as Appendix[C\nillustrates, the PATE approach can be successfully applied to at least some examples of such data.\nA privacy-preserving distributed SGD algorithm was introduced by[Shokri & Shmatikov| (2015p. I\napplies to non-convex models. However, its privacy bounds are given per-parameter, and the larg\u00ab\nnumber of parameters prevents the technique from providing a meaningful privacy guarantee. [Abad\nprovided stricter bounds on the privacy loss induced by a noisy SGD by introducing the\nmoments accountant. In comparison with these efforts, our work increases the accuracy of a private\nMNIST model from 97% to 98% while improving the privacy bound \u00a2 from 8 to 1.9. Furthermore\nthe PATE approach is independent of the learning algorithm, unlike this previous work. Suppor\nfor a wide range of architecture and training algorithms allows us to obtain good privacy bound:\non an accurate and private SVHN model. However, this comes at the cost of assuming that non:\nprivate unlabeled data is available, an assumption that is not shared by Shokri &\nShmatikov][2015}.\nfirst discussed secure multi-party aggregation of locally trained classifiers for a\nglobal classifier hosted by a trusted third-party. proposed the use of knowledge\ntransfer between a collection of models trained on individual devices into a single model guaran-\nteeing differential privacy. Their work studied linear student models with convex and continuously\ndifferentiable losses, bounded and c-Lipschitz derivatives, and bounded features. The PATE ap-\nproach of this paper is not constrained to such applications, but is more generally applicable.\nPrevious work also studied semi-supervised knowledge transfer from private models. For instance\nlearned privacy-preserving random forests. A key difference is that thei\napproach is tailored to decision trees. PATE works well for the specific case of decision trees, a:\ndemonstrated in Appendix[C] and is also applicable to other machine learning algorithms, including\nmore complex ones. Another key difference is that [Tagannathan et al] (2013) modified the classic\nmodel of a decision tree to include the Laplacian mechanism. Thus, the privacy guarantee does\nnot come from the disjoint sets of training data analyzed by different decision trees in the randor\nforest, but rather from the modified architecture. In contrast, partitioning is essential to the privacy\nguarantees of the PATE approach.\nTo protect the privacy of sensitive training data, this paper has advanced a learning strategy and <\ncorresponding privacy analysis. The PATE approach is based on knowledge aggregation and transfei\nfrom \u201cteacher\u201d models, trained on disjoint data, to a \u201cstudent\u201d model whose attributes may be mad\u00ab\npublic. In combination, the paper\u2019s techniques demonstrably achieve excellent utility on the MNIST\nand SVHN benchmark tasks, while simultaneously providing a formal, state-of-the-art bound or\nusers\u2019 privacy loss. While our results are not without limits\u2014e.g., they require disjoint training\ndata for a large number of teachers (whose number is likely to increase for tasks with many outpu\nclasses)\u2014they are encouraging, and highlight the advantages of combining semi-supervised learn.\ning with precise, data-dependent privacy analysis, which will hopefully trigger further work. Ir\nparticular, such future work may further investigate whether or not our semi-supervised approact\nwill also reduce teacher queries for tasks other than MNIST and SVHN, for example when the\ndiscrete output categories are not as distinctly defined by the salient input space features.\nA key advantage is that this paper\u2019s techniques establish a precise guarantee of training data pri-\nvacy in a manner that is both intuitive and rigorous. Therefore, they can be appealing, and easily\nexplained, to both an expert and non-expert audience. However, perhaps equally compelling are the\ntechniques\u2019 wide applicability. Both our learning approach and our analysis methods are \u201cblack-\nbox,\u201d i.e., independent of the learning algorithm for either teachers or students, and therefore apply,\nin general, to non-convex, deep learning, and other learning methods. Also, because our techniques\ndo not constrain the selection or partitioning of training data, they apply when training data is natu-\nrally and non-randomly partitioned\u2014e.g., because of privacy, regulatory, or competitive concerns\u2014\nor when each teacher is trained in isolation, with a different method. We look forward to such further\napplications, for example on RNNs and other sequence-based models."}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "Nicolas Papernot is supported by a Google PhD Fellowship in Security. The authors would like tc\nthank Ilya Mironov and Li Zhang for insightful discussions about early drafts of this document."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Dana Angluin. Queries and concept learning. Machine learning, 2(4):319-342, 1988.\nRaef Bassily, Adam Smith, and Abhradeep Thakurta. Differentially private empirical risk minimiza-\ntion: efficient algorithms and tight error bounds. arXiv preprint arXiv: 1405.7085, 2014.\nEric B Baum. Neural net algorithms that learn in polynomial time from examples and queries. /EE]\nTransactions on Neural Networks, 2(1):5-19, 1991.\nLeo Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1994.\nKamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical\nrisk minimization. Journal of Machine Learning Research, 12(Mar):1069-1109, 2011.\nThomas G Dietterich. Ensemble methods in machine learning. In International workshop on multi.\nple classifier systems, pp. 1-15. Springer, 2000.\nJane Bromley, James W Bentz, L\u00e9on Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard\nSackinger, and Roopak Shah. Signature verification using a \u201cSiamese\u201d time delay neural network.\nTnternatianal Tournal af Pattorn Recnonition and Artificial Intelliconce THMAY6AO6RR 1903\nCynthia Dwork. A firm foundation for private data analysis. Communications of the ACM, 54(1):\n86-95, 2011.\n\nCynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations\nand Trends in Theoretical Computer Science, 9(3-4):21 1-407, 2014.\n\nCynthia Dwork and Guy N Rothblum. Concentrated differential privacy. arXiv preprint\narXiv: 1603.01887, 2016.\n\nCynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data,\nourselves: privacy via distributed noise generation. In Advances in Cryptology-EUROCRYPT\n2006, pp. 486-503. Springer, 2006a.\n\nCynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity\nin private data analysis. In Theory of Cryptography, pp. 265-284. Springer, 2006b.\n\nCynthia Dwork, Guy N Rothblum, and Salil Vadhan. Boosting and differential privacy. In Pro-\nceedings of the 51st IEEE Symposium on Foundations of Computer Science, pp. 51-60. IEEE,\n2010.\n\nUlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable\nprivacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on\nComputer and Communications Security. pp. 1054-1067. ACM, 2014.\nGeoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXi\npreprint arXiv: 150302531, 2015.\nIgor Kononenko. Machine learning for medical diagnosis: history, state of the art and perspective.\nArtificial Intelligence in medicine, 23(1):89-109, 2001.\nIlya Mironov. Renyi differential privacy. manuscript, 2016.\nJason Poulos and Rafael Valle. Missing data imputation for supervised learning. arXiv preprint\narXiv: 1610.09075, 2016.\nihun Hamm, Paul Cao, and Mikhail Belkin, Learning privately from multiparty data. arXiv preprini\narXiv: 1602 03952. 2016\nfim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.\nImproved techniques for training GANs. arXiv preprint arXiv: 1606.03498, 2016.\nStanley L Warner. Randomized response: A survey technique for eliminating evasive answer bias.\nJournal of the American Statistical Association, 60(309):63-69, 1965.\n1- t\nails aux, dd\u2019) < log((1\u2014 (Gag) + 2exP2w).\nf(z) =(1-2) (73) + 267,\nWe next argue that this function is non-decreasing in (0, =4) under the conditions of the lemma.\nTowards this goal, define\nl-w)! ry\na() = (1-2)(s\u2014) +2e\"\",\"\nLemma|4] Let n be the label score vector for a database d with nj\u00bb > 13 for all j. Then\n24+ 9(nj\u00ab \u2014 75)\nPrIM(@) #7] s us 4exp(y(nj- \u2014 3)\noo\n[ (y+ lae\u2122Hlye dy = \u2014 [ a\n_ ye? dy =\ny=0 Y elzl [@ + yla|je\" dy = 1+ Ie|\ny= 4elel\n(Pua =o J!\napla(lsaux, da\u2019) =) PIM(d) = ol Sia @y =o) Pr[M(@) = 0]!\n3 wf Pr{M(d) = 0\u00b0] y + $2 Pr[M(@) = AM eMa)y Sol ayaa\n\n= PIM) =0'l( Saya) aor] x\n\n=\u00a2 )! [MM (d) = o](e77)!\n<(0-\u00a2)(35) +> Pe\n\n-\u00a2 ! fay\n< (Gay) +gem.\nand observe that f(z) = g(z,z). We can easily verify by differentiation that g(z, w) is increasing\nindividually in z and in w in the range of interest. This implies that f(g\u2019) < f(q) completing the\nproof. Oo\nProof. The probability that nj+ + Lap(+) <njt Lap(+) is equal to the probability that the sum\nof two independent Lap(1) random variables exceeds y(nj\u00ab \u2014 nj). The sum of two independent\nLap(1) variables has the same distribution as the difference of two Gamma(2, 1) random variables.\nRecalling that the Gamma(2, 1) distribution has pdf ze~\u201c, we can compute the pdf of the difference\nvia convolution as\nTn this appendix, we describe approaches that were considered to reduce the number of queries made\nto the teacher ensemble by the student during its training. As pointed out in Sections [3] andj] this\neffort is motivated by the direct impact of querying on the total privacy cost associated with student\ntraining. The first approach is based on distillation, a technique used for knowledge transfer and\nmodel compression (2015). The three other techniques considered were proposed\nin the context of active learning, with the intent of identifying training examples most useful for\nlearning. In Sections[2]andA] we described semi-supervised learning, which yielded the best results.\nThe student models in this appendix differ from those in Sections[2|and[4] which were trained using\nGANs. In contrast, all students in this appendix were learned in a fully supervised fashion from\na subset of public, labeled examples. Thus, the learning goal was to identify the subset of labels\nvielding the best learning performance."}, {"section_index": "6", "section_name": "B.1 TRAINING STUDENTS USING DISTILLATION", "section_text": "Distillation is a knowledge transfer technique introduced as a means of compressing large model:\ninto smaller ones, while retaining their accuracy (Bucilua et al,][2006}[Hinton et al.][2015). This is for\ninstance useful to train models in data centers before deploying compressed variants in phones. Th\u00ab\ntransfer is accomplished by training the smaller model on data that is labeled with probability vector:\nproduced by the first model, which encode the knowledge extracted from training data. Distillatior\nis parameterized by a temperature parameter T', which controls the smoothness of probabilitie:\noutput by the larger model: when produced at small temperatures, the vectors are discrete, wherea:\nat high temperature, all classes are assigned non-negligible values. Distillation is a natural candidate\nto compress the knowledge acquired by the ensemble of teachers, acting as the large model, into \u00ab\nstudent. which is much smaller with n times less trainable parameters compared to the n teachers.\nTo evaluate the applicability of distillation, we consider the ensemble of n = 50 teachers for SVHN.\nTn this experiment, we do not add noise to the vote counts when aggregating the teacher predictions.\nWe compare the accuracy of three student models: the first is a baseline trained with labels obtained\nby plurality, the second and third are trained with distillation at T \u20ac {1,5}. We use the first 10,00\u20ac\nsamples from the test set as unlabeled data. Figure[5] reports the accuracy of the student model on\nthe last 16,032 samples from the test set, which were not accessible to the model during training. Il\nis plotted with respect to the number of samples used to train the student (and hence the number of\nqueries made to the teacher ensemble). Although applying distillation yields classifiers that perform\nmore accurately, the increase in accuracy is too limited to justify the increased privacy cost of re-\nvealing the entire probability vector output by the ensemble instead of simply the class assigned the\nlargest number of votes. Thus, we turn to an investigation of active learning."}, {"section_index": "7", "section_name": "B.2 ACTIVE LEARNING OF THE STUDENT", "section_text": "Active learning is a class of techniques that aims to identify and prioritize points in the student\u2019:\ntraining set that have a high potential to contribute to leaming {Angluin| [1988] [Baum] [T991). If the\nlabel of an input in the student\u2019s training set can be predicted confidently from what we have learnec\nso far by querying the teachers, it is intuitive that querying it is not worth the privacy budget spent\nIn our experiments, we made several attempts before converging to a simpler final formulation.\nSiamese networks: Our first attempt was to train a pair of siamese networks, introduced by|Brom-\nley etal. in the context of one-shot learning and later improved by[Koch]{2015). The siamese\nnetworks take two images as input and return 1 if the images are equal and 0 otherwise. They are\ntwo identical networks trained with shared parameters to force them to produce similar represen.\ntations of the inputs, which are then compared using a distance metric to determine if the image:\nare identical or not. Once the siamese models are trained, we feed them a pair of images where\nthe first is unlabeled and the second labeled. If the unlabeled image is confidently matched with <\nknown labeled image, we can infer the class of the unknown image from the labeled image. In ow\n2xperiments, the siamese networks were able to say whether two images are identical or not, but dic\nnot generalize well: two images of the same class did not receive sufficiently confident matches. We\nalso tried a variant of this approach where we trained the siamese networks to output 1 when the twe\n90\n\n85 yo\no (\n\u00a9 80\n2\n3\no\na 75\na\n\u00a3\n\u2018S\no\n70\na\n\n65 x\u2014x Distilled Vectors\n\n\u00bb\u2014x Labels only\nx\u2014< Distilled Vectors at T=5\n60\n0 2000 4000 6000 8000 100\u00a2\nStudent share of samples in SVHN test set (out of 26032)\nimages are of the same class and 0 otherwise, but the learning task proved too complicated to be a\neffective means for reducing the number of queries made to teachers.\nCollection of binary experts: Our second attempt was to train a collection of binary experts, one\nper class. An expert for class 7 is trained to output 1 if the sample is in class 7 and 0 otherwise\nWe first trained the binary experts by making an initial batch of queries to the teachers. Using\nthe experts, we then selected available unlabeled student training points that had a candidate labe\nscore below 0.9 and at least 4 other experts assigning a score above 0.1. This gave us about 50(\nunconfident points for 1700 initial label queries, After labeling these unconfident points using the\nensemble of teachers, we trained the student. Using binary experts improved the student\u2019s accuracy\nwhen compared to the student trained on arbitrary data with the same number of teacher queries\nThe absolute increases in accuracy were however too limited\u2014between 1.5% and 2.5%.\nIdentifying unconfident points using the student: This last attempt was the simplest yet the mos\neffective. Instead of using binary experts to identify student training points that should be labeled\nthe teachers, we used the student itself. We asked the student to make predictions on each unlabele:\ntraining point available. We then sorted these samples by increasing values of the maximum proba\nbility assigned to a class for each sample. We queried the teachers to label these unconfident input\nfirst and trained the student again on this larger labeled training set. This improved the accuracy o\nthe student when compared to the student trained on arbitrary data. For the same number of teache\nqueries, the absolute increases in accuracy of the student trained on unconfident inputs first whe\ncompared to the student trained on arbitrary data were in the order of 4% \u2014 10%.\nFigure 5: Influence of distillation on the accuracy of the SVHN student trained with respect to the\ninitial number of training samples available to the student. The student is learning from n = 50\nteachers, whose predictions are aggregated without noise: in case where only the label is returned,\nwe use plurality, and in case a probability vector is returned, we sum the probability vectors output\nby each teacher before normalizing the resulting vector.\nC APPENDIX: ADDITIONAL EXPERIMENTS ON THE UCI ADULT AND\nDIABETES DATASETS\nUCI Adult dataset: The UCI Adult dataset is made up of census data, and the task is to predict\nwhen individuals make over $50k per year. Each input consists of 13 features (which include the age,\nworkplace, education, occupation\u2014see the UCI website for a full list. The only pre-processing we\napply to these features is to map all categorical features to numerical values by assigning an integei\nvalue to each possible category. The model is a random forest provided by the scikit-learn\nPython package. When training both our teachers and student, we keep all the default paramete:\nvalues, except for the number of estimators, which we set to 100. The data is split between a\ntraining set of 32,562 examples, and a test set of 16,282 inputs.\nUCI Diabetes dataset: The UCI Diabetes dataset includes de-identified records of diabetic patients\nand corresponding hospital outcomes, which we use to predict whether diabetic patients were read-\nmitted less than 30 days after their hospital release. To the best of our knowledge, no particular\nclassification task is considered to be a standard benchmark for this dataset. Even so, it is valuable\nto consider whether our approach is applicable to the likely classification tasks, such as readmission,\nsince this dataset is collected in a medical environment\u2014a setting where privacy concerns arise\nfrequently. We select a subset of 18 input features from the 55 available in the dataset (to avoid\nfeatures with missing values) and form a dataset balanced between the two output classes (see the\nUCI website for more detail Tn class 0, we include all patients that were readmitted in a 30-day\nwindow, while class 1 includes all patients that were readmitted after 30 days or never readmitted at\nall. Our balanced dataset contains 34,104 training samples and 12,702 evaluation samples. We use\na random forest model identical to the one described above in the presentation of the Adult dataset.\nExperimental results: We apply our approach described in Section[2] For both datasets, we trail\nensembles of nm = 250 random forests on partitions of the training data. We then use the nois:\naggregation mechanism, where vote counts are perturbed with Laplacian noise of scale 0.05 t\nprivately label the first 500 test set inputs. We train the student random forest on these 500 test se\ninputs and evaluate it on the last 11,282 test set inputs for the Adult dataset, and 6,352 test set input:\nfor the Diabetes dataset. These numbers deliberately leave out some of the test set, which allowec\nus to observe how the student performance-privacy trade-off was impacted by varying the numbe\nof private labels, as well as the Laplacian scale used when computing these labels.\nFor the Adult dataset, we find that our student model achieves an 83% accuracy for an (\u20ac,5) =\n(2.66, 10-5) differential privacy bound. Our non-private model on the dataset achieves 85% accu-\nracy, which is comparable to the state-of-the-art accuracy of 86% on this dataset\n(2016). For the Diabetes dataset, we find that our privacy-preserving student model achieves <\n93.94% accuracy for a (e,6) = (1.44, 1075) differential privacy bound. Our non-private mode:\non the dataset achieves 93.81% accuracy.\nIn order to further demonstrate the general applicability of our approach, we performed experiments\non two additional datasets. While our experiments on MNIST and SVHN in Section A]used con-\nvolutional neural networks and GANs, here we use random forests to train our teacher and student\nmodels for both of the datasets. Our new results on these datasets show that, despite the differing\ndata types and architectures, we are able to provide meaningful privacy guarantees."}] |
SyOvg6jxx | [{"section_index": "0", "section_name": "A Stupy oF CountT-BASED EXPLORATION\nFOR DEEP REINFORCEMENT LEARNING", "section_text": "Haoran Tang!*, Rein Houthooft\u00ae**, Davis Foote\u201d, Adam Stooke\u201d, Xi Chen\u201c,\nYan Duan\u2018, John Schulman\u2019, Filip De Turck?, Pieter Abbeel 24\nCount-based exploration algorithms are known to perform near-optimally when\nused in conjunction with tabular reinforcement learning (RL) methods for solving\nsmall discrete Markov decision processes (MDPs). It is generally thought that\ncount-based methods cannot be applied in high-dimensional state spaces, since\nmost states will only occur once. Recent deep RL exploration strategies are able to\ndeal with high-dimensional continuous state spaces through complex heuristics\noften relying on optimism in the face of uncertainty or intrinsic motivation. In\nthis work, we describe a surprising finding: a simple generalization of the classic\ncount-based approach can reach near state-of-the-art performance on various high.\ndimensional and/or continuous deep RL benchmarks, States are mapped to hash\ncodes, which allows to count their occurrences with a hash table. These counts\nare then used to compute a reward bonus according to the classic count-based\nexploration theory. We find that simple hash functions can achieve surprisingly good\nresults on many challenging tasks. Furthermore, we show that a domain-dependent\nlearned hash code may further improve these results. Detailed analysis reveals\nimportant aspects of a good hash function: 1) having appropriate granularity and\n2) encoding information relevant to solving the MDP. This exploration strategy\nachieves near state-of-the-art performance on both continuous control tasks and\nAtari 2600 games, hence providing a simple yet powerful baseline for solving\nMDPs that require considerable exploration."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) studies an agent acting in an initially unknown environment, learning\nthrough trial and error to maximize rewards. It is impossible for the agent to act near-optimally unti\nit has sufficiently explored the environment and identified all of the opportunities for high reward, it\nall scenarios. A core challenge in RL is how to balance exploration\u2014actively seeking out novel state:\nand actions that might yield high rewards and lead to long-term gains; and exploitation\u2014maximizing\nshort-term rewards using the agent\u2019s current knowledge. While there are exploration technique:\nfor finite MDPs that enjoy theoretical guarantees, there are no fully satisfying techniques for high\ndimensional state spaces; therefore, developing more general and robust exploration techniques is at\nactive area of research.\nMost of the recent state-of-the-art RL results have been obtained using simple exploration strategies\nsuch as uniform sampling (Mnih et al] and i.i.d/correlated Gaussian noise\n(2015). Although these heuristics are sufficient in tasks with well-shaped\nrewards, the sample complexity can grow exponentially (with state space size) in tasks with sparse\nrewards [2016b). Recently developed exploration strategies for deep RL have led\nto significantly improved performance on environments with sparse rewards. Bootstrapped DQN\n*These authors contributed equally."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "led to faster learning in a range of Atari 2600 games by training an ensemble of\nQ-functions. Intrinsic motivation methods using pseudo-counts achieve state-of-the-art performance\non Montezuma\u2019s Revenge, an extremely challenging Atari 2600 game {Bellemare et al]\nVariational Information Maximizing Exploration (VIME, (2016}) encourages the\nagent to explore by acquiring information about environment dynamics, and performs well on various\nrobotic locomotion problems with sparse rewards. However, we have not seen a very simple and fast\nmethod that can work across different domains.\nOULUC UL LUG UiaSdIL, LISULCULALy~JUSULUCU CAPLULAUUL LCVD a6 VaDeU Ul LUULILIY Stato~aLuUl\nvisitations, and turning this count into a bonus reward. In the bandit setting, the well-known UCB\nalgorithm of|Lai & Robbins|(1985) chooses the action a; at time \u00a2 that maximizes 7(a;) +, | rose\nwhere #(a;) is the estimated reward, and n(a;) is the number of times action a, was previously chosen\nIn the MDP setting, some of the algorithms have similar structure, for example, Model Based Interval\nEstimation-Exploration Bonus (MBIE-EB) of/Strehl & Littman}({2008) counts state-action pairs with\na table n(s, a) and adding a bonus reward of the form to encourage exploring less visited pairs\nKolter & Ng}(2009) show that the inverse-square-root dependence is optimal. MBIE and related\nalgorithms assume that the augmented MDP is solved analytically at each timestep, which is only\npractical for small finite state spaces.\nThis paper presents a simple approach for exploration, which extends classic counting-based method\nto high-dimensional, continuous state spaces. We discretize the state space with a hash function anc\napply a bonus based on the state-visitation count. The hash function can be chosen to appropriatek\nbalance generalization across states, and distinguishing between states. We select problems from rllal\nand Atari 2600 featuring sparse rewards, and demonstrat:\nnear state-of-the-art performance on several games known to be hard for naive exploration strategie:\nThe main strength of the presented approach is that it is fast, flexible and complementary to mos\nexisting RL algorithms.\nInsummary, this paper proposes a generalization of classic count-based exploration to high-dimensional\nspaces through hashing (Section[2}; demonstrates its effectiveness on challenging deep RL benchmark\nproblems and analyzes key components of well-designed hash functions (Section[3}."}, {"section_index": "3", "section_name": "2.1 NoTATION", "section_text": "This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by\n(S,A,P.1r, p0,.y,T), in which S is the state space, A the action space, P a transition proba.\nbility distribution, r: S x A > Rzo a reward function, po an initial state distribution, y \u20ac (0, 1] a\ndiscount factor, and T the horizon. The goal of RL is to maximize the total expected discounted\nreward Ex, p [xt Y rsp a;)| over a policy 2, which outputs a distribution over actions given a state\nOur approach discretizes the state space with a hash function \u00a2: S \u2014 Z. An exploration bonus is\nadded to the reward function, defined as\n: B\n(sa) = fo,\nTSO GO)\nwhere \u00a3 \u20ac Rzo is the bonus coefficient. Initially the counts n(-) are set to zero for the whole range of\n\u00a2. For every state s; encountered at time step \u00a2, n(#(s;)) is increased by one. The agent is trained\nwith rewards (r + r*), while performance is evaluated as the sum of rewards without bonuses.\nNote that our approach is a departure from count-based exploration methods such as MBIE-EB since\nwe use a state-space count n(s) rather than a state-action count n(s, a). State-action counts n(s, a)\nare investigated in Appendix[A.6] but no significant performance gains over state counting could be\nwitnessed.\nAlgorithm 1: Count-based exploration through static hashing\nJMBULIU 2s WCUUUL-UadCU CAYLULAUUM WULUU 2 SLUG aS\n1 Define state preprocessor g : S > RX\n2 (In case of SimHash) Initialize A \u00a2 R*** with entries drawn i.i.d. from the standard Gaussian\ndistribution (0, 1)\n3 Initialize a hash table with values n(-) = 0\n4 for each iteration j do\n5 Collect a set of state-action samples {(s;, am)}_9 with policy z\n6 Compute hash codes through any LSH method, e.g., for SimHash, $(sy.) = sgn(Ag(sm))\n7 Update the hash table counts Yin: 0 < m < M as n(d(5m)) \u2014 2(6(5m)) +1\nM\n8 Update the policy m using rewards {rm Qn) + \u00e9s} with any RL algorithm\nAlgorithm[i]summarizes our method. The main idea is to use locality-sensitive hashing (LSH) to\nconvert continuous, high-dimensional data to discrete hash codes. LSH is a popular class of hash\nfunctions for querying nearest neighbors based on certain similarity metrics\nA computationally efficient type of LSH is SimHash (Charikar|[2002), which measures similarity by\nangular distance. SimHash retrieves a binary code of state s \u20ac S as"}, {"section_index": "4", "section_name": "2.3. Count-BAsED ExPLORATION VIA LEARNED HASHING", "section_text": "When the MDP states have a complex structure, as is the case with image observations, measurin;\ntheir similarity directly in pixel space fails to provide the semantic similarity measure one woul\ndesire. Previous work in computer vision [2016\nintroduce manually designed feature representations of images that are suitable for semantic task:\nincluding detection and classification. More recent methods learn complex features directly from dat:\nby training convolutional neural networks (Krizhevsky et al.|[2012}[Simonyan & Zisserman|[2014][Hi\nfet al|[2015}. Considering these results, it may be difficult for SimHash to cluster states appropriate\nusing only raw pixels.\n\\ ~ downsample @ \\ \\\nA RA \\ KA \\A \\\\\nKA KD Kt re\nAs Li Nixa] | Geal]] Be\na \\\nBO)\n96x5%5 YY 512 96x5%x5\n96x 1X1 Wo 96 x 10x 10\n96x 24.424 1024 2400 96x24 x24\n1x52x52 1x 52x52 64x 52x52\n\\ ~ downsample @ \\ \\\nMV \\ KA \\ KA \\A \\\\\nRY KD s\\ RN re\nAs Li Nixa] | Geal]] Be\nD |\nBO)\n96x5%x5 VY 512 96x5%x5\n96X IX Wo 96 x 10x 10\n96x 24 x24 1024 2400 96 x24 x24\n1x52x52 1x 52x52 645252\nFigure 1: The autoencoder (AE) architecture; the solid block represents the dense sigmoidal binary\ncode layer, after which noise U(\u2014a, a) is injected.\nClearly the performance of this method will strongly depend on the choice of hash function \u00a2. One\nimportant choice we can make regards the granularity of the discretization: we would like for \u201cdistant\u201d\nstates to be be counted separately while \u201csimilar\u201d states are merged. If desired, we can incorporate\nprior knowledge into the choice of \u00a2, if there would be a set of salient state features which are known\nto be relevant.\n#(s) = sgn(Ag(s)) \u20ac {-1, 1),\nwhere g : S > R\u00ae\u00a2 is an optional preprocessing function and A is a k x d matrix with i.id. entries\ndrawn from a standard Gaussian distribution NV (0, 1). The value for & controls the granularity: higher\nvalues lead to fewer collisions and are thus more likely to distinguish states.\nTherefore, we propose to use an autoencoder (AE) consisting of convolutional, dense, and transposed\nconvolutional layers to learn meaningful hash codes in one of its hidden layers. This AE takes as\ninput states s and contains one special dense layer comprised of K saturating activation functions,\nAlgorithm 2: Count-based exploration using learned hash codes\n1 Define state preprocessor g : S \u2014 B\u00a5 as the binary code resulting from the autoencoder (AE)\n2 Initialize A \u00a2 R*** with entries drawn i.id. from the standard Gaussian distribution N (0, 1)\n3 Initialize a hash table with values n(.) = 0\n4 for each iteration j do\n5 Collect a set of state-action samples {(sin, am) 9 with policy\n6 Add the state samples {sin}Hy to a FIFO replay pool R\n7 if j mod jupdate = 0 then\n8 Update the AE loss function in Eq. (3) using samples drawn from the replay pool\n\n| {sn} 1 ~ 8, for example using stochastic gradient descent\n9 Compute g(sy,) = Lb(s,)], the K-dim rounded hash code for s,, learned by the AE\n10 Project g(s,,) to a lower dimension k via SimHash as $(s,) = sgn(Ag(sn))\ni Update the hash table counts Vm : 0 < m < Mas n(\u00a2(sm)) \u2014 n(O(sm)) +1\n\nM\n\n2 Update the policy 7 using rewards {rom Qm) + es}. with any RL algorithm\nmore specifically sigmoid functions. By rounding the sigmoid output b(s) of this layer to the closes\nbinary number, any state s can be binarized.\nSince gradients cannot be back-propagated through a rounding function, an alternative method must\nbe used to ensure that distinct states are mapped to distinct binary codes. Therefore, uniform noise\nU(-a, a) is added to the sigmoid output. By choosing uniform noise with a sufficiently high variance.\nthe AE is only capable of reconstructing distinct inputs s if its hidden dense layer outputs values b(s)\nthat are sufficiently far apart from each other (Gregor et al.|[2016). Feeding a state s to the AE input,\nextracting b(s) and rounding it to | b(s)] yields a learned binary code. As such, the loss function L(-)\nover a set of collected states {si} 1 is defined as\nL(t) = Sf A> in (1 = Belsn))?s balsa)?\n(Calne) = ay Dy oaptsn) ~ ae Dymin{( = bin)? Bilsn)?} |.\nThis objective function consists of a cross-entropy term and a term that pressures the binary code laye\n\u2018o take on binary values, scaled by 2 \u20ac Ryo. The reasoning behind this is that uniform noise U(\u2014a, a\u2019\nilone is insufficient, in case the AE does not use a particular sigmoid unit. This term ensures that ar\ninused binary code output is assigned an arbitrary binary value. When omitting this term, the code i:\nnore prone to oscillations, causing unwanted bit flips, and destabilizing the counting process.\nOne the one hand, it is important that the mapping from state to code needs to remain relatively\nconsistent over time, which is nontrivial as the AE is constantly updated according to the latest datz\n(Algorithm [2]line[8). An obvious solution would be to significantly downsample the binary code to <\nvery low dimension, or by slowing down the training process. But on the other hand, the code has tc\nremain relatively unique for states that are both distinct and close together on the image manifold\nThis is tackled both by the second term in Eq. (3) and by the saturating behavior of the sigmoid units\nAs such, states that are already well represented in the AE hidden layers tend to saturate the sigmoic\nunits, causing the resulting loss gradients to be close to zero and making the code less prone to change\nIn order to make the AE train sufficiently fast\u2014which is required since it is updated during the agent\u2019s\ntraining\u2014we make use of a pixel-wise softmax output layer that shares\nweights between all pixels. The different softmax outputs merge together pixel intensities into discrete\nbins. The architectural details are described in Appendix[A-IJand are depicted in Figure[]] Because\nthe code dimension often needs to be large in order to correctly reconstruct the input, we apply a\ndownsampling procedure to the resulting binary code | b(s)], which can be done through random\nprojection to a lower-dimensional space via SimHash as in Eq. (2).\nTo answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that\nfeature sparse rewards, and compate it to other state-of-the-art algorithms. Question 2 is answered by\ntrying out different image preprocessors on Atari 2600 games. Finally, we investigate question 3 in\nSectionB.3]and3.4| Trust Region Policy Optimization (TRPO, [Schulman et al.]{2015)) is chosen\nas the RL algorithm for all experiments, because it can handle both discrete and continuous action\nspaces, it can conveniently ensure stable improvement in the policy performance, and is relatively\ninsensitive to hyperparameter changes. The hyperparameters settings are reported in Appendix[A.1"}, {"section_index": "5", "section_name": "3.1 Continuous ConTROL", "section_text": "The rllab benchmark consists of various control tasks to test deep RL algorithm:\nWe selected several variants of the basic and locomotion tasks that use sparse rewards, as show\nin Figure[2] and adopt the experimental setup as defined in (Houthooft et al] [2016}\u2014a descriptior\ncan be found in Appendix[A.2] These tasks are all highly difficult to solve with naive exploratioi\nstrategies, such as adding Gaussian noise to the actions.\nFigure 2: Illustrations of the rllab tasks used in the continuous control experiments, namely\nMountainCar, CartPoleSwingup, SimmerGather, and HalfCheetah; taken from (Duan et al] 2016).\nBe TT mee mre eer gamez ep oepe ny Beep\n\noR oe he BI \u2014\n\nOTT af gg tt\n\n\u201coo W TT gaff pees gal a) aki} HSL pgfan est af\n\n\u201coH Lehn ae pee AL TEER pepe eee\n(a) MountainCar (b) CartPoleS wingup (c) SwimmerGather (d) HalfCheetah\nFigure 3: Mean average return of different algorithms on rllab tasks with sparse rewards; the solid\nline represents the mean average return, while the shaded area represents one standard deviation, over\n5 seeds for the baseline and SimHash.\nFigure[3|shows the results of TRPO (baseline), TRPO-SimHash, and VIME\non the classic tasks MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and the\nhierarchical task SwimmerGather. Using count-based exploration with hashing is capable of reaching\nthe goal in all environments (which corresponds to a nonzero return), while baseline TRPO with\nGaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward on\nHalfCheetah, it does not perform as well as VIME. In contrast, the performance of SimHash is\ncomparable with VIME on MountainCar, while it outperforms VIME on SwimmerGather."}, {"section_index": "6", "section_name": "3.2 ARCADE LEARNING ENVIRONMENT", "section_text": "The Arcade Learning Environment (ALE, |Bellemare et al.](2012)), which consists of Atari 2600\nvideo games, is an important benchmark for deep RL due to its high-dimensional state space and wide\n1. Can count-based exploration through hashing improve performance significantly across\ndifferent domains? How does the proposed method compare to the current state of the art in\nexploration for deep RL?\n\n2. What is the impact of learned or static state preprocessing on the overall performance when\nimage observations are used?\n\n3. What factors contribute to good performance, e.g., what is the appropriate level of granularity\nof the hash function?\nAA\nvariety of games. In order to demonstrate the effectiveness of the proposed exploration strategy, six\ngames are selected featuring long horizons while requiring significant exploration: Freeway, Frostbite\nGravitar, Montezuma\u2019s Revenge, Solaris, and Venture. The agent is trained for 500 iterations in all\nexperiments, with each iteration consisting of 0.1 M steps (the TRPO batch size, corresponds to 0.4M\nframes). Policies and value functions are neural networks with identical architectures to (Mnih et al.\n(2016). Although the policy and baseline take into account the previous four frames, the counting\nalgorithm only looks at the latest frame.\nBASS To compare with the autoencoder-based learned hash code, we propose using Basic Abstrac-\ntion of the ScreenShots (BASS, also called Basic; see[Bellemare et al,|{2012}) as a static preprocessing\nfunction g. BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS\nbuilds on the following observations specific to Atari: 1) the game screen has a low resolution, 2)\nmost objects are large and monochrome, and 3) winning depends mostly on knowing object locations\nand motions. We designed an adapted version of BAST} that divides the RGB screen into square\ncells, computes the average intensity of each color channel inside a cell, and assigns the resulting\nvalues to bins that uniformly partition the intensity range [0,255]. Mathematically, let C be the cell\nsize (width and height), B the number of bins, (i, j) cell location, (x, y) pixel location, and z the\nchannel.\nfeature(i, j,z) = | soe Leeye ce) 106\u00bb; 2) .\nAfterwards, the resulting integer-valued feature tensor is converted to an integer hash code ((s;) in\nLine[bjof Algorithm[]p. A BASS feature can be regarded as a miniature that efficiently encodes object\nlocations, but remains invariant to negligible object motions. It is easy to implement and introduces\nlittle computation overhead. However, it is designed for generic Atari game images and may nol\ncapture the structure of each specific game very well.\nTable 1: Atari 2600: average total reward after training for 50M time steps. Boldface numbers\nindicate best results. Italic numbers are the best among our methods.\nFreeway Frostbite! Gravitar Montezuma Solaris Venture\nTRPO (baseline) 16.5 2869 486 0 2758 121\n\u201cTRPO-pixel-SimHash 31.6 4683\u2014=\u2014(ti BCT\nTRPO-BASS-SimHash 28.4 3150 604 238 1201 616\nTRPO-AE-SimHash 33.5 5214 482 75 4467 445\nDouble-DQN 33.3 1683 412 0 3068 98.0\n\u201cDueling network ==\u201c \u201c(si .0siTD\u2014 (<sSBBOC ti OT\n\u201cGorilla LT 605 1084 NA 24\n\u201cDONPop-Art i334 3469117\n\u201cASCH \u201d\u201d~<\u201c\u2014~iT OT (st SC (i\u2018sa:SCSCIS\npseudo-count\u201d 29.2 1450 - 3439 - 369\n1 While]Vezhnevets et al. (2016} reported best score 8108, their evaluation was based on top 5 agents trained\nwith 500M time steps, hence not comparable.\n2 Results reported only for 25M time steps (100 M frames).\nWe compare our results to double DON {van Hasselt et al] [2016b}, dueling network {Wang et al]\n[2016}, A3C+ (Bellemare et al.|[2016), double DQN with pseudo-counts (Bellemare et al.][2016).\nGorila (Nair et al ][2015), and DQN Pop-Art on the \u201cnull op\u201d metriq?] We\nshow training curves in Figure[4[and summarize all results in Table 1. Surprisingly, TRPO-pixel-\nSimHash already outperforms the baseline by a large margin and beats the previous best result on\nFrostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on\n1The original BASS exploits the fact that at most 128 colors can appear on the screen, Our adapted versio)\ndoes not make this assumption.\n2T ha nant tals na antinn fawn nandam nombar Grithin IN nf fenman at tha hacinnina nf anch anianda\nMontezuma\u2019s Revenge and Venture, where it captures object locations better than other methods}?\nTRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris|4\nLed sb -BASS-SimHas i\nwl ] i 4 Py > po *\u00b0)|= Taropmetsintem| ill\npe , Ee Saas eget = | T : pune\nMf \u2014 ee 2 ee aed 500 i ick aasaadnle |\n~ : Loan as nee ald ene\n\u2014as\u2014sk$ te te LE on ERR peta\noo reeway we se0 Fata a\n1 : . . . (b) Frostbite se\nLg uni app (\u00a9) Gravitar\naf at) in inal | ul ae\n4 \u2018 Ny 09) + A atid) Hi\n+ L etl 7 ae shakin tctaGtibsrreei nN Lonel alates\n4 if et er EET A Cul mo ath We\na Fo 72000h Kae een\n(4) Montezuma\u2019s Revenge iss \u2014s se \u2014sho 4 a\ngt . 10 209 he serene\n(e) Solaris \u201d \u201c 500\n(f) Venture\nAs observed in Table 1, preprocessing images with BASS or using a learned hash code through the\nAE leads to much better performance on Gravitar, Montezuma\u2019s Revenge and Venture. Therefore, an\nstatic or adaptive preprocessing step can be important for a good hash function.\nIn conclusion, our count-based exploration method is able to achieve remarkable performance\ngains even with simple hash functions like SimHash on the raw pixel space. If coupled with\ndomain-dependent state preprocessing techniques, it can sometimes achieve far better results."}, {"section_index": "7", "section_name": "3.3. GRANULARITY", "section_text": "While our proposed method is able to achieve remarkable results without requiring much tunin;\nhe granularity of the hash function should be chosen wisely. Granularity plays a critical role i\n>ount-based exploration, where the hash function should cluster states without under-generalizin\nor over-generalizing. Table [2] summarizes granularity parameters for our hash functions. In Table|\nve summarize the performance of TRPO-pixel-SimHash under different granularities. We choos\nFrostbite and Venture on which TRPO-pixel-SimHash outperforms the baseline, and choose as rewar\nonus coefficient 8 = 0.01 x 256 to keep average bonus rewards at approximately the same scal\n\u00a2 = 16 only corresponds to 65536 distinct hash codes, which is insufficient to distinguish betwee\nsemantically distinct states and hence leads to worse performance. We observed that k = 512 tend\n\u20180 capture trivial image details in Frostbite, leading the agent to believe that every state is new an\n2qually worth exploring. Similar results are observed while tuning the granularity parameters fc\n[RPO-BASS-SimHash and TRPO-AE-SimHash.\nThe best granularity depends on both the hash function and the MDP. While adjusting granularity\nparameter, we observed that it is important to lower the bonus coefficient as granularity is increased\nThis is because a higher granularity is likely to cause lower state counts, leading to higher bonus\nrewards that may overwhelm the true rewards.\n3We provide videos of example game play and visualizations of the difference bewteen Pixel-SimHash and\nBASS Simla afigpeTww yout conan t-PA UNRGFEEGANW VENT pT TSS\n\n4Note that some design choices in other algorithms also impact exploration, such as 6-greedy and entropy\nregularization. Nevertheless, it is still valuable to position our results within the current literature.\nFigure 4: Atari 2600 games: the solid line is the mean average undiscounted return per iteration,\nwhile the shaded areas represent the one standard deviation, over 5 seeds for the baseline, TRPO-\npixel-SimHash, and TRPO-BASS-SimHash, while over 3 seeds for TRPO-AE-SimHash.\nTable 2: Granularity parameters of various hash functions\nTable 3: Average score at 50M time steps\nachieved by TRPO-pixel-SimHash\nk 16 64 128 256 512\nFrostbite 3326 4029 3932 4683 1117\nVenture 0 218 142 263 306\nMontezuma's Revenge is widely known for its extremely sparse rewards and difficult exploration\n(Bellemare et al [2016). While our method does not outperform[Bellemare et al.] (2016) on this game\nwe investigate the reasons behind this through various experiments. The experiment process below\nagain demonstrates the importance of a hash function having the correct granularity and encoding\nrelevant information for solving the MDP.\nOur first attempt is to use game RAM states instead of image observations as inputs to the policy\n(details in Appendix[A-Ip, which leads to a game score of 2500 with TRPO-BASS-SimHash. Oui\nsecond attempt is to manually design a hash function that incorporates domain knowledge, callec\nSmartHash, which uses an integer-valued vector consisting of the agent\u2019s (x, y) location, room numbe1\nand other useful RAM information as the hash code (details in Appendix[A.3p. The best SmartHast\nagent is able to obtain a score of 3500. Still the performance is not optimal. We observe that a slight\nchange in the agent\u2019s coordinates does not always result in a semantically distinct state, and thus the\nhash code may remain unchanged. Therefore we choose grid size s and replace the x coordinate by\nL(x \u2014 Xmin)/s] (similarly for y). The bonus coefficient is chosen as 8 = 0.01+/s to maintain the scale\nrelative to the true rewarq?] (see Table[A}. Finally, the best agent is able to obtain 6600 total rewards\nafter training for 1000 iterations (1000 M time steps), with a grid size s = 10.\n8000 1 T . 1 7\n\n= exact enemy locations \u2014eananpnpeantinty\n\nmt = ignore enemies : aE\" =\n\nsoco}| == random enemy locations} L tt i : |\n\ni Ll re a a AN DE\n\nee yi A en {\n\nme PF mkt Ow Ge LAP\n3000 : of ae tN ages ; \u2019\n\n2000 i Lt, i dria t , Ply ME {\n\n1000 LE ee en fl. we Lercannermpmegern\n\nwn ley \u201ca ee\n\nan ree TE h i. =\n\n0 trey olen coed\n\n- 10005 200 400 600 800 100C\nFigure 5: SmartHash results on Montezuma\u2019s Revenge (RAM observations): the solid line is the\nmean average undiscounted return per iteration, while the shaded areas represent the one standarc\ndeviation, over 5 seeds.\nDuring our pursuit, we had another interesting discovery that the ideal hash function should not\nsimply cluster states by their visual similarity, but instead by their relevance to solving the MDP. We\n>The bonus scaling is chosen by assuming all states are visited uniformly and the average bonus reward\nshould remain the same for any grid size.\nTable 4: Average score at 50M time steps\nachieved by TRPO-SmartHash on Montezuma\u2019s\nRevenge (RAM observations)\ngooal| \u2014_ &xact enemy locations i i cea cae quboa ana ota\n|| \u2018snore enemies io wi\n\nsovo|| == random enemy lecations| : yee | : |\n\nall Mal a a\n\n4000 L wt us aug al je wn Vv |\n\nye ODP yeah igen pst AM AO\n\nvocal en potmeggelarnyt W MeN |\n\n1000 LE Pen fe we LeCreaeemspeecnn|\n\n= eg \u201ca teaiecieee \u201c4\n\na a\n\n9 : : trey vet Na 7\n\n10005 200 400 600 2800 1000\nexperimented with including enemy locations in the first two rooms into SmartHash (s = 10), an\nobserved that average score dropped to 1672 (at iteration 1000). Though it is important for the agen\nto dodge enemies, the agent also erroneously \u201cenjoys\u201d watching enemy motions at distance (sinc:\nnew states are constantly observed) and \u201cforgets\u201d that his main objective is to enter other rooms. Al\nalternative hash function keeps the same entry \u201cenemy locations\u201d, but instead only puts random\nsampled values in it, which surprisingly achieves better performance (3112). However, by ignorin;\nenemy locations altogether, the agent achieves a much higher score (5661) (see Figure[5p. In retrospec\nwe examine the hash codes generated by BASS-SimHash and find that codes clearly distinguis]\nbetween visually different states (including various enemy locations), but fails to emphasize that th:\nagent needs to explore different rooms. Again this example showcases the importance of encodin;\nrelevant information in designing hash functions."}, {"section_index": "8", "section_name": "4 RELATED Work", "section_text": "Classic count-based methods such as MBIE (Strehl_& Littman] [2005), MBIE-EB and (Kolter &\n(2009) solve an approximate Bellman equation as an inner loop before the agent takes an actior\n{Strehl_& Littman] [2008). As such, bonus rewards are propagated immediately throughout the\nstate-action space. In contrast, contemporary deep RL algorithms propagate the bonus signal based or\nrollouts collected from interacting with environments, with value-based or policy\ngradient-based methods, at limited speed. In addition, ow\nproposed method is intended to work with contemporary deep RL algorithms, it differs from classica\u2019\ncount-based method in that our method relies on visiting unseen states first, before the bonus rewarc\ncan be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling\nthe gaps between our method and classic theories is an important direction of future research.\nAnother type of exploration is curiosity-based exploration. These methods try to capture the agent\u2019s\nsurprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers\nnovel states. We refer the reader to[Schmidhuber] 2010) and [Oudeyer & Kaplan] for an\nextensive review on curiosity and intrinsic rewards.\nThe most related exploration strategy is proposed by [Bellemare et al.](2016), in which an exploratior\nbonus is added inversely proportional to the square root of a pseudo-count quantity. A state pseudc\ncount is derived from its log-probability improvement according to a density model over the stat\nspace, which in the limit converges to the empirical count. Our method is similar to pseudo-coun\napproach in the sense that both methods are performing approximate counting to have the necessar\ngeneralization over unseen states. The difference is that a density model has to be designed anc\nlearned to achieve good generalization for pseudo-count whereas in our case generalization is obtainec\nby a wide range of simple hash functions (not necessarily SimHash). Another interesting connectior\nis that our method also implies a density model p(s) = HED over all visited states, where N is thi\ntotal number of states visited. Another method similar to hashing is proposed by[Abel ef al.]{2016]\nwhich clusters states and counts cluster centers instead of the true states, but this method has yet to bi\ntested on standard exploration benchmark problems.\nArelated line of classical exploration methods is based on the idea of optimism in the face of uncertainty\n(Brafman & Tennenholtz||2002) but not restricted to using counting to implement \u201coptimism\u201d, e.g.\nIE-Max (Brafman & Temesholt]2002), UCRL laksch etal] [2010}, and E\u00b0 (Kearns & Singh[3002),\nThese methods, similar to MBIE and MBIE-EB, have theoretical guarantees in tabular settings.\nBayesian RL methods (Kolter & Ng] [2009 2014] 201 1}[Ghavamzadeh et al.\n2015), which keep track of a distribution over MDPs, are an alternative to optimism-based methods.\nExtensions to continuous state space have been proposed by| (2013) and|Osband et al.\n(2016b).\nSeveral exploration strategies for deep RL have been proposed to handle high-dimensional state space\n\nrecently. propose VIME, in which information gain is measured in Bayesian\n\nneural networks modeling the MDP dynamics, which is used an exploration bonus.\npropose to use the prediction error of a learned dynamics model as an exploration bonus.\nThompson sampling through bootstrapping is proposed by[Osband et al] {2016ap, using bootstrapped\n\nQ-functions."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This\nresearch was funded in part by ONR through a PECASE award. Yan Duan was also supported by a\nBerkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a\nBerkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through\ngrant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through\nthe ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully\nacknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft is\nsupported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO)."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Marc G Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos.\nUnifying count-based exploration and intrinsic motivation. In Advances in Neural Information\nProcessing Systems, 2016.\nBurton H. Bloom. Space/time trade-offs in hash coding with allowable errors. Communications o;\nthe ACM, 13(7):422-426, 1970.\nMoses S Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of th\nthirty-fourth annual ACM symposium on Theory of computing, pp. 380-388, 2002.\nThis paper demonstrates that a generalization of classical counting techniques through hashing is able\nto provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs\nusing function approximators, resulting in near state-of-the-art performance across benchmarks. It\nprovides a simple yet powerful baseline for solving MDPs that require informed exploration.\nMarc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment:\nAn evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.\nRonen! Brafman and Moshe Tennenholtz. R-max-a general polynomial time algorithm for near-optimal\nreinforcement learning. Journal of Machine Learning Research, 3:213-231, 2002.\nThomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning\nJournal of Machine Learning Research, 11:1563-1600, 2010.\nMichael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine\nLearning, 49(2-3):209-232, 2002.\nDavid G Lowe. Object recognition from local scale-invariant features. In Computer vision, 1999. The\nproceedings of the seventh IEEE international conference on. volume 2. pp. 1150-1157. Ieee. 1999\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Mare G Bellemare,\nAlex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control\nthrough deep reinforcement learning. Nature, 518(7540):529-533, 2015.\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by\nreducing internal covariate shift. In International Conference on Machine Learning (ICML), pp.\n448-456, 2015.\nArun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria,\nVedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively\nparallel methods for deep reinforcement learning. arXiv preprint arXiv: 1507.04296, 2015.\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image\nrecognition. arXiv preprint arXiv: 1409.1556, 2014.\nBradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement\nlearning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.\nAlexander L Strehl and Michael L Littman. An analysis of model-based interval estimation for\nMarkov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.\nAaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. It\nInternational Conference on Machine Learning (ICML), 2016.\nHado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning functions across many\norders of magnitudes. arXiv preprint arXiv:1602.07714, 2016a.\nAlexander Vezhnevets, Volodymyr Mnih, John Agapiou, Simon Osindero, Alex Graves, Oriol Vinyals,\nand Koray Kavukcuoglu. Strategic attentive writer for learning macro-actions. In Advances in\nNeural Information Processing Systems (NIPS), 2016.\nZiyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement\nlearning. In International Conference on Machine Learning (ICML), 2016.\nJohn Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel, Trust region\nanaling: nentieninatinn Tes Teed nee ati neal Cnn fbnene nn nen WA a nhiten an Ton cntenn SEOOMAT YY INI\nAlexander L Strehl and Michael L Littman. A theoretical analysis of model-based interval estimation.\nTn International Conference on Machine Learnine (ICM). nn. 856-863. 2005."}, {"section_index": "11", "section_name": "A.l HyperpARAMETER SETTINGS", "section_text": "For the rllab experiments, we used batch size 5000 for all tasks except SwimmerGather, for which w\nused batch size 50000. CartpoleSwingup makes use of a neural network policy with one layer of 3:\ntanh units. The other tasks make use of a two layer neural network policy of 32 tanh units each fo\nMountainCar and HalfCheetah, and of 64 and 32 tanh units for SwimmerGather. The outputs ar\nmodeled by a fully factorized Gaussian distribution NV (4, 07), in which yt is modeled as the networ\noutput, while o is a parameter. CartPoleSwingup makes use of a neural network baseline with on\nlayer of 32 ReLU units, while all other tasks make use of a linear baseline function. For all tasks, w\nused TRPO step size 0.01 and discount factor y = 0.99. We choose SimHash parameter k = 32 an\nbonus coefficient 8 = 0.01, found through a coarse grid search.\nFor Atari experiments, a batch size of 100000 is used, while the KL divergence step size is set tc\n0.01. The policy and baseline both have the following architecture: 2 convolutional layers witk\nrespectively 16 and 32 filters, sizes 8 x 8 and 4 x 4, strides 4 and 2, using no padding, feeding intc\na single hidden layer of 256 units. The nonlinearities are rectified linear units (ReLUs). The inpu!\nframes are downsampled to 52 x 52. The input to policy and baseline consists of the 4 previous\nframes, corresponding to the frame skip of 4. The discount factor was set to y = 0.995. All inputs\nare rescaled to [\u20141, 1] element-wise. All experiments used 5 different training seeds, except the\nexperiments with the learned hash code, which uses 3 different training seeds. Batch normalizatior\nis used at each policy and baseline layer. TRPO-pixel-SimHash uses binary\ncodes of size k = 256; BASS (TRPO-BASS-SimHash) extracts features using cell size C = 20 anc\nB = 20 bins. The autoencoder for the learned embedding (TRPO-AE-SimHash) uses a binary hidder\nlayer of 512 bit, which are projected to 64 bit.\nRAM states in Atari 2600 games are integer-valued vectors over length 128 in the range [0, 255].\nExperiments on Montezuma\u2019s Revenge with RAM observations use a policy consisting of 2 hidden\nlayers, each of size 32. RAM states are rescaled to a range [\u20141, 1]. Unlike images, only the current\nRAM is shown to the agent. Experiment results are averaged over 10 random seeds.\nThe autoencoder used for the learned hash code has a 512 bit binary code layer, using sigmoid units\nto which uniform noise U(\u2014a, a) with a = 0.3 is added. The loss function Eq. \u00ae). using A = 10\nis updated every jupdate = 3 iterations. The architecture looks as follows: an input layer of size\n52 x 52, representing the image luminance is followed by 3 consecutive 6 x 6 convolutional layers\nwith stride 2 and 96 filters feed into a fully connected layer of size 1024, which connects to the binary\ncode layer. This binary code layer feeds into a fully-connected layer of 1024 units, connecting to 4\nfully-connected layer of 2400 units. This layer feeds into 3 consecutive 6 x 6 transposed convolutional\nlayers of which the final one connects to a pixel-wise softmax layer with 64 bins, representing the\npixel intensities. Moreover, label smoothing is applied to the different softmax bins, in which the\nlog-probability of each of the bins is increased by 0.003, before normalizing. The softmax weights\nare shared among each pixel. All output nonlinearities are ReLUs; Adam (Kingma & Bal is\nused as an optimization scheme; batch Taran = Seeeee [2015) is applied to each layer\nThe architecture was shown in Figure[l]of Section|2.3"}, {"section_index": "12", "section_name": "A.2 DESCRIPTION OF THE ADAPTED RLLAB TASKS", "section_text": "This section describes the continuous control environments used in the experiments. The tasks are\nimplemented as described in[Duan et al] (2016), following the sparse reward adaptation of[Houthoof\nfeal}(2oig. The tasks have the following state and action dimensions: CartPoleSwingup, S \u00a2 R\u2019\nAC RB\u2019; MountainCar S c R?, A C R!; HalfCheetah, S C R\u201d, A C R\u00ae; SwimmerGathei\nS CR*, A CR\u2019. Por the sparse reward experiments, the tasks have been modified as follows. Ir\nCartPoleSwingup, the agent receives a reward of +1 when cos(8) > 0.8, with f the pole angle. Ir\nMountainCar, the agent receives a reward of +1 when the goal state is reached, namely escaping\nthe valley from the right side. Therefore, the agent has to figure out how to swing up the pole ir\nthe absence of any initial external rewards. In HalfCheetah, the agent receives a reward of +1 whet\nXbody > 5. As such, it has to figure out how to move forward without any initial external reward. The\ntime horizon is set to T = 500 for all tasks."}, {"section_index": "13", "section_name": "A.3 EXAMPLES OF ATARI 2600 RAM ENTRIES", "section_text": "Table 5: Interpretation of particular RAM entries in Montezuma\u2019s Revenge"}, {"section_index": "14", "section_name": "A4 ANALysis or LEARNED BINARY REPRESENTATION", "section_text": "Figure [6]shows the downsampled codes learned by the autoencoder for several Atari 2600 games\n(Frostbite, Freeway, and Montezuma\u2019s Revenge). Each row depicts 50 consecutive frames (from 0 tc\n49, going from left to right, top to bottom). The pictures in the right column depict the binary code:\nthat correspond with each of these frames (one frame per row). Figure[7]shows the reconstructions o1\nseveral subsequent images according to the autoencoder,\nTable[5]lists the semantic interpretation of certain RAM entries in Montezuma\u2019s Revenge. SmartHash,\nas described in SectionB.4] makes use of RAM indices 3, 42, 43, 27, and 67. \u201cBeam walls\u201d are\ndeadly barriers that occur periodically in some rooms.\nRAM index Group Meaning\n3 room room number\n\n\u201can agent ecoordinate\n43 agent y coordinate\n52 agent orientation (left/right)\n\n\u201coy peamwalls on/off ss\u2014iti\u2018\u201c<\u201c<CSCO\n83 beam walls beam wall countdown (on: 0, off: 36 \u2014 0)\n\n\u201c9 \"counter counts from Oto 255 andrepeas\n55 counter death scene countdown\n\n\"67 objects. existence of objects (doors, skull and key) in the Ist room\n47 skull x coordinate (both Ist and 2nd rooms)\nRaaSRaaaa Spares\nAasdaadaee i ty\nAscasnses (Hi it\nfadeedssaas (rela\nFigure 6: Frostbite, Freeway, and Montezuma\u2019s Revenge: subsequent frames (left) and corresponding\ncode (right); the frames are ordered from left (starting with frame number 0) to right, top to bottom;\nthe vertical axis in the right images correspond to the frame number.\nFigure 7: Freeway: subsequent frames and corresponding code (top); the frames are ordered from left\n(starting with frame number 0) to right, top to bottom; the vertical axis in the right images correspond\nto the frame number. Within each image, the left picture is the input frame, the middle picture the\nreconstruction, and the right picture, the reconstruction error.\nWe experimented with directly building a hashing dictionary with keys @(s) and values the stat\ncounts, but observed an unnecessary increase in computation time. Our implementation converts the\ninteger hash codes into binary numbers and then into the \u201cbytes\u201d type in Python. The hash table is :\ndictionary using those bytes as keys.\nHowever, an alternative technique called Count-Min (ran ALJ can cou a eT witl\na data structure identical to counting Bloom filters (Fan et al.][2000}, can count with a fixed intege\narray and thus reduce computation time. Specifically, let p*,..., p' be distinct large prime number\nand define \u00a2/(s) = \u00a2(s) mod p\u2019. The count of state s is returned as min; < j<l ni (gf (s)). To increas:\nthe count of s, we increment n/ (\u00a2 (s)) by 1 for all 7. Intuitively, the method replaces \u00a2 by weake\nhash functions, while it reduces the probability of over-counting by reporting counts agreed by al\nsuch weaker hash functions. The final hash code is represented as (6l(s), Leo! (s)).\nThroughout all experiments above, the prime numbers for the counting Bloom filter are 999931\n999953, 999959, 999961, 999979, and 999983, which we abbreviate as \u201c6M\u201d. In addition, we\nexperimented with 6 other prime numbers, each approximately 15 M, which we abbreviate as \u201c90 M\u201d\nAs we can see in Figure] counting states with a dictionary or with Bloom filters lead to simila\nperformance, but the computation time of latter is lower. Moreover, there is little difference betweer\ndirect counting and using a very larger table for Bloom filters, as the average bonus rewards are almos!\nthe same, indicating the same degree of exploration-exploitation trade-off. On the other hand, Bloom\nfilters require a fixed table size, which may not be known beforehand.\noe = \u2014\n\nee G | aly 1)\n\n2G At led abl 6\n\nsoll an pe : Ree bls al\n\nJ] ry) yet\n\nll gr st eT\nipememin| | pein inmerscates| |ieresece itenemes| [Emenee\n\u2014a_s ns: cs \u2014\u2014n <a sy\nfiscimmsemsias| | rete ssescomasscast\nEnos) | ES Schon ScRRsSaer| | Eeoeeses\nand define \u00a2/(s) = \u00a2(s) mod p\u2019. The count of state s is returned as min) <;<) n\u2019 (\u00a2/(s)}. To increase\n9000 + r . + i 0.012, ~ r . Y a\n== direct count| alan = direct count|\nmi = Bloom 6M + i pasts [col Bloom 6M i : i\nso00|| == Bloor 90M |... Last Pt eas anaabintle | \u2014 Bloom 90M\ni 5\n5000 : Le se : ( o.c0s| 4 i : 4\nsi: :\n4000 L ie L Ree.\npe MEG 9 0g] NSE ci, cs :\n2000 Ee. a L ! U ccca Sh. Namo,\n\u201cPOF { Se i\n2000-8 L begs ey \u201cSS Pepe kates eed\nopen: Dr grecrou gy onneuonmaan rae A So\nae _ ern\n20005 100 200 300 400 500 0.0005 100 200 300 400 500\n(a) Mean average undiscounted return (b) Average bonus reward\nFigure 8: Statistics of TRPO-pixel-SimHash (k = 256) on Frostbite. Solid lines are the mean, while\nthe shaded areas represent the one standard deviation. Results are derived from 10 random seeds\nDirect counting with a dictionary uses 2.7 times more computations than counting Bloom filters (6M\nor 90M).\nTheory of Bloom Filters Bloom filters are popular for determining whether a date\nsample s\u2019 belongs to a dataset D. Suppose we have / functions @ that independently assign eack\ndata sample to an integer between | and p uniformly at random. Initially 1,2,..., \u00bb are marked as 0\nThen every s \u20ac D is \u201cinserted\u201d through marking \u00a2/(s) as 1 for all j. A new sample s\u2019 is reported as z\nmember of 2 only if \u00a2/(s) are marked as 1 for all j. A bloom filter has zero false negative rate (any\ns \u20ac D is reported a member), while the false positive rate (probability of reporting a nonmember as z\nmember) decays exponentially in /.\nThough Bloom filters support data insertion, it does not allow data deletion. Counting Bloom filter:\n2000) maintain a counter n(-) for each number between 1 and p. Inserting/deleting .\ncorresponds to incrementing/decrementing n(g (s)) by 1 for all 7. Similarly, s is considered :\nmember if Vi : nfai(s)\\ = 0.\nCount-Min sketch is designed to support memory-efficient counting without introducing too many\nover-counts. It maintains a separate count n/ for each hash function \u00a2/ defined as @(s) = $(s)\nmod p/, where p\u2019 is a large prime number. For simplicity, we may assume that p/ ~ p Vj and \u00a2!\nassigns s to any of 1,..., p with uniform probability.\nWe now derive the probability of over-counting. Let s be a fixed data sample (not necessarily\ninserted yet) and suppose a dataset D of N samples are inserted. We assume that p! > N. Let\nn= mini<j< ni (o/ (s)) be the count returned by the Bloom filter. We are interested in computing\nDeakin ~ Ale d DV Dna ta aconmntiane chant Al wa lnaur ni (Ale ~ Rinamial (ar L\\ Tharefara\nProb(n > O|s \u00a2 D). Due to assumptions about \u00a2/, we know n/(\u00a2(s)) ~ Binomial (NV, +). Therefore\nn:= miny<;< 7 (d\u00e9 (s)) be the count returned by the Bloom filter. We are interested in computing\n5 _ Prob(n > 0,5 \u00a2 D)\nrob(n > O|s \u20ac DM) = prob \u00a2D) \u20acD)\n_ Prob(n > 0) \u2014 Prob(s \u20ac D)\n~ Prob(s \u00a2 D)\n_ Prob( > 0)\n~ Prob(s \u00a2 D)\n[a1 Prob(a! (6\"(s)) > 0)\n~ (= 1/p)%\n_ (-(-1/py\u00a5y\n~ (L= Tp)\n(l-eNipy!\n\u201cNip\na (\u2014e Ny,\nIn particular, the probability of over-counting decays exponentially in /. We refer the readers to\n(Cormode & Muthukrishnan] {2005} for other properties of the Count-Min sketch."}, {"section_index": "15", "section_name": "A.6 RoxsusTNEss ANALYSIS", "section_text": "Apart from the experimental results shown in Table[IJand TableB} additional experiments have been\nperformed to study several properties of our algorithm.\nHyperparameter sensitivity To study the performance sensitivity to hyperparameter changes, we\nfocus on evaluating TRPO-RAM-SimHash on the Atari 2600 game Frostbite, where the method has a\nclear advantage over the baseline. Because the final scores can vary between different random seeds,\nwe evaluated each set of hyperparameters with 30 seeds. To reduce computation time and cost, RAM\nstates are used instead of image observations.\nTable 6: TRPO-RAM-SimHash performance robustness to hyperparameter changes on Frostbit\u00ab\nThe results are summarized in Table|6] Herein, k refers to the length of the binary code for hashing\nwhile 6 is the multiplicative coefficient for the reward bonus, as defined in Section [2.2] This\ntable demonstrates that most hyperparameter settings outperform the baseline (8 = 0) significantly\nMoreover, the final scores show a clear pattern in response to changing hyperparameters. Small\n\u00a3-values lead to insufficient exploration, while large B-values cause the bonus rewards to overwhelm\nthe true rewards. With a fixed k, the scores are roughly concave in f, peaking at around 0.2. Higher\ngranularity & leads to better performance. Therefore, it can be concluded that the proposed exploration\nmethod is robust to hyperparameter changes in comparison to the baseline, and that the best parameter\nsettings can obtained from a relatively coarse-grained grid search.\nState and state-action counting Continuing the results in Table[6] the performance of state-action\ncounting is studied using the same experimental setup, summarized in Table [7] In particular, a\nbonus reward rt = wes instead of rt = a is assigned. These results show that the relative\nperformance of state counting compared to state-action counting depends highly on the selected\nhyperparameter settings. However, we notice that the best performance is achieved using state\ncounting with k = 256 and f = 0.2.\nTable 7: Performance comparison between state counting (left of the slash) and state-action counting\n(right of the slash) using TRPO-RAM-SimHash on Frostbite\nk 0.01 0.05 0.1 0.2 0.4 0.8 1.6\n64 879/976 2464/1491 2243/3954 2489/5523 1587/5985 1107/2052 441/742\n128 1475808 4248/4302 2801/4802 3239/7291 3621/4243 1543/1941 395/362\n256 2583/1584 4497/5402 4437/5431 7849/4872 3516/3175 2260/1238 374/96\nB\nk 0 O01 005 O1 02 04 O8 16\n- 397 - = - eee\n64 - 879 2464 2243 2489 1587 1107 441\n128 \u2014 1475 4248 2801 3239 3621 1543 395\n256 - 2583 4497 4437 7849 3516 2260 374\np\nk 0 O01 005 O1 02 04 O8 16\n- 397 - - - - - - =\n64 - 879 2464 2243 2489 1587 1107 441\n128 \u2014 1475 4248 2801 3239 3621 1543 395\n256 - 2583 4497 4437 7849 3516 2260 374"}] |
Sk8csP5ex | [{"section_index": "0", "section_name": "THE LOSS SURFACE OF RESIDUAL NETWORKS:\nENSEMBLES & THE ROLE OF BATCH NORMALIZATIOD", "section_text": "Etai Littwin & Lior Wolf"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Residual Networks (He et al.||2015) (ResNets) are neural networks with skip connections. Thes\nnetworks, which are a specific case of Highway Networks (Srivastava et al.| {2015), present stat\n\nof the art results in the most competitive computer vision tasks including image classification anc\nobject detection.\nOur analysis reveals the mechanism for this dynamic behavior and explains the driving force behind\nit. This mechanism remarkably takes place within the parameters of Batch Normalization\n\n2015), which is mostly considered as a normalization and a fine-grained whitening\n\nmechanism that addresses the problem of internal covariate shift and allows for faster learning rates.\nWe show that the scaling introduced by batch normalization determines the depth distribution in the\nvirtual ensemble of the ResNet. These scales dynamically grow as training progresses, shifting the\neffective ensemble distribution to bigger depths.\nThe main tool we employ in our analysis is spin glass models. |Choromanska et al. (2015a) have\ncreated a link between conventional networks and such models, which leads to a comprehensive\n\nstudy of the critical points of neural networks based on the spin glass analysis of |Auffinger et al.\n(2013). In our work, we generalize these results and link ResNets to generalized spin glass models.\nThese models allow us to analyze the dynamic behavior presented above. Finally, we apply the\nresults of Auffinger & Arous|(2013) in order to study the loss surface of ResNets."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep Residual Networks present a premium in performance in comparison to con-\nventional networks of the same depth and are trainable at extreme depths. It has\nrecently been shown that Residual Networks behave like ensembles of relatively\nshallow networks. We show that these ensembles are dynamic: while initially\nthe virtual ensemble is mostly at depths lower than half the network\u2019s depth, as\nraining progresses, it becomes deeper and deeper. The main mechanism that con-\nols the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch\nNormalization technique. We explain this behavior and demonstrate the driving\nforce behind it. As a main tool in our analysis, we employ generalized spin glass\nmodels, which we also use in order to study the number of critical points in the\nyptimization of Residual Networks.\nThe success of residual vee (Heart was BUTE to the ability to train very deep networks when\nemploying skip connections A complementary view is presented by [Veit et al.\n, who attribute it to eae power 4 Sot oles and present an unraveled view of ResNets that\ndepicts ResNets as an ensemble of networks that share weights, with a binomial depth distribution\naround half depth. They also present experimental evidence that short paths of lengths shorter than\nhalf-depth dominate the ResNet gradient during training.\nThe analysis presented here shows that ResNets are ensembles with a dynamic depth behavior.\nWhen starting the training process, the ensemble is dominated by shallow networks, with depths\nlower than half-depth. As training progresses, the effective depth of the ensemble increases. This\nincrease in depth allows the ResNet to increase its effective capacity as the network becomes more\nand more accurate."}, {"section_index": "3", "section_name": "2 A RECAP OF|CHOROMANSKA ET AL.|(2015", "section_text": "path are producing positive activations, and the product | |,,_, w;.\u2019 represents the specific weight\nDefinition 1. The mass of the network N is defined as ) = []?_, n.\nE4lY] = Yowell wi\n\ni=1 j=\nLhe(w) =E,[max(0,1-Y;Y)], L%-(w) = Ea[|\u00a52 \u2014 Y|]\nwhere Y,, is a random variable corresponding to the true label of sample x. In order to equate either\nloss with the hamiltonian of the p-spherical spin glass model, a few key approximations are made:\nA4 Spherical constraint - The following is assumed:\nThese assumptions are made for the sake of analysis, and do not necessarily hold. The validity of\nthese assumption was posed as an open problem in|Choromanska et al.|(2015b), where a different\ndegree of plausibility was assigned to each. Specifically, A1, as well as the independence assumption\nof A;;, were deemed unrealistic, and A2 - A4 as plausible. For example, Al does not hold since\neach input x; is associated with many different paths and 7;; = xj2 = ...tjy. See c horomanska\n\n2015a) for further justification of these approximations.\nWe briefly summarize |Choromanska et al.|(2015a), which connects the loss function of multilayer\n\nnetworks with the hamiltonian of the p spherical spin glass model, and state their main contributions\nand results. The notations of our paper are summarized in Appendix[A]and slightly differ from those\n\nin|Choromanska et al. (2015a).\nA simple feed forward fully connected network NV, with p layers and a single output unit is consid-\nered. Let n; be the number of units in layer 7, such that no is the dimension of the input, and n, = 1.\nIt is further assumed that the ReLU activation functions denoted by R() are used. The output Y of\nthe network given an input vector x \u20ac R\u00a2@ can be expressed as\nwhere the first summation is over the network inputs x 1...cqg, and the second is over all paths from\ninput to output. There are y = Th, n, such paths and Vi, xj) = 22 = ...%;,. The variable\nAijy \u20ac {0, 1} denotes whether the path is active, i.e., whether all of the ReLU units along this\npath are producing Positive activations, and the product Th. =] wh *) represents the specific weight\n\nconfiguration w} ij wk *; multiplying x; given path j. It is assumed throughout the paper that the input\nvariables are sampled i 1.i.d from a normal Gaussian distribution.\nThe variables A;; are modeled as independent Bernoulli random variables with a success probability\np, i.e., each path is equally likely to be active. Therefore,\nThe task of binary classification using the network V with parameters w is considered, using either\nthe hinge loss \u00a3\u201d, or the absolute loss \u00a34;:\n\\2 Redundancy in network parameterization - It is assumed the set of all the network weights\n[w1, W2...wy] contains only A unique weights such that A < N.\n\n\\3 Uniformity - It is assumed that all unique weights are close to being evenly distributed on the\ngraph of connections defining the network A\u2019. Practically, this means that we assume every\nnode is adjacent to an edge with any one of the A unique weights.\nA\n\nSow? =C\n\ni=l\nUnder A1\u2014A4, the loss takes the form of a centered Gaussian process on the sphere S$4~!(v\u2018A).\nSpecifically, it is shown to resemble the hamiltonian of the a spherical p-spin glass model given by:\nwhere x;,...;,, are independent normal Gaussian variables.\nIn[Auffinger et al.|(2013), the asymptotic complexity of spherical p spin glass model is analyzed\nbased on random matrix theory. In[Choromanska et al] (20752) these results are used in order to\nshed light on the optimization process of neural networks. For example, the asymptotic complexity\nof spherical spin glasses reveals a layered structure of low-index critical points near the global op-\ntimum. These findings are then given as a possible explanation to several central phenomena found\nin neural networks optimization, such as similar performance of large nets, and the improbability of\n\ngetting stuck in a \u201cbad\u201d local minima.\nAs part of our work, we follow a similar path. First, a link is formed between residual networks and\nthe hamiltonian of a general multi-interaction spherical spin glass model as given by:\nwhere \u20ac,...\u20ac, are positive constants. Then, using/Auffinger & Arous| , we obtain insights on\n\nresidual networks. The other part of our work studies the dynamic behavior of residual networks,\nwhere we relax the assumptions made for the spin glass model.\nWe begin by establishing a connection between the loss function of deep residual networks and the\nhamiltonian of the general spherical spin glass model. We consider a simple feed forward fully\nconnected network A\u2019, with ReLU activation functions and residual connections. For simplicity o!\nnotations without the loss of generality, we assume nj = ... = Ny\u00bb = n. no = das before. In ou\nResNet model, there exist p \u2014 1 identity connections skipping a single layer each, starting from the\nfirst hidden layer. The output of layer / > 1 is given by:\nNi(a) = R(W,' Nii (x)) + M1 (2)\nd\nye ly. ral oT wom\n\nr=1 i=1 j=l\nDefinition 2. The mass of a depth r subnetwork in N is defined as\nThe properties of redundancy in network parameters and their uniform distribution, as described in\nSec.|2} allow us to re-index Eq.|9\nwhere W denotes the weight matrix connecting layer | \u2014 1 with layer /. Notice that the first hidden\nlayer has no parallel skip connection, and so Ni (x) = R(W,' x). Without loss of generality, the\nscalar output of the network is the sum of the outputs of the output layer p and is expressed as\nwhere Ay \u20ac {0,1} denotes whether path 7 of length r is open, and Vj, j\u2019,7,7\u2019 x7. The\nresidual connections in NV imply that the output Y is now the sum of products of different \u2018haste\nindexed by r. Since our ResNet model attaches a skip connection to every layer except the first,\n1 <r <p. See Sec.[6]regarding models with less frequent skip connections.\nEach path of length r includes r \u2014 1 non-skip connections (those involving the first term in Eq.\nand not the second, identity term) out of layers | = 2..p. Therefore, 7, = (eo i)n . We define the\nfollowing measure on the network:\nLemma 1. Assuming assumptions A2 \u2014 A4 hold, and % \u20ac Z, then the output can be expressed\nafter reindexing as:\nIn order to connect ResNets to generalized spherical spin glass models, we denote the variables:\n_ j - Singin\nbinionin = oT i ie Bait = Ee. 5 5\n\nj= 1 12...tr\n\nie\n\nNIK\n\n\u00bb\nLemma 2. Assuming A2 \u2014 A3 hold, and 2 \u20ac N then V,.;,__;.. the following holds:\nThe independence assumption A1 was not assumed yet, and[I4]holds regardless. Assuming A4 and\ndenoting the scaled weights \u00ab; = 4w,, we can link the distribution of Y to the distribution on :\nThe following lemma gives a generalized expression for the binary and hinge losses of the network.\nwhere C1, C2 are positive constants that do not affect the optimization process\nNote that since the input variables :\npendent or not), then the set of variable:\n\nXa are sampled from a centered Gaussian distribution (de-\n\n| io...4,. are dependent normal Gaussian variables.\n1 wv, Wp\n\n5 (at)? SEG ia. nind SRE.\nWe approximate the expected output E,(Y) with Y by assuming the minimal value in|13|holds\nsuch that V,.i,..i, ElE? :5.;,] = 4(4\u00a3)?. This approximation holds exactly when A = n, since\nall weight configurations of a particular length in Eq. [10] will appear the same number of times.\nWhen A # n, the uniformity assumption dictates that each configuration of weights would appear\napproximately equally regardless of the inputs, and the expectation values would be very close to\n\nthe lower hound. The followjne exnreccion for Y ic thue ohtained:\nLy (x) = C1) + C2Y\nWe denote the important quantities\nTheorem 1. Assuming op EN, we have that:\n1 __6\nlim - arg max(e,) = i+B\npoco p r\nTheorem 2. For any a, < we. <Q, and assuming ayp, a2p, op EN, it holds that:\n2p\nThm. 2] implies that for deep residual networks, the contribution of weight products of order far\naway from the maximum weap is negligible. The loss is, therefor, similar in complexity to that of\n\nan ensemble of potentially shallow conventional nets. The next Lemma shows that we can shift the\neffective depth to any value by simply controlling C.\nLemma 4. For any integer 1 < k < p there exists a global scaling parameter C' such tha\narg max,,(e,(C)) = k.\nThe expression for the output of a residual net in Eq.}15|provides valuable insights into the machinery\nat work when optimizing such models. Thm. [IJand mply that the loss surface resembles that of an\nensemble of shallow nets (although not a real ensemble due to obvious dependencies), with various\ndepths concentrated in a narrow band. As noticed in|Veit et al.| ), viewing ResNets as ensembles\nof relatively shallow networks helps in explaining some of the apparent advantages of these models,\nparticularly the apparent ease of optimization of extremely deep models, since deep paths barely\naffect the overall loss of the network. However, this alone does not explain the increase in accuracy\nof deep residual nets over actual ensembles of standard networks. In order to explain the improved\nperformance of ResNets, we make the following claims:\nhas the form of a spin glass model, except for the dependency between the\nvariables Z;, i....;,. We later use an assumption similar to A1 of independence between these vari-\nables in order to link the two binary classification losses and the general spherical spin glass model.\nHowever, for the results in this section, this is not necessary.\nThe series (\u20ac,)?_, determines the weight of interactions of a specific length in the loss surface. No-\ntice that for constant depth p and large enough {, arg max,.(\u00a2,) = p. Therefore, for wide networks,\nwhere n and, therefore, ( are large, interactions of order p sane the loss surface, and the effect\nof the residual connections diminishes. Conversely, for constant ( and a large enough p (deep net-\nworks), we have that arg max,.(\u00a2,.) < p, and can expect interactions of order r < p to dominate the\nloss. The asymptotic behavior of \u20ac is captured by the following lemma:\n\\s the next theorem shows, the epsilons are concentrated in a narrow band near the maximal value\nA simple global scaling of the weights is, therefore, enough to change the loss surface, from an\n\nensemble of shallow conventional nets, to an ensemble of deep nets. This is illustrated in Fig. [IJa-c\nfor various values of 3. Ina common weight initialization scheme for neural networks, C = Sa (Orr\n\n& Miiller| 2003}|Glorot & Bengio}|2010). With this initialization and A = n, 3 = p and the maximal\nweight is obtained at less than half the network\u2019s depth limp_,.. arg max,.(\u20ac,) < b. Therefore, at\nthe initialization, the loss function is primarily influenced by interactions of considerably lower order\n\nthan the depth p. which facilitates easier optimization.\n1. The distribution of the depths of the networks within the ensemble is controlled by th\nscaling parameter C.\nfor the remainder of Sec.4, we relax all assumptions, and assume that at some point in_time\n\nx by _, w? = C?, and A = N. Using Eq.|9|for the output of the network Y in Lemma.|3| the\nloss can be expressed:\nd\nLy (x,w) =C, > S> SS APT [ue (*)\n\nr=1 i=l j=1 k=1\nIr\n\nOLn(z,w) _ C2 . (r) 4(r) (r)(h)\nae -ohrh Xi; Ay Ile!\n\nr=1 j=l k=1\nNotice that the addition of a multiplier r indicates that the derivative is increasingly influenced by\ndeeper networks."}, {"section_index": "4", "section_name": "4.1 BATCH NORMALIZATION", "section_text": "Batch normalization has shown to be a crucial factor in the successful training of deep residual\nnetworks. As we will show, batch normalization layers offer an easy starting condition for the\nnetwork, such that the gradients from early in the training process will originate from extremely\nshallow paths.\nWe consider a simple batch normalization procedure, which ignores the additive terms, has the out-\nput of each ReLU unit in layer / normalized by a factor o; and then is multiplied by some parameter\nA,. The output of layer / > 1 is therefore:\nNila) = SRW Ni-a(e)) +N (a)\nwhere a; is (e} mean of the estimated standard deviations of various elements in the vecto1\n\nRW M- 1(x)). Furthermore, a typical initialization of batch normalization parameters is to set\nVi, A= 1. In this case, providing that units in the same layer have equal variance o;, the recursive\nrelation EN} 41(x)7] = 1 + ELM (x)]] holds for any unit J in layer J. This, in turn, implies that the\noutput of the ReLU units should have increasing variance o as a function of depth. Multiplying the\nweight parameters in deep layers with an increasingly small scaling factor oe effectively reduces\nthe influence of deeper paths, so that extremely short paths will dominate the early stages of opti-\nmization. We next analyze how the weight scaling, as introduced by batch normalization, provides\na driving force for the effective ensemble to become deeper as training progresses.\nWe consider a simple network of depth p, with a single residual connection skipping p \u2014 m layers.\nWe further assume that batch normalization is applied at the output of each ReLU unit as described\nin Eq. [22] [22] We denote by 1)...1;, the | indices of layers that are not skipped by the residual connection,\n\nN\nand \\m Te,\n\ni=1 Gi,\n\n\u00bb Ap Pe iA . Since every path of length m is multiplied by Mins and every\n2. During training, C' changes and causes a shift of focus from a shallow ensemble to deeper\nand deeper ensembles, which leads to an additional capacity.\n\n3. In networks that employ batch normalization, C is directly embodied as the scale parameter\nX. The starting condition of A = 1 offers a good starting condition that involves extremely\nshallow nets.\nwhere C, C'2 are some constants that do not affect the optimization process. In order to gain addi-\ntional insight into this dynamic mechanism, we investigate the derivative of the loss with respect to\nthe scale parameter C\u2019. Using Eq.|9]for the output, we obtain:\n(c)\n\n(a)\n\n(d)\n\n(f)\n\n(e)\nFigure 1: (a) A histogram of e,(8), r = 1..p, for 8 = 0.1 and p = 100. (b) Same for 6 = 0.5\n(c) Same for 8 = 2. (d) Values (y-axis) of the batch normalization parameters ; (x-axis) for\n10 layers ResNet trained to discriminate between 50 multivariate Gaussians (see Appendix [C] for\nmore details). Higher plot lines indicate later stages of training. (e) The norm of the weights of a\nresidual network, which does not employ batch normalization, as a function of the iteration. (f) The\nasymptotic of the mean number of critical points of a finite index as a function of 6.\nd Ym\n\n(,w =n OT A T) wl 45 oyna! 2) ae) I~ ()\n\ni=l j=l k=1 i=1 j=l k=1\n= L(x, w) + Lp(x,u\nWe denote by V the derivative operator with respect to the parameters w, and the gradient g =\nVwLy(x,w) = gm + gp evaluated at point w.\nOL (x, w \u2014 Hg)\nX= Ko\n\nOn\n\n> |r|\nOLN (x, w \u2014 Hg)\n\nAL Eb OM\n\n> |Ar|\nrhm. [3] suggests that |;| will increase for layers / that do not have skip-connections. Conversely,\nf layer J has a parallel skip connection, then |.;| will increase if ||g,||2 > ||gm||2, where the later\n-ondition implies that shallow paths are nearing a local minima. Notice that an increase in |Aj\u00a21, ...1,,,|\n\n\u2018esults in an increase in pl, while \\m| remains unchanged, therefore shifting the balance into\nJeeper ensembles.\nThis steady increase of |A;|, as predicted in our theoretical analysis, is also backed in experimen-\ntal results, as depicted in Fig. {I{d). Note that the first layer, which cannot be skipped, behaves\ndifferently than the other layers. More experiments can be found in Appendix|C]\nd Ym r\n\nCy = OPS SI A\u201d TP we OSS AMA TL ao\nk=1\n\ni=1 j=l k=1 i=l j=l\n= Lp(a, w) + Ly(2, w)\nOLN (x, w \u2014 Lg ,\n19) 1.2 (mllgn 3 + vllgp 3 + (m +P); 9)\nThm. /4 indicates that if either ||g,||2 or ||gm||2 is dominant (for example, near local minimas of\nthe shallow network, or at the start of training), the scaling of the weights C\u2019 will increase. This\nexpansion will, in turn, emphasize the contribution of deeper paths over shallow paths, and in-\ncrease the overall capacity of the residual network. This dynamic behavior of the effective depth of\nresidual networks is of key importance in understanding the effectiveness of these models. While\noptimization starts off rather easily with gradients largely originating from shallow paths, the overall\nadvantage of depth is still maintained by the dvnamic increase of the effective depth.\nWe now present the results of|Auffinger & Arous|(2013) regarding the asymptotic complexity in the\n\ncase of lim,_,., of the multi-spherical spin glass model given by:\nA\n\nr ~ ~\nTi, i, Wig Wi,\n\n1\nau\u201d ul\n=v +u \u2014v\n\n2\n\nQa\nNote that for the single interaction spherical spin model a? = 0. The index of a critical point of\nHZ,,x is defined as the number of negative eigenvalues in the hessian V?He.a evaluated at the critical\npoint w.\nDefinition 4. For any 0 < k < Aandu \u00a9 R, we denote the random number Crt ,;,(u, \u20ac) as the\n\nnumber of critical points of the hamiltonian in the set BX = {AX|X \u20ac (\u2014o00, u)} with index k.\nTho\u00a2 toe\nCrtan(ue)= So {Hea \u20ac Au} 1 {i(V? Hea) =f}\n\nw:V He,a=0\nIt is worth noting that the mechanism for this dynamic property of residual networks can also be\nobserved without the use of batch normalization, as a steady increase in the L2 norm of the weights,\nas shown in Fig. {Tfe). In order to model this, consider the residual network as discussed above,\nwithout batch normalization layers. Recalling, ||w||2 = CVA,w = %, the loss of this network is\nexpressed as:\nwhere Jj; are independent centered standard Gaussian variables, and \u20ac = (\u20ac,),>2 are positive\nreal numbers such that re \u20ac,2\" < oo. A configuration w of the spin spherical spin-glass model\nis a vector in R\u201c satisfying the spherical constraint:\nFurthermore, define 6;(u, \u20ac) = lima_.oo x log E[Crta,,(ue)]. Corollary 1.1 of|Auffinger & Arou:\n(2013) states that for any k > 0:\nEq. [33]provides the asymptotic mean total number of critical points with non-diverging index k. It is\npresumed that the SGD algorithm will easily avoid critical points with a high index that have many\ndescent directions, and maneuver towards low index critical points. We, therefore, investigate how\nthe mean total number of low index critical points vary as the ensemble distribution embodied in\n(\u20ac,),s2 Changes its shape by a steady increase in (3.\nTheorem 5. For any k \u20ac N,p > 1, we denote the solution to the following constrained optimization\nnrohblemes:\nP\ne* = argmax0;(R,e) s.t e=l\n\u20ac rae\nThm. |5]implies that any heterogeneous mixture of spin glasses contains fewer critical points of a\nfinite index, than a mixture in which only p interactions are considered. Therefore, for any distribu-\ntion of \u20ac that is attainable during the training of a ResNet of depth p, the number of critical points is\nlower than the number of critical points for a conventional network of depth p."}, {"section_index": "5", "section_name": "6 DISCUSSION", "section_text": "In this work, we use spin glass analysis in order to understand the dynamic behavior ResNets dis-\nplay during training and to study their loss surface. In particular, we use at one point or another the\nassumptions of redundancy in network parameters, near uniform distribution of network weights, in-\ndependence between the inputs and the paths and independence between the different copies of the\n\ninput as described in{Choromanska et al.](2015a). The last two assumptions, i.e., the two indepen-\ndence assumptions, are deemed in|Choromanska et al.\n\nas unrealistic, while the remaining\nare considered plausible.\nOur analysis of critical points in ensembles (Sec. 5) requires all of the above assumptions. However\nThm. | and 2, as well as Lemma. 4, do not assume the last assumption, i.e., the independence\nbetween the different copies of the input. Moreover, the analysis of the dynamic behavior of residua\nnets (Sec. 4) does not assume any of the above assumptions.\nOur results are well aligned with some of the results shown in [Larsson et al.] , where it is\nnoted empirically that the deepest column trains last. This is reminiscent of our claim that the deeper\nnetworks of the ensemble become more prominent as training progresses. The authors of [Larsson\n6) hypothesize that this is a result of the shallower columns being stabilized at a certain\npoint of the training process. In our work, we discover the exact driving force that comes into play.\n[n addition, our work offers an insight into the mechanics of the recently proposed densely connectec\nnetworks 2016). Following the analysis we provide in Sec. 3, the additional shortcu\npaths decrease the initial capacity of the network by offering many more short paths from inpu'\n(0 output, thereby contributing to the ease of optimization when training starts. The driving force\nmechanism described in Sec. 4.2 will then cause the effective capacity of the network to increase.\nNote that the analysis presented in Sec. 3 can be generalized to architectures with arbitrary skip\nconnections, including dense nets. This is done directly by including all of the induced sub networks\nin Eq.[9] The reformulation of Eq.[T0|would still holds, given that W,. is modified accordingly.\n1 v!\n0,(R,\u20ac) = gloat) ral\nFig. [If shows that as the ensemble progresses towards deeper networks, the mean amount of low\nindex critical points increases, which might cause the SGD optimizer to get stuck in local minima.\nThis is, however, resolved by the the fact that by the time the ensemble becomes deep enough,\nthe loss function has already reached a point of low energy as shallower ensembles were more\ndominant earlier in the training. In the following theorem, we assume a finite ensemble such that\n\u00abjl T=P\n7 0, otherwise"}, {"section_index": "6", "section_name": "7 CONCLUSION", "section_text": "Ensembles are a powerful model for ResNets, which unravels some of the key questions that have\nsurrounded ResNets since their introduction. Here, we show that ResNets display a dynamic en-\nsemble behavior, which explains the ease of training such networks even at very large depths, while\nstill maintaining the advantage of depth. As far as we know, the dynamic behavior of the effective\ncapacity is unlike anything documented in the deep learning literature. Surprisingly, the dynamic\nmechanism typically takes place within the outer multiplicative factor of the batch normalization\nmodule."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Antonio Auffinger and Gerard Ben Arous. Complexity of random smooth functions on the high\ndimensional sphere. Annals of Probability, 41(6):42 14-4247, 11 2013.\nAnna Choromanska, Yann LeCun, and G\u00e9rard Ben Arous. Open problem: The landscape of the loss\nsurfaces of multilayer networks. In COLT, pp. 1756-1760, 2015b.\nXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural\nnetworks. In AJSTATS, 2010.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-\nnition. arXiv preprint arXiv:1512.03385, 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual\nnetworks. arXiv preprint arXiv: 1603.05027, 2016.\nGao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks.\narXiv preprint arXiv: 1608.06993, 2016.\nSergey loffe and Christian Szegedy. Batch normalization: Accelerating deep network training by\nreducing internal covariate shift. In JCML, pp. 448-456, 2015.\nGustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net-\nworks without residuals. arXiv preprint arXiv: 1605.07648, 2016.\nGenevieve B Orr and Klaus-Robert Miiller. Neural networks: tricks of the trade. Springer, 2003."}, {"section_index": "8", "section_name": "A SUMMARY OF NOTATIONS", "section_text": "Table[T]presents the various symbols used throughout this work and their meaning\nAnna Choromanska, Mikael Henaff, Micha\u00e9l Mathieu, G\u00e9rard Ben Arous, and Yann LeCun. The\nloss surfaces of multilayer networks. In AJSTATS, 2015a.\nRupesh Kumar Srivastava, Klaus Greff, and Jiirgen Schmidhuber. Highway networks. arXiv preprint\narXiv: 1505.00387, 2015.\nAndreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of\nrelatively shallow networks. In NIPS, 2016."}, {"section_index": "9", "section_name": "SYMBOL", "section_text": "The dimensionality of the input x\n\nThe output of layer 7 of network NV given input x\n\nThe final output of the network VV\n\nTrue label of input x\n\nLoss function of network VV\n\nHinge loss\n\nAbsolute loss\n\nThe depth of network VV\n\nWeights of the network w \u20ac R\u201c\n\nA positive scale factor such that ||w||2 = VAC\n\nScaled weights such that w = ow\n\nThe number of units in layers 1 > 0\n\nThe number of unique weights in the network\n\nThe total number of weights in the network VV\n\nThe weight matrix connecting layer / \u2014 1 to layer J in NV.\n\nThe hamiltonian of the p interaction spherical spin glass model.\n\nThe hamiltonian of the general spherical spin glass model.\n\nTotal number of paths from input to output in network VV\n\nyd\n\nTotal number of paths from input to output in network VV of length r\nyd\n\nReLU activation function\n\nBernoulli random variable associated with the ReLU activation functio\nParameter of the Bernoulli distribution associated with the ReLU unit\n\nmultiplier associated with paths of length r in NV.\npnC\n\nNormalization factor.\n\nBatch normalization multiplicative factor in layer |.\n\nThe mean of the estimated standard deviation various elements in (VV\nProof of Lemma{2| From|12| we have that \u20ac;, i,...i,. is defined as a sum of 2 Ar inputs. Since there are\nonly p distinct inputs, it holds that for each &;, i....;,. there exists a sequence @ = (ova \u20ac N such\nthat 4, a = Be and Ei: i9.cin = = 1, a;2;. We, therefore, have that E[\u00e9? 2 ini.) = llaull.\nNote that the minimum value of E[\u00e9? , is a solution to the following:\n\nby\nYr 7 yp\nmin(E[E, i,...i,]) = Mina (|lall2) 8. lalla = Ar? (aidiar EN,\nProof of Lemma{]| There are a total of w, paths of length r from input to output, and a total of\nA\u201d unique r length configurations of weights. The uniformity assumption then implies that each\n\nconfiguration of weights is repeated \u201c= times. By summing over the unique configurations, and re\nindexing the input we arrive at Eq\njim Zloa(( 2.) 3%\") = H(0) + atag(9)\nProof of Tm For brevity, we provide a sketch of the proof. It is enough to show that\naip\n\nlimy 50 yoo , \u20ac. = 0 for 8 < 1. Ignoring the constants in the binomial terms, we have:\nap ym P (py? pr\n\n2 2201p\nP \"6\n1P(atp) 4\n< im \u2014\u2014.___\n6; = lim\n\n5 2\n\u201c poo\npoo z\nhere 27 = S~?_, (?)\u201d6?\", which can be expressed using the Legendre polynomial of order p:\nProof of Lemma{A| For simplicity, we ignore the constants in the binomial coefficient, and assume\n\n\u20ac, = 4() 8\". Notice that for B= (b b)> we have that arg max,.(\u00a2,.(8*)) = p, arg max,.(e,\n1 and arg max,.(e,(1)) =\n\n. From the monotonicity and continuity of 6\", any value 1 > k > pcan\nbe attained. The linear dependency B(C) = ene completes the proof.\nOLn(2,w\u2014pg) _ WLy(x,w) OLN (x, w)\nON ~\u2014~on WNw\u2014 ay, 9\nOLy (x, w \u2014 wg. 1 , )\nDA w) \u00a9 0= 2>-(Gm + Gp)\" (Gm + Gp) = \u2014HZIl9m + Gpll3 <6\nOr vf 1\nIn(1 +2 3\nxz ligm + gpl) = Lal + 45\n|Ai|(1 + yp) 2 bl\nOLy (x, w = Hg)\nOr\n\n1 1 \u2018\n~0-4 )\" Gp \u201chy (9n9p + \\IGpll:\n\nui (9m + Gp\n2\n\n=(1-\n\nBPP \u2014 ae\n\n1+\n\n3?)\nC\u00bb(a, w)). Using taylor series expansion:\n\nOLy(z,w\u2014pg) _ OLn(x,w) OLN (x, w)\n: x V - 40\non an NN an, \u201c)\nSubstituting Vy XG) = x (Gm +p) in|40]we have:\nOLy (x, w \u2014 pg 1 1\n0 \u2014 pm + 9p)\" Om + Gp) =\u2014H [lm + 9pl3 <0 (AL)\nOr MI ay)\nAnd hence:\nALN (x, w \u2014 HG) 1\nMa Dy TE HPS lhgm + plo\n1\n=N(1+ 2? sy |l9m + Gplz) 42)\n7\nFinally:\n1 1\n(1 + 1\u00b0 S5llgm + Gplla)| = Ad + #55) = Pal (43)\nHT HT\n2. Since paths of length m skip layer /, we have that Vz Pex) = XIp- Therefore:\nOLy (x, w \u2014 ng) 1 1\nDy \u00a90\u2014 BY (Gm + Ge)\" 9p = HZ (GmGe + ligell) (44)\n\nThe condition ||gp||2 > ||gm||2 implies that g,\",gp + ||gp||3 > 0, completing the proof.\nOLy (a,w) VA 1 tT\nVw\u2014 aq 9 = (mL (a, w) + pLy(a, w))Vw oe? + Gilman +P9p)' g\nwig 1 1\n= (ML n(x, w) + pLp(x, w)) rors t GlMGm t Pv) 'g Gilman t PQ\u00bb) ' 9;\nJL N (az, W \u2014 HGw) 1\nYel =0- HG (MI t PG\u00bb) (Gm + Jp)\n\n1\nHe (mllgp|l3 + Pllgp|la + (m+ P)gp Gm\nelV\"e) e'(V\" \u2014V'e\ne!V'e el (V\" + Vie\n\n1\nOx(R, \u20ac) = Slog(\n1 el Ve el (V\" Ve\nmare, (R,\u20ac) < mare(slog( rr, )) - mine OT Ve)\n\n2\n\nploa(p \u2014 1) ~ (= =) = 6x(Re*)"}, {"section_index": "10", "section_name": "\u2019 ADDITIONAL EXPERIMENTS", "section_text": "Fig. 1(d) and 1(e) report the experimental results of a straightforward setting, in which the task i\nto classify a mixture of 10 multivariate Gaussians in 50D. The input is therefore of size 50. Th\nloss employed is the cross entropy loss of ten classes. The network has 10 blocks, each containin;\n20 hidden neurons, a batch normalization layer, and a skip connection. Training was performed o1\n10,000 samples, using SGD with minibatches of 50 samples.\nNext, we provide additional experiments performed on the public CIFAR-10 and CIFAR-100 dat\n\nsets (Krizhevsky||2009). The public ResNet code of|https://github.com/facebook/fb.\n\nresnet.torchiis used for networks of depth 32.\nAs noted in Sec. 4.2, the dynamic behavior can be present in the Batch Normalization multiplica:\ntive coefficient or in the weight matrices themselves. In the following experiments, it seems tha\nis orthogonal to the weights. We have that den (at) =a L(mLm(a,w) + pLlp(x, w)). Using taylor\nseries expansion we have:\n\nOLn (a, w \u2014 wg) w CEn (a, w)\noC oC\n\nOLy (x, w)\noc\n\nUV w (45)\n\nFor the last term we have:\n\nAL (0.2) 9 _ VK\n\n1\nVw = (MLin(x, w) + pLp(x, w)) Vw 9 + (mgm + pgp) 'g\n|wll2\" Cc\n\noc\nwl 1 1\n= (ML m(0, 8) + PL yl, w)) P+ BlMGm + PG)\" I= GlMIn + PG)\" 9 (46)\nwhere the last step stems from the fact that w'g = 0. Substituting V., dex tow) = Fal (mgm +PGp)\n\nin[45]we have:\n\nALy(e,w=HGw) 9 pa. (a, 4\naC \u00a9 0\u2014 EG (mgm + PIP)! (Gm + Gp)\n\n1\nHe (mllgpl3 + Pllgplla + (m+ P)gp Im) (AT)\n\nProof of Thm{5| Inserting Eq. [31]into Eq.[33|we have that:\n\n1 P_,e\u20acr(r\u20141) Py er(r \u2014 2)\nO4(R, \u20ac) = slog \"Sp\" ) \u2014 \"Spa (48)\nr=2 Gl r=2 &\nWe denote the matrices V\u2019 and V\" such that Vj; = r6;; and V{i = r(r \u2014 1)d;;. We then have:\n1 elV\"e. el (V\"-V'e\nO.(R sl \u2014 4\nw(R,\u20ac) 2 og TV? el(V\" + Ve )\n\nelV\"e) . (ev \u2014V'e\ne'Vle mine el(V\" FV)\n\n1\n= Flog (macs(Vivii*)) \u2014 mins ( (48 = Van(vel + Vay\")\n\n1\nmaze; (R, \u20ac) < maxe(5log(\n\nslog(p 1) ~~ =) =A(Re*) 50)\n\nnm A writ natant orwrenrnmrrama\nOLN (x, w)\n\nOLy(z,w\u2014 pg) _ OLn(x,w)\nYa S9G Hw\n\nOC\nP\nr=2\n\ne2r(r \u2014 2)\n\nP\n\ne2r2\n\nr=\u20142 &p\nFig. 2 depicts the results. There are two types of plots: Fig. Pfa.c) presents for CIFAR-10 anc\nCIFAR-100 respectively the magnitude of the various convolutional layers for multiple epochs (sim:\nilar in type to Fig. 1(d) in the paper). Fig. 2[b.d) depict for the two datasets the mean of these norm:\nover all convolutional layers as a function of epoch (similar to Fig. 1(e)).\nAs can be seen, the dynamic phenomenon we describe is very prominent in the public ResNet\nimplementation when applied to these conventional datasets: the dominance of paths with fewer\nskip connections increases over time. Moreover, once the learning rate is reduced in epoch 81 the\nphenomenon we describe speeds up.\nIn Fig. [3] we present the multiplicative coefficient of the Batch Normalization when not absorbed.\nAs future work, we would like to better understand why these coefficients start to decrease once the\nlearning rate is reduced. As shown above, taking the magnitude of the convolutions into account.\nthe dynamic phenomenon we study becomes even more prominent at this point. The change of\nlocation from the multiplicative coefficient of the Batch Normalization layers to the convolutions\nthemselves might indicate that Batch Normalization is no longer required at this point. Indeed.\nBatch Normalization enables larger training rates and this shift happens exactly when the training\nrate is reduced. A complete analysis is left for future work.\nuntil the learning rate is reduced, the dynamic behavior is manifested in the Batch Normaliza-\n\u2018ion multiplicative coefficients and then it moves to the convolution layers themselves. We there-\nfore absorb the BN coefficients into the convolutional layer using the public code of\n//github.com/e-lab/torch-toolbox/tree/master/BN- absorber} Note that the\nmultiplicative coefficient of Batch Normalization is typically refereed to as y. However, throughout\nour paper, since we follow the notation of |Choromanska et al. (2015p, 7 refers to the number of\npaths. The multiplicative factor of Batch normalization appears as \\ in Sec. 4.\n\u2018mean weight nom igamma is absorbed)\n\n(b)\n\nMean norm of convolution layers a8 a function of epoch for cifarso0\n\n\u2018mean weight nom igamma is absorbed)\nA A i a A\n\u201cigure 2: (a,c) The Norm of the convolutional layers once the factors of the subsequent Batch\nNormalization layers are absorbed, shown for CIFAR-10 and CIFAR-100 respectively. Each graph\ns a different epoch, see legend. Waving is due to the interleaving architecture of the convolutional\nayers. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the convolutional\nayers\u2019 weights per epoch.\nFigure 3: The norms of the multiplicative Batch Normalization coefficient vectors. (a,c) The Norn\nof the coefficients, shown for CIFAR-10 and CIFAR-100 respectively. Each graph is a differen\nepoch (see legend). Since there is no monotonic increase between the epochs in this graph, it is\nharder to interpret. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the\nmultiplicative factors per epoch.\nBatch Normalization gamma per layer for multiple epochs for cifario\n\n\u2018Mean norm of Batch Normalization gamma vectors as a function of epoch for cifario\n\nMean norm of Batch Normalization gamma vectors aa function of epoch for eifars0o\n\n(a)\n\nBatch Normalization gamma per layer for multiple epochs for cifarso0\n\n(c)"}] |
BJxhLAuxg | "[{\"section_index\": \"0\", \"section_name\": \"A DEEP LEARNING APPROACH FOR JOINT VIDEC\\nFRAME AN(...TRUNCATED) |
BJbD_Pqlg | "[{\"section_index\": \"0\", \"section_name\": \"HUMAN PERCEPTION IN COMPUTER VISION /\\nCONFERENCE (...TRUNCATED) |
HJ0NvFzxl | "[{\"section_index\": \"0\", \"section_name\": \"LEARNING GRAPHICAL STATE TRANSITIONS\", \"section_t(...TRUNCATED) |
S1Bb3D5gg | "[{\"section_index\": \"0\", \"section_name\": \"LEARNING END-TO-END GOAL-ORIENTED DIALOG\", \"secti(...TRUNCATED) |
r10FA8Kxg | "[{\"section_index\": \"0\", \"section_name\": \"1 INTRODUCTION\", \"section_text\": \"Cybenko prove(...TRUNCATED) |
Hk85q85ee | "[{\"section_index\": \"0\", \"section_name\": \"1 INTRODUCTION\", \"section_text\": \"In this paper(...TRUNCATED) |
Skvgqgqxe | "[{\"section_index\": \"0\", \"section_name\": \"LEARNING TO COMPOSE WORDS INTO SENTENCES\\nWITH REI(...TRUNCATED) |
End of preview. Expand
in Dataset Viewer.
README.md exists but content is empty.
- Downloads last month
- 99