{ "paper_id": "N15-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:34:29.879969Z" }, "title": "When and why are log-linear models self-normalizing?", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Berkeley" } }, "email": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Berkeley" } }, "email": "klein@cs.berkeley.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Several techniques have recently been proposed for training \"self-normalized\" discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expect self-normalization to work. We characterize a general class of distributions that admit self-normalization, and prove generalization bounds for procedures that minimize empirical normalizer variance. Motivated by these results, we describe a novel variant of an established procedure for training self-normalized models. The new procedure avoids computing normalizers for most training examples, and decreases training time by as much as factor of ten while preserving model quality.", "pdf_parse": { "paper_id": "N15-1027", "_pdf_hash": "", "abstract": [ { "text": "Several techniques have recently been proposed for training \"self-normalized\" discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expect self-normalization to work. We characterize a general class of distributions that admit self-normalization, and prove generalization bounds for procedures that minimize empirical normalizer variance. Motivated by these results, we describe a novel variant of an established procedure for training self-normalized models. The new procedure avoids computing normalizers for most training examples, and decreases training time by as much as factor of ten while preserving model quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper investigates the theoretical properties of log-linear models trained to make their unnormalized scores approximately sum to one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent years have seen a resurgence of interest in log-linear approaches to language modeling. This includes both conventional log-linear models (Rosenfeld, 1994; Biadsy et al., 2014) and neural networks with a log-linear output layer (Bengio et al., 2006) . On a variety of tasks, these LMs have produced substantial gains over conventional generative models based on counting n-grams. Successes include machine translation (Devlin et al., 2014) and speech recognition (Graves et al., 2013) . However, log-linear LMs come at a significant cost for computational efficiency. 
In order to output a well-formed probability distribution over words, such models must typically calculate a normalizing constant whose computational cost grows linearly in the size of the vocabulary.", "cite_spans": [ { "start": 145, "end": 162, "text": "(Rosenfeld, 1994;", "ref_id": "BIBREF8" }, { "start": 163, "end": 183, "text": "Biadsy et al., 2014)", "ref_id": "BIBREF2" }, { "start": 235, "end": 256, "text": "(Bengio et al., 2006)", "ref_id": "BIBREF1" }, { "start": 425, "end": 446, "text": "(Devlin et al., 2014)", "ref_id": "BIBREF3" }, { "start": 470, "end": 491, "text": "(Graves et al., 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fortunately, many applications of LMs remain well-behaved even if LM scores do not actually correspond to probability distributions. For example, if a machine translation decoder uses output from a pre-trained LM as a feature inside a larger model, it suffices to have all output scores on approximately the same scale, even if these do not sum to one for every LM context. There has thus been considerable research interest around training procedures capable of ensuring that unnormalized outputs for every context are \"close\" to a probability distribution. We are aware of at least two such techniques: noisecontrastive estimation (NCE) (Vaswani et al., 2013; Gutmann and Hyv\u00e4rinen, 2010) and explicit penalization of the log-normalizer (Devlin et al., 2014) . Both approaches have advantages and disadvantages. NCE allows fast training by dispensing with the need to ever compute a normalizer. Explicit penalization requires full normalizers to be computed during training but parameterizes the relative importance of the likelihood and the \"sum-to-one\" constraint, allowing system designers to tune the objective for optimal performance.", "cite_spans": [ { "start": 639, "end": 661, "text": "(Vaswani et al., 2013;", "ref_id": "BIBREF9" }, { "start": 662, "end": 690, "text": "Gutmann and Hyv\u00e4rinen, 2010)", "ref_id": "BIBREF6" }, { "start": 739, "end": 760, "text": "(Devlin et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While both NCE and explicit penalization are observed to work in practice, their theoretical properties have not been investigated. It is a classical result that empirical minimization of classification error yields models whose predictions generalize well. This paper instead investigates a notion of normalization error, and attempts to understand the conditions under which unnormalized model scores are a reliable surrogate for probabilities. While language modeling serves as a motivation and running example, our results apply to any log-linear model, and may be of general use for efficient classification and decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goals are twofold: primarily, to provide intuition about how self-normalization works, and why it behaves as observed; secondarily, to back these intuitions with formal guarantees, both about classes of normalizable distributions and parameter estimation procedures. The paper is built around two questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "When can self-normalization work-for which distributions do good parameter settings exist? 
And why should self-normalization work-how does variance of the normalizer on held-out data relate to variance of the normalizer during training? Analysis of these questions suggests an improvement to the training procedure described by Devlin et al., and we conclude with empirical results demonstrating that our new procedure can reduce training time for self-normalized models by an order of magnitude.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consider a log-linear model of the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y|x; \u03b8) = exp{\u03b8 y x} y exp{\u03b8 y x}", "eq_num": "(1)" } ], "section": "Preliminaries", "sec_num": "2" }, { "text": "We can think of this as a function from a context x to a probability distribution over decisions y i , where each decision is parameterized by a weight vector \u03b8 y . 1 For concreteness, consider a language modeling problem in which we are trying to predict the next word after the context the ostrich. Here x is a vector of features on the context (e.g. x = {1-2=the ostrich, 1=the, 2=ostrich, . . . }), and y ranges over the full vocabulary (e.g. y 1 = the, y 2 = runs, . . . ). Our analysis will focus on the standard log-linear case, though later in the paper we will also relate these results to neural networks. We are specifically concerned with the behavior of the normalizer or partition function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Z(x; \u03b8) def = y exp{\u03b8 y x}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "(2) and in particular with choices of \u03b8 for which Z(x; \u03b8) \u2248 1 for most x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "To formalize the questions in the title of this paper, we introduce the following definitions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Definition 1. A log-linear model p(y|x, \u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "is normalized with respect to a set X if for every x \u2208 X , Z(x; \u03b8) = 1. In this case we call X normalizable and \u03b8 normalizing. Now we can state our questions precisely: What distributions are normalizable? Given data points from a normalizable X , how do we find a normalizing \u03b8?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "In sections 3 and 4, we do not analyze whether the setting of \u03b8 corresponds to a good classifier-only a good normalizer. In practice we require both good normalization and good classification; in section 5 we provide empirical evidence that both are achievable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Some notation: Weight vectors \u03b8 (and feature vectors x) are d-dimensional. There are k output classes, so the total number of parameters in \u03b8 is kd. 
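To make Equations 1 and 2 concrete, the following sketch (NumPy; the dimensions, weights, and context are hypothetical, not taken from the paper) computes the label distribution and the log-normalizer for a single context. Self-normalization asks for parameters under which log Z(x; θ) is close to zero for typical x, so that the unnormalized scores can stand in for probabilities.

```python
import numpy as np

# Hypothetical sizes: d context features, k output classes (e.g. vocabulary words).
d, k = 5, 4
rng = np.random.default_rng(0)
theta = rng.normal(scale=0.1, size=(k, d))   # one weight vector theta_y per class
x = rng.normal(size=d)                       # feature vector for one context

scores = theta @ x                           # theta_y . x for every y
Z = np.exp(scores).sum()                     # Equation 2: partition function Z(x; theta)
p = np.exp(scores) / Z                       # Equation 1: normalized distribution p(y | x; theta)

print("log Z(x; theta) =", np.log(Z))        # self-normalization wants this near 0
print("sum of p(y | x)  =", p.sum())         # exactly 1 once we divide by Z
```

Computing Z requires a sum over all k classes (the full vocabulary in the language-modeling case), which is exactly the per-query cost a self-normalized model avoids.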
|| \u2022 || p is the p vector norm, and || \u2022 || \u221e specifically is the max norm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "3 When should self-normalization work?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "In this section, we characterize a large class of datasets (i.e. distributions p(y|x)) that are normalizable either exactly, or approximately in terms of their marginal distribution over contexts p(x). We begin by noting simple features of Equation 2: it is convex in x, so in particular its level sets enclose convex regions, and are manifolds of lower dimension than the embedding space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "As our definition of normalizability requires the existence of a normalizing \u03b8, it makes sense to begin by fixing \u03b8 and considering contexts x for which it is normalizing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Observation. Solutions x to Z(x; \u03b8) = 1, if any exist, lie on the boundary of a convex region in R d . This follows immediately from the definition of a convex function, but provides a concrete example of a set for which \u03b8 is normalizing: the solution set of Z(x; \u03b8) = 1 has a simple geometric interpretation as a particular kind of smooth surface. An example is depicted in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 375, "end": 383, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "We cannot expect real datasets to be this well behaved, so seems reasonable to ask whether \"goodenough\" self-normalization is possible for datasets (i.e. distributions p(x)) which are only close to some exactly normalizable distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Definition 2. A context distribution p(x) is D-close to a set X if E p inf x * \u2208X ||X \u2212 x * || \u221e = D (3) Definition 3. A context distribution p(x) is \u03b5- approximately normalizable if E p | log Z(X; \u03b8)| \u2264 \u03b5. Theorem 1. Suppose p(x) is D-close to {x : Z(x; \u03b8) = 1}, and each ||\u03b8 i || \u221e \u2264 B. Then p(x) is dBD-approximately normalizable. Proof sketch. 2 Represent each X as X * + X \u2212 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "where X * solves the optimization problem in Equation 3. Then it is possible to bound the normalizer by log exp {\u03b8 X \u2212 }, where\u03b8 maximizes the magnitude of the inner product with X \u2212 over \u03b8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "In keeping with intuition, data distributions that are close to normalizable sets are themselves approximately normalizable on the same scale. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "So far we have given a picture of what approximately normalizable distributions look like, but nothing about how to find normalizing \u03b8 from training data in practice. In this section we prove that any procedure that causes training contexts to approximately normalize will also have log-normalizers close to zero in unseen contexts. 
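Before turning to generalization, Theorem 1 can be checked numerically on the two-dimensional example of Figure 1. The sketch below is our own illustration (the perturbation radius D is arbitrary): it finds points on the level set Z(x; θ) = 1 by bisection along a few rays, perturbs each by at most D in the ∞-norm, and verifies that |log Z| stays within the dBD bound.

```python
import numpy as np

theta = np.array([[-1.0,  1.0],
                  [-1.0, -2.0]])             # the two weight vectors from Figure 1
d = theta.shape[1]
B = np.abs(theta).max()                      # bound on each ||theta_i||_inf (here B = 2)

def log_Z(x):
    return np.log(np.exp(theta @ x).sum())

def point_on_level_set(direction, lo=0.0, hi=10.0, iters=60):
    # Z is convex and decreases along these rays, so bisection locates Z = 1.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if log_Z(mid * direction) > 0 else (lo, mid)
    return 0.5 * (lo + hi) * direction

rng = np.random.default_rng(0)
D = 0.05                                     # hypothetical distance to the level set
for ray in ([1.0, 0.0], [1.0, 0.5], [1.0, -0.3]):
    x_star = point_on_level_set(np.array(ray))
    x = x_star + rng.uniform(-D, D, size=d)  # a context D-close to the normalizable set
    lz = abs(log_Z(x))
    assert lz <= d * B * D                   # Theorem 1's bound: |log Z| <= dBD
    print(f"|log Z(x)| = {lz:.4f}  <=  dBD = {d * B * D:.4f}")
```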
As noted in the introduction, this does not follow immediately from corresponding results for classification with log-linear models. While the two problems are related (it would be quite surprising to have uniform convergence for classification but not normalization), we nonetheless have a Theorem 2. Consider a sample (X 1 , X 2 , . . . ), with all ||X|| \u221e \u2264 R, and \u03b8 with each", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why should self-normalization work?", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "||\u03b8 i || \u221e \u2264 B. Ad- ditionally defineL = 1 n i | log Z(X i )| and L = E| log Z(X)|. Then with probability 1 \u2212 \u03b4, |L \u2212 L| \u2264 2 dk(log dBR + log n) + log 1 \u03b4 2n + 2 n", "eq_num": "(4)" } ], "section": "Why should self-normalization work?", "sec_num": "4" }, { "text": "Proof sketch. Empirical process theory provides standard bounds of the form of Equation 4 (Kakade, 2011) in terms of the size of a cover of the function class under consideration (here Z(\u2022; \u03b8)). In particular, given some \u03b1, we must construct a finite set of Z(\u2022; \u03b8) such that some\u1e90 is everywhere a distance of at most \u03b1 from every Z. To provide this cover, it suffices to provide a cover\u03b8 for \u03b8. If the\u03b8 are spaced at intervals of length D, the size of the cover is (B/D) kd , from which the given bound follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why should self-normalization work?", "sec_num": "4" }, { "text": "This result applies uniformly across choices of \u03b8 regardless of the training procedure used-in particular, \u03b8 can be found with NCE, explicit penalization, or the variant described in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why should self-normalization work?", "sec_num": "4" }, { "text": "As hoped, sample complexity grows as the number of features, and not the number of contexts. In particular, skip-gram models that treat context words independently will have sample efficiency multiplicative, rather than exponential, in the size of the conditioning context. Moreover, if some features are correlated (so that data points lie in a subspace smaller than d dimensions), similar techniques can be used to prove that sample requirements depend only on this effective dimension, and not the true feature vector size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why should self-normalization work?", "sec_num": "4" }, { "text": "We emphasize again that this result says nothing about the quality of the self-normalized model (e.g. the likelihood it assigns to held-out data). We defer a theoretical treatment of that question to future work. 
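The gap that Theorem 2 controls is easy to estimate empirically. The sketch below (synthetic contexts, arbitrary fixed parameters, hypothetical sizes; an illustration, not a proof) computes the empirical normalization risk L̂ = (1/n) Σ_i |log Z(X_i)| on one sample and the same average on a fresh sample from the identical distribution; the two agree closely, which is why good normalization on training contexts carries over to unseen ones.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 50, 5000                      # hypothetical feature dim, classes, sample size

# Arbitrary fixed parameters; Theorem 2 makes no assumption about how theta was obtained.
theta = rng.normal(scale=0.05, size=(k, d))
theta[:, 0] -= np.log(k)                    # shift the bias weights so log Z starts near 0

def sample_contexts(num):
    X = rng.normal(size=(num, d))
    X[:, 0] = 1.0                           # constant bias feature
    return X

def log_Z(X):
    S = X @ theta.T                         # scores theta_y . x for every context and class
    m = S.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(S - m).sum(axis=1, keepdims=True))).ravel()

train, heldout = sample_contexts(n), sample_contexts(n)
L_hat = np.abs(log_Z(train)).mean()         # empirical normalization risk on the sample
L_new = np.abs(log_Z(heldout)).mean()       # the same risk on unseen contexts
print(f"train risk = {L_hat:.4f}   held-out risk = {L_new:.4f}")
```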
In the following section, however, we provide experimental evidence that self-normalization does not significantly degrade model quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why should self-normalization work?", "sec_num": "4" }, { "text": "As noted in the introduction, previous approaches to learning approximately self-normalizing distributions have either relied on explicitly computing the normalizer for each training example, or at least keeping track of an estimate of the normalizer for each training example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "Our results here suggest that it should be possible to obtain approximate self-normalizing behavior without any representation of the normalizer on some training examples-as long as a sufficiently large fraction of training examples are normalized, then we have some guarantee that with high probability the normalizer will be close to one on the remaining training examples as well. Thus an unnormalized likelihood objective, coupled with a penalty term that looks at only a small number of normalizers, might nonetheless produce a good model. This suggests the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "l(\u03b8) = i \u03b8 y i x i + \u03b1 \u03b3 h\u2208H (log Z(x h ; \u03b8)) 2 (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "where the parameter \u03b1 controls the relative importance of the self-normalizing constraint, H is the set of indices to which the constraint should be applied, and \u03b3 controls the size of H, with |H| = n\u03b3 . Unlike the objective used by Devlin et al. (2014) most examples are never normalized during training. Our approach combines the best properties of the two techniques for self-normalization previously discussed: like NCE, it does not require computation of the normalizer on all training examples, but like explicit penalization it allows fine-grained control over the tradeoff between the likelihood and the quality of the approximation to the normalizer. We evaluate the usefulness of this objective with a set of small language modeling experiments. We train a log-linear LM with features similar to Biadsy et al. (2014) on a small prefix of the Europarl corpus of approximately 10M words. 4 We optimize the objective in Equation 5 using Adagrad (Duchi et al., 2011) . The normalized set H is chosen randomly for each new minibatch. We evaluate using two metrics: BLEU on a downstream machine translation task, and normalization risk R, the average magnitude of the log-normalizer on held-out data. We measure the response of our training to changes in \u03b3 and \u03b1. Results are shown in Table 1 and Table 2. Normalized fraction (\u03b3) 0 0.001 0.01 0.1 1 R train 22.0 1.7 1.5 1.5 1.5 R test 21.6 1.7 1.5 1.5 1.5 BLEU 1.5 19.1 19.2 20.0 20.0 Table 1 : Result of varying normalized fraction \u03b3, with \u03b1 = 1. When no normalization is applied, the model's behavior is pathological, but when normalizing only a small fraction of the training set, performance on the downstream translation task remains good.", "cite_spans": [ { "start": 233, "end": 253, "text": "Devlin et al. 
(2014)", "ref_id": "BIBREF3" }, { "start": 896, "end": 897, "text": "4", "ref_id": null }, { "start": 952, "end": 972, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 1289, "end": 1309, "text": "Table 1 and Table 2.", "ref_id": null }, { "start": 1439, "end": 1446, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "Normalization strength (\u03b1) \u03b1 0.01 0.1 1 10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "R train 20.4 9.7 1.5 0.5 R test 20.1 9.7 1.5 0.5 BLEU 1.5 2.6 20.0 16.9 Table 2 : Result of varying normalization parameter \u03b1, with \u03b3 = 0.1. Normalization either too weak or too strong results in poor performance on the translation task, emphasizing the importance of training procedures with a tunable normalization parameter. Table 1 shows that with small enough \u03b1, normalization risk grows quite large. Table 2 shows that forcing the risk closer to zero is not necessarily desirable for a downstream machine translation task. As can be seen, no noticeable performance penalty is incurred when normalizing only a tenth of the training set. Performance gains are considerable: setting \u03b3 = 0.1, we observe a roughly tenfold speedup over \u03b3 = 1.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 2", "ref_id": null }, { "start": 328, "end": 335, "text": "Table 1", "ref_id": null }, { "start": 406, "end": 413, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "On this corpus, the original training procedure of Devlin et al. with \u03b1 = 0.1 gives a BLEU score of 20.1 and R test of 2.7. Training time is equivalent to choosing \u03b3 = 1, and larger values of \u03b1 result in decreased BLEU, while smaller values result in significantly increased normalizer risk. Thus we see that we can achieve smaller normalizer variance and an order-of-magnitude decrease in training time with a loss of only 0.1 BLEU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "comes from deeper networks. All of the proof techniques used in this paper can be combined straightforwardly with existing tools for covering the output spaces of neural networks (Anthony and Bartlett, 2009) . If optimization of the self-normalizing portion of the objective is deferred to a post-processing step after standard (likelihood) training, and restricted to parameters in the output layers, then Theorem 2 applies exactly.", "cite_spans": [ { "start": 179, "end": 207, "text": "(Anthony and Bartlett, 2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "We have provided both qualitative and formal characterizations of \"self-normalizing\" log-linear models, including what we believe to be the first theoretical guarantees for self-normalizing training procedures. Motivated by these results, we have described a novel objective for training self-normalized log-linear models, and demonstrated that this objective achieves significant performance improvements without a decrease in the quality of the models learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Proof of Theorem 1. 
Using the definitions of X * , X \u2212 and\u03b8 given in the proof sketch for Theorem 1, ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Quality of the approximation", "sec_num": null }, { "text": "E| log( exp{\u03b8 i X})| = E| log( exp{\u03b8 i (X * + X \u2212 )})| \u2264 E| log(exp{\u03b8 X \u2212 } exp{\u03b8 i X * })| \u2264 E| log(exp{\u03b8 X \u2212 })| \u2264 dDB B Generalization error", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Quality of the approximation", "sec_num": null }, { "text": "An alternative, equivalent formulation has a single weight vector and a feature function from contexts and decisions onto feature vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Full proofs of all results may be found in the Appendix. 3 Here (and throughout) it is straightforward to replace quantities of the form dB with B by working in 2 instead of \u221e. different function class and a different loss, and need new analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This prefix was chosen to give the fully-normalized model time to finish training, allowing a complete comparison. Due to the limited LM training data, these translation results are far from state-of-the-art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Relation to neural networksOur discussion has focused on log-linear models. While these can be thought of as a class of singlelayer neural networks, in practice much of the demand for fast training and querying of log-linear LMs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank Peter Bartlett, Robert Nishihara and Maxim Rabinovich for useful discussions. This work was partially supported by BBN under DARPA contract HR0011-12-C-0014. The first author is supported by a National Science Foundation Graduate Fellowship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural network learning: theoretical foundations", "authors": [ { "first": "Martin", "middle": [], "last": "Anthony", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Bartlett", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Anthony and Peter Bartlett. 2009. Neural net- work learning: theoretical foundations. Cambridge University Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural probabilistic language models", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Jean-S\u00e9bastien", "middle": [], "last": "Sen\u00e9cal", "suffix": "" }, { "first": "Fr\u00e9deric", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Jean-Luc", "middle": [], "last": "Gauvain", "suffix": "" } ], "year": 2006, "venue": "Innovations in Machine Learning", "volume": "", "issue": "", "pages": "137--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Holger Schwenk, Jean-S\u00e9bastien Sen\u00e9cal, Fr\u00e9deric Morin, and Jean-Luc Gauvain. 2006. Neu- ral probabilistic language models. In Innovations in Machine Learning, pages 137-186. 
Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Backoff inspired features for maximum entropy language models", "authors": [ { "first": "Fadi", "middle": [], "last": "Biadsy", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Moreno", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fadi Biadsy, Keith Hall, Pedro Moreno, and Brian Roark. 2014. Backoff inspired features for maximum entropy language models. In Proceedings of the Conference of the International Speech Communication Association.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Fast and robust neural network joint models for statistical machine translation", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Rabih", "middle": [], "last": "Zbib", "suffix": "" }, { "first": "Zhongqiang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lamar", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statisti- cal machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguis- tics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Hybrid speech recognition with deep bidirectional LSTM", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Abdel-Rahman", "middle": [], "last": "Mo", "suffix": "" } ], "year": 2013, "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "273--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Navdeep Jaitly, and Abdel-rahman Mo- hamed. 2013. Hybrid speech recognition with deep bidirectional LSTM. 
In IEEE Workshop on Automatic Speech Recognition and Understanding, pages 273- 278.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Noisecontrastive estimation: A new estimation principle for unnormalized statistical models", "authors": [ { "first": "Michael", "middle": [], "last": "Gutmann", "suffix": "" }, { "first": "Aapo", "middle": [], "last": "Hyv\u00e4rinen", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "297--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gutmann and Aapo Hyv\u00e4rinen. 2010. Noise- contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pages 297-304.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Uniform and empirical covering numbers", "authors": [ { "first": "", "middle": [], "last": "Sham Kakade", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sham Kakade. 2011. Uniform and empirical cov- ering numbers. http://stat.wharton.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adaptive statistical language modeling: a maximum entropy approach", "authors": [ { "first": "Ronald", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald Rosenfeld. 1994. Adaptive statistical language modeling: a maximum entropy approach. Ph.D. thesis.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Decoding with large-scale neural language models improves translation", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Yinggong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Fossum", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with large-scale neural language models improves translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "A normalizable set, the solutions [x, y] to Z([x, y]; {[\u22121, 1], [\u22121, \u22122]}) = 1. The set forms a smooth one-dimensional manifold bounded on either side by the hyperplanes normal to [\u22121, 1] and [\u22121, \u22122]." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "For any \u03b8 1 , \u03b8 2 with ||\u03b8 1,i \u2212 \u03b8 2,i || \u221e \u2264 D def = \u03b1/dR for all i, || log Z(x; \u03b8 1 )| \u2212 | log Z(x; \u03b8 2 )|| \u2264 \u03b1(6)Proof.|| log Z(x; \u03b8 1 )| \u2212 | log Z(x; \u03b8 2 )|| \u2264 | log Z(x; \u03b8 1 ) \u2212 log Z(x; \u03b8 2 )| \u2264 log Z(x; \u03b8 1 ) Z(x; \u03b8 2 ) (w.l.o.g.) = log i exp (\u03b8 1i \u2212 \u03b8 2i ) x exp \u03b8 2i x i exp \u03b8 2i x \u2264 dDR + log Z(x; \u03b8 2 ) Z(x; \u03b8 2 ) = \u03b1Corollary 4. 
The set of partition functions Z = {Z(\u2022; \u03b8) : ||\u03b8 i || \u221e \u2264 B \u2200\u03b8 i \u2208 \u03b8} can be covered on the \u221e ball of radius R by a grid of \u03b8 with spacing D. The size of this cover is (B/D)^{kd}. Proof of Theorem 2. From a standard discretization lemma (Kakade, 2011) and Corollary 4, we immediately have that with probability 1 \u2212 \u03b4," } } } }
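Finally, as an illustration of the objective in Equation 5, here is a minimal end-to-end sketch on synthetic data. It is not the authors' implementation: the sizes, learning rate, and data are invented, plain gradient descent stands in for Adagrad, and we read the α/γ factor as averaging the squared log-normalizer over the subset H, with the penalty subtracted from the maximized likelihood (equivalently, added to the minimized loss below).

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 30, 2000                 # hypothetical feature dim, classes, training-set size
alpha, gamma = 1.0, 0.1                # penalty weight and normalized fraction (cf. Section 5)
lr, steps = 0.5, 200                   # optimizer settings chosen for this toy example

# Synthetic data: contexts with a constant bias feature, labels drawn from a random model.
X = rng.normal(size=(n, d))
X[:, 0] = 1.0
true_theta = rng.normal(scale=0.5, size=(k, d))
logits = X @ true_theta.T
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
y = np.array([rng.choice(k, p=p) for p in probs])

def softmax_and_logZ(S):
    m = S.max(axis=1, keepdims=True)
    e = np.exp(S - m)
    Z = e.sum(axis=1, keepdims=True)
    return e / Z, (m + np.log(Z)).ravel()

theta = np.zeros((k, d))
for _ in range(steps):
    # Unnormalized likelihood term: every example contributes, and no normalizer is computed.
    grad = np.zeros_like(theta)
    np.add.at(grad, y, -X / n)

    # Self-normalization penalty: log Z is evaluated only on a random gamma-fraction H.
    H = rng.choice(n, size=max(1, int(gamma * n)), replace=False)
    P, logZ = softmax_and_logZ(X[H] @ theta.T)
    grad += alpha * (2.0 / len(H)) * (P * logZ[:, None]).T @ X[H]

    theta -= lr * grad

_, logZ_all = softmax_and_logZ(X @ theta.T)
print("mean |log Z| over all contexts:", np.abs(logZ_all).mean())
```

Per step, only about γ·n normalizers are computed instead of n, which is where the order-of-magnitude reduction in training time reported in Section 5 comes from.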