3 N-gram Language Models

3.5 Smoothing

3.5.3 Backoff and Interpolation
In simple linear interpolation, we combine different order n-grams by linearly interpolating them, estimating the trigram probability by mixing together the unigram, bigram, and trigram probabilities, each weighted by a λ:

$$\hat{P}(w_n|w_{n-2}w_{n-1}) = \lambda_1 P(w_n) + \lambda_2 P(w_n|w_{n-1}) + \lambda_3 P(w_n|w_{n-2}w_{n-1}) \qquad (3.26)$$
The λs must sum to 1, making Eq. 3.26 equivalent to a weighted average:
$$\sum_i \lambda_i = 1 \qquad (3.27)$$
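To make Eq. 3.26 concrete, here is a minimal Python sketch that interpolates maximum likelihood unigram, bigram, and trigram estimates. The toy corpus, the helper names, and the fixed λ values are all invented for illustration, not taken from the text.

```python
from collections import Counter

def mle_counts(tokens):
    """Collect unigram, bigram, and trigram counts from a token list."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))
    return unigrams, bigrams, trigrams

def interpolated_trigram(w1, w2, w3, unigrams, bigrams, trigrams, lambdas):
    """P(w3 | w1 w2) as a weighted average of unigram, bigram, and trigram MLEs (Eq. 3.26)."""
    l1, l2, l3 = lambdas                      # must sum to 1 (Eq. 3.27)
    total = sum(unigrams.values())
    p_uni = unigrams[w3] / total if total else 0.0
    p_bi = bigrams[(w2, w3)] / unigrams[w2] if unigrams[w2] else 0.0
    p_tri = trigrams[(w1, w2, w3)] / bigrams[(w1, w2)] if bigrams[(w1, w2)] else 0.0
    return l1 * p_uni + l2 * p_bi + l3 * p_tri

tokens = "i am sam sam i am i do not like green eggs and ham".split()
uni, bi, tri = mle_counts(tokens)
print(interpolated_trigram("i", "am", "sam", uni, bi, tri, (0.1, 0.3, 0.6)))
```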
In a slightly more sophisticated version of linear interpolation, each λ weight is computed by conditioning on the context. This way, if we have particularly accurate counts for a particular bigram, we assume that the counts of the trigrams based on this bigram will be more trustworthy, so we can make the λs for those trigrams higher and thus give that trigram more weight in the interpolation. Equation 3.28 shows the equation for interpolation with context-conditioned weights:
$$\hat{P}(w_n|w_{n-2}w_{n-1}) = \lambda_1(w_{n-2:n-1})\, P(w_n) + \lambda_2(w_{n-2:n-1})\, P(w_n|w_{n-1}) + \lambda_3(w_{n-2:n-1})\, P(w_n|w_{n-2}w_{n-1}) \qquad (3.28)$$
How are these λ values set? Both the simple interpolation and conditional interpolation λs are learned from a held-out corpus. A held-out corpus is an additional training corpus, held out from the training data, that we use to set hyperparameters like these λ values, by choosing the λ values that maximize the likelihood of the held-out corpus. That is, we fix the n-gram probabilities and then search for the λ values that, when plugged into Eq. 3.26, give us the highest probability of the held-out set. There are various ways to find this optimal set of λs. One way is to use the EM algorithm, an iterative learning algorithm that converges on locally optimal λs (Jelinek and Mercer, 1980).
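As a rough sketch of what such an EM reestimation might look like, the fragment below assumes we have already computed, for each token of the held-out corpus, its fixed unigram, bigram, and trigram probabilities. The function name, the iteration count, and the toy data are invented for illustration.

```python
def estimate_lambdas(heldout_probs, iterations=20):
    """EM reestimation of interpolation weights.

    heldout_probs: list of (p_uni, p_bi, p_tri) tuples, one per held-out token,
    holding the fixed component probabilities for that token.
    Returns lambdas that locally maximize the held-out likelihood of Eq. 3.26.
    """
    lambdas = [1 / 3, 1 / 3, 1 / 3]            # uniform start
    for _ in range(iterations):
        expected = [0.0, 0.0, 0.0]
        for probs in heldout_probs:
            mix = sum(l * p for l, p in zip(lambdas, probs))
            if mix == 0.0:
                continue
            # E-step: fractional credit each component gets for this token
            for j, p in enumerate(probs):
                expected[j] += lambdas[j] * p / mix
        # M-step: renormalize the expected credit into new weights
        total = sum(expected)
        lambdas = [e / total for e in expected]
    return lambdas

# Toy held-out data: three tokens with made-up component probabilities.
print(estimate_lambdas([(0.01, 0.10, 0.30), (0.02, 0.05, 0.00), (0.01, 0.20, 0.40)]))
```

Each iteration gives every component fractional credit for each held-out token (the E-step) and renormalizes that credit into new weights (the M-step), which never decreases the held-out likelihood.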
In a backoff n-gram model, if the n-gram we need has zero counts, we approximate it by backing off to the (N-1)-gram. We continue backing off until we reach a history that has some counts.
In order for a backoff model to give a correct probability distribution, we have to discount the higher-order n-grams to save some probability mass for the lower-order n-grams. Just as with add-one smoothing, if the higher-order n-grams aren't discounted and we just used the undiscounted MLE probability, then as soon as we replaced an n-gram which has zero probability with a lower-order n-gram, we would be adding probability mass, and the total probability assigned to all possible strings by the language model would be greater than 1! In addition to this explicit discount factor, we'll need a function α to distribute this probability mass to the lower-order n-grams.
This kind of backoff with discounting is also called Katz backoff.
In Katz backoff we rely on a discounted probability P* if we've seen this n-gram before (i.e., if we have non-zero counts). Otherwise, we recursively back off to the Katz probability for the shorter-history (N-1)-gram. The probability for a backoff n-gram P_BO is thus computed as follows:
$$P_{BO}(w_n|w_{n-N+1:n-1}) = \begin{cases} P^{*}(w_n|w_{n-N+1:n-1}), & \text{if } C(w_{n-N+1:n}) > 0 \\ \alpha(w_{n-N+1:n-1})\, P_{BO}(w_n|w_{n-N+2:n-1}), & \text{otherwise.} \end{cases} \qquad (3.29)$$
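Assuming the discounted probabilities P* and the backoff weights α have already been computed (for example with Good-Turing, discussed next), the recursion of Eq. 3.29 might be sketched as follows; the table contents are toy numbers, not a real model.

```python
def katz_backoff(ngram, p_star, alpha):
    """Recursive Katz backoff probability P_BO (Eq. 3.29).

    p_star: dict mapping n-gram tuples (of any order) to discounted probabilities P*.
    alpha:  dict mapping context tuples to the leftover-mass weights.
    Both tables are assumed to have been precomputed elsewhere.
    """
    if ngram in p_star:                       # seen: use the discounted estimate
        return p_star[ngram]
    if len(ngram) == 1:                       # nowhere left to back off to
        return 0.0
    context, shorter = ngram[:-1], ngram[1:]  # drop the earliest history word
    return alpha.get(context, 1.0) * katz_backoff(shorter, p_star, alpha)

# Toy tables (invented numbers, not a real model):
p_star = {("the", "cat"): 0.2, ("cat",): 0.01}
alpha = {("the", "dog"): 0.4}
print(katz_backoff(("the", "dog", "cat"), p_star, alpha))   # 0.4 * P_BO(cat | dog)
```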
Katz backoff is often combined with a smoothing method called Good-Turing.
The combined Good-Turing backoff algorithm involves quite detailed computation for estimating the Good-Turing smoothing and the P * and α values.
3.6 Kneser-Ney Smoothing
One of the most commonly used and best performing n-gram smoothing methods is the interpolated Kneser-Ney algorithm (Kneser and Ney 1995, Chen and Goodman 1998).
Kneser-Ney has its roots in a method called absolute discounting. Recall that discounting of the counts for frequent n-grams is necessary to save some probability mass for the smoothing algorithm to distribute to the unseen n-grams.
To see this, we can use a clever idea from Church and Gale (1991). Consider an n-gram that has count 4. We need to discount this count by some amount. But how much should we discount it? Church and Gale's clever idea was to look at a held-out corpus and just see what the count is for all those bigrams that had count 4 in the training set. They computed a bigram grammar from 22 million words of AP newswire and then checked the counts of each of these bigrams in another 22 million words. On average, a bigram that occurred 4 times in the first 22 million words occurred 3.23 times in the next 22 million words. Fig. 3.9 from Church and Gale (1991) shows these counts for bigrams with c from 0 to 9.

[Figure 3.9: For all bigrams in 22 million words of AP newswire of count 0, 1, 2, ..., 9, the counts of these bigrams in a held-out corpus also of 22 million words.]
Notice in Fig. 3.9 that except for the held-out counts for 0 and 1, all the other bigram counts in the held-out set could be estimated pretty well by just subtracting 0.75 from the count in the training set! Absolute discounting formalizes this intuition by subtracting a fixed (absolute) discount d from each count. The intuition is that since we have good estimates already for the very high counts, a small discount d won't affect them much. It will mainly modify the smaller counts, for which we don't necessarily trust the estimate anyway, and Fig. 3.9 suggests that in practice this discount is actually a good one for bigrams with counts 2 through 9. The equation for interpolated absolute discounting applied to bigrams is:
$$P_{\text{AbsoluteDiscounting}}(w_i|w_{i-1}) = \frac{C(w_{i-1}w_i) - d}{\sum_{v} C(w_{i-1}v)} + \lambda(w_{i-1})\, P(w_i) \qquad (3.30)$$
The first term is the discounted bigram, and the second term is the unigram with an interpolation weight λ. We could just set all the d values to 0.75, or we could keep a separate discount value of 0.5 for the bigrams with counts of 1.
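A minimal sketch of Eq. 3.30, assuming a single discount d for all counts and a λ computed as in Eq. 3.36 below; the toy corpus and the function name are invented for illustration.

```python
from collections import Counter

def absolute_discounting(w_prev, w, bigrams, unigrams, d=0.75):
    """Interpolated absolute discounting for a bigram (Eq. 3.30), toy version."""
    context_total = sum(c for (v1, _), c in bigrams.items() if v1 == w_prev)
    if context_total == 0:
        # Unseen context: nothing to discount, fall back to the unigram MLE.
        return unigrams[w] / sum(unigrams.values())
    # max(..., 0) keeps unseen bigrams at zero rather than negative.
    discounted = max(bigrams[(w_prev, w)] - d, 0) / context_total
    # Leftover mass: d for every distinct bigram type that starts with w_prev.
    seen_types = len([1 for (v1, _) in bigrams if v1 == w_prev])
    lam = d * seen_types / context_total
    p_unigram = unigrams[w] / sum(unigrams.values())
    return discounted + lam * p_unigram

tokens = "the cat sat on the mat the cat ate".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
print(absolute_discounting("the", "cat", bigrams, unigrams))
```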
Kneser-Ney discounting (Kneser and Ney, 1995) augments absolute discounting with a more sophisticated way to handle the lower-order unigram distribution. Consider the job of predicting the next word in this sentence, assuming we are interpolating a bigram and a unigram model:

I can't see without my reading ____.

The word glasses seems much more likely to follow here than, say, the word Kong, so we'd like our unigram model to prefer glasses. But in fact it's Kong that is more common, since Hong Kong is a very frequent word. A standard unigram model will assign Kong a higher probability than glasses. We would like to capture the intuition that although Kong is frequent, it is mainly only frequent in the phrase Hong Kong, that is, after the word Hong. The word glasses has a much wider distribution.
In other words, instead of P(w), which answers the question "How likely is w?", we'd like to create a unigram model that we might call P CONTINUATION , which answers the question "How likely is w to appear as a novel continuation?". How can we estimate this probability of seeing the word w as a novel continuation, in a new unseen context? The Kneser-Ney intuition is to base our estimate of P CONTINUATION on the number of different contexts word w has appeared in, that is, the number of bigram types it completes. Every bigram type was a novel continuation the first time it was seen. We hypothesize that words that have appeared in more contexts in the past are more likely to appear in some new context as well. The number of times a word w appears as a novel continuation can be expressed as:
$$P_{\text{CONTINUATION}}(w) \propto |\{v : C(vw) > 0\}| \qquad (3.31)$$
To turn this count into a probability, we normalize by the total number of word bigram types. In summary:
$$P_{\text{CONTINUATION}}(w) = \frac{|\{v : C(vw) > 0\}|}{|\{(u', w') : C(u'w') > 0\}|} \qquad (3.32)$$
An equivalent formulation based on a different metaphor is to use the number of word types seen to precede w (Eq. 3.31 repeated):
$$P_{\text{CONTINUATION}}(w) \propto |\{v : C(vw) > 0\}| \qquad (3.33)$$
normalized by the number of words preceding all words, as follows:
$$P_{\text{CONTINUATION}}(w) = \frac{|\{v : C(vw) > 0\}|}{\sum_{w'} |\{v : C(vw') > 0\}|} \qquad (3.34)$$
A frequent word (Kong) occurring in only one context (Hong) will have a low continuation probability.
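A small sketch of the continuation probability of Eq. 3.32, counting distinct left contexts per word and normalizing by the number of bigram types; the toy corpus is invented.

```python
from collections import defaultdict

def continuation_probs(tokens):
    """P_CONTINUATION(w): fraction of all bigram types that end in w (Eq. 3.32)."""
    bigram_types = set(zip(tokens, tokens[1:]))
    preceding = defaultdict(set)
    for v, w in bigram_types:
        preceding[w].add(v)                   # distinct left contexts of w
    total_types = len(bigram_types)
    return {w: len(ctxs) / total_types for w, ctxs in preceding.items()}

tokens = "hong kong is near hong kong island and glasses are near my glasses".split()
p_cont = continuation_probs(tokens)
# 'kong' only ever follows 'hong', so it completes a single bigram type and gets
# a lower continuation probability than 'glasses', which follows several words.
print(p_cont.get("kong"), p_cont.get("glasses"))
```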
The final equation for Interpolated Kneser-Ney smoothing for bigrams is then:
$$P_{KN}(w_i|w_{i-1}) = \frac{\max(C(w_{i-1}w_i) - d, 0)}{C(w_{i-1})} + \lambda(w_{i-1})\, P_{\text{CONTINUATION}}(w_i) \qquad (3.35)$$
The λ is a normalizing constant that is used to distribute the probability mass we've discounted:
$$\lambda(w_{i-1}) = \frac{d}{\sum_{v} C(w_{i-1}v)}\, |\{w : C(w_{i-1}w) > 0\}| \qquad (3.36)$$

The first term, d / Σ_v C(w_{i-1}v), is the normalized discount. The second term, |{w : C(w_{i-1}w) > 0}|, is the number of word types that can follow w_{i-1} or, equivalently, the number of word types that we discounted; in other words, the number of times we applied the normalized discount. The general recursive formulation is as follows:
$$P_{KN}(w_i|w_{i-n+1:i-1}) = \frac{\max(c_{KN}(w_{i-n+1:i}) - d, 0)}{\sum_{v} c_{KN}(w_{i-n+1:i-1}\,v)} + \lambda(w_{i-n+1:i-1})\, P_{KN}(w_i|w_{i-n+2:i-1}) \qquad (3.37)$$

where the definition of the count c_KN depends on whether we are counting the highest-order n-gram being interpolated or one of the lower-order n-grams:

$$c_{KN}(\cdot) = \begin{cases} \text{count}(\cdot) & \text{for the highest order} \\ \text{continuationcount}(\cdot) & \text{for lower orders} \end{cases} \qquad (3.38)$$

The continuation count is the number of unique single word contexts for •. At the termination of the recursion, unigrams are interpolated with the uniform distribution, where the parameter ε is the empty string:
$$P_{KN}(w) = \frac{\max(c_{KN}(w) - d, 0)}{\sum_{w'} c_{KN}(w')} + \lambda(\epsilon)\,\frac{1}{V} \qquad (3.39)$$
If we want to include an unknown word <UNK>, it's just included as a regular vocabulary entry with count zero, and hence its probability will be a lambda-weighted uniform distribution λ(ε)/V. The best performing version of Kneser-Ney smoothing is called modified Kneser-Ney smoothing, and is due to Chen and Goodman (1998). Rather than use a single fixed discount d, modified Kneser-Ney uses three different discounts d_1, d_2, and d_3+ for n-grams with counts of 1, 2, and three or more, respectively. See Chen and Goodman (1998, p. 19) or Heafield et al. (2013) for the details.
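Putting the pieces together, here is a minimal sketch of interpolated Kneser-Ney for bigrams (Eqs. 3.35-3.36), using a single fixed discount rather than the modified Kneser-Ney discounts; the toy corpus and the function names are invented.

```python
from collections import Counter, defaultdict

def kneser_ney_bigram(tokens, d=0.75):
    """Build an interpolated Kneser-Ney bigram model (Eq. 3.35), toy version."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    followers = defaultdict(set)      # word types that can follow each context
    preceders = defaultdict(set)      # word types that can precede each word
    for v, w in bigrams:
        followers[v].add(w)
        preceders[w].add(v)
    total_bigram_types = len(bigrams)

    def p_continuation(w):
        return len(preceders[w]) / total_bigram_types

    def p_kn(w, prev):
        if unigrams[prev] == 0:       # unseen context: fall back to continuation
            return p_continuation(w)
        discounted = max(bigrams[(prev, w)] - d, 0) / unigrams[prev]
        lam = d * len(followers[prev]) / unigrams[prev]      # Eq. 3.36
        return discounted + lam * p_continuation(w)

    return p_kn

tokens = "i can not see without my reading glasses said the man from hong kong".split()
p_kn = kneser_ney_bigram(tokens)
print(p_kn("glasses", "reading"), p_kn("kong", "reading"))
```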
3.7 Huge Language Models and Stupid Backoff
By using text from the web or other enormous collections, it is possible to build extremely large language models. The Web 1 Trillion 5-gram corpus released by Google includes various large sets of n-grams, including 1-grams through 5-grams, drawn from all the five-word sequences that appear at least 40 times in 1,024,908,267,229 words of text from publicly accessible Web pages in English (Franz and Brants, 2006). Google has also released Google Books Ngrams corpora with n-grams drawn from their book collections, including another 800 billion tokens of n-grams from Chinese, English, French, German, Hebrew, Italian, Russian, and Spanish (Lin et al., 2012a). Smaller but more carefully curated n-gram corpora for English include the million most frequent n-grams drawn from the COCA (Corpus of Contemporary American English) 1 billion word corpus of American English (Davies, 2020). COCA is a balanced corpus, meaning that it has roughly equal numbers of words from different genres: web, newspapers, spoken conversation transcripts, fiction, and so on, drawn from the period 1990-2019; it also gives the context of each n-gram as well as labels for genre and provenance.
Some example 4-grams from the Google Web corpus:
4-gram                      Count
serve as the incoming          92
serve as the incubator         99
serve as the independent      794
serve as the index            223
serve as the indication        72
serve as the indicator        120
serve as the indicators

Efficiency considerations are important when building language models that use such large sets of n-grams. Rather than store each word as a string, it is generally represented in memory as a 64-bit hash number, with the words themselves stored on disk. Probabilities are generally quantized using only 4-8 bits (instead of 8-byte floats), and n-grams are stored in reverse tries.
An n-gram language model can also be shrunk by pruning, for example only storing n-grams with counts greater than some threshold (such as the count threshold of 40 used for the Google n-gram release) or using entropy to prune less-important n-grams (Stolcke, 1998). Another option is to build approximate language models using techniques like Bloom filters (Talbot and Osborne 2007, Church et al. 2007).
Finally, efficient language model toolkits like KenLM (Heafield 2011, Heafield et al. 2013) use sorted arrays, efficiently combine probabilities and backoffs in a single value, and use merge sorts to efficiently build the probability tables in a minimal number of passes through a large corpus.
Although with these toolkits it is possible to build web-scale language models using full Kneser-Ney smoothing, Brants et al. (2007) show that with very large language models a much simpler algorithm may be sufficient. The algorithm is called stupid backoff. Stupid backoff gives up the idea of trying to make the language model a true probability distribution. There is no discounting of the higher-order probabilities. If a higher-order n-gram has a zero count, we simply back off to a lower-order n-gram, weighted by a fixed (context-independent) weight. This algorithm does not produce a probability distribution, so we'll follow Brants et al. (2007) in referring to it as S:
$$S(w_i|w_{i-k+1:i-1}) = \begin{cases} \dfrac{\text{count}(w_{i-k+1:i})}{\text{count}(w_{i-k+1:i-1})} & \text{if count}(w_{i-k+1:i}) > 0 \\[2ex] \lambda\, S(w_i|w_{i-k+2:i-1}) & \text{otherwise} \end{cases} \qquad (3.40)$$
The backoff terminates in the unigram, which has probability S(w) = count(w)/N, where N is the total number of tokens in the corpus.
Brants et al. (2007) find that a value of 0.4 worked well for λ.
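A minimal sketch of the stupid backoff score of Eq. 3.40; the toy corpus and the choice to keep all n-gram orders in one count table are just conveniences for the example.

```python
from collections import Counter

def stupid_backoff_score(words, counts, total_tokens, lam=0.4):
    """Stupid backoff score S (Eq. 3.40): relative frequency if seen, else
    lam times the score of the shorter n-gram. Not a true probability."""
    if len(words) == 1:
        return counts.get(words, 0) / total_tokens
    context = words[:-1]
    if counts.get(words, 0) > 0 and counts.get(context, 0) > 0:
        return counts[words] / counts[context]
    return lam * stupid_backoff_score(words[1:], counts, total_tokens, lam)

tokens = "i like green eggs and ham i like ham".split()
counts = Counter()
for n in (1, 2, 3):                       # store all 1- to 3-gram counts in one table
    counts.update(zip(*(tokens[i:] for i in range(n))))
print(stupid_backoff_score(("green", "eggs", "and"), counts, len(tokens)))   # seen trigram
print(stupid_backoff_score(("purple", "eggs", "and"), counts, len(tokens)))  # backs off
```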
3.8 Perplexity's Relation to Entropy
We introduced perplexity in Section 3.2.1 as a way to evaluate n-gram models on a test set. A better n-gram model is one that assigns a higher probability to the test data, and perplexity is a normalized version of the probability of the test set. The perplexity measure actually arises from the information-theoretic concept of cross-entropy, which explains otherwise mysterious properties of perplexity (why the inverse probability, for example?) and its relationship to entropy. Entropy is a measure of information. Given a random variable X ranging over whatever we are predicting (words, letters, parts of speech, the set of which we'll call χ) and with a particular probability function, call it p(x), the entropy of the random variable X is:
$$H(X) = -\sum_{x \in \chi} p(x) \log_2 p(x) \qquad (3.41)$$
The log can, in principle, be computed in any base. If we use log base 2, the resulting value of entropy will be measured in bits.
One intuitive way to think about entropy is as a lower bound on the number of bits it would take to encode a certain decision or piece of information in the optimal coding scheme.
Consider an example from the standard information theory textbook Cover and Thomas (1991) . Imagine that we want to place a bet on a horse race but it is too far to go all the way to Yonkers Racetrack, so we'd like to send a short message to the bookie to tell him which of the eight horses to bet on. One way to encode this message is just to use the binary representation of the horse's number as the code; thus, horse 1 would be 001, horse 2 010, horse 3 011, and so on, with horse 8 coded as 000. If we spend the whole day betting and each horse is coded with 3 bits, on average we would be sending 3 bits per race.
Can we do better? Suppose that the spread is the actual distribution of the bets placed and that we represent it as the prior probability of each horse as follows:

Horse 1: 1/2     Horse 5: 1/64
Horse 2: 1/4     Horse 6: 1/64
Horse 3: 1/8     Horse 7: 1/64
Horse 4: 1/16    Horse 8: 1/64

The entropy of the random variable X that ranges over horses gives us a lower bound on the number of bits and is
$$H(X) = -\sum_{i=1}^{8} p(i)\log p(i) = -\tfrac{1}{2}\log\tfrac{1}{2} - \tfrac{1}{4}\log\tfrac{1}{4} - \tfrac{1}{8}\log\tfrac{1}{8} - \tfrac{1}{16}\log\tfrac{1}{16} - 4\left(\tfrac{1}{64}\log\tfrac{1}{64}\right) = 2 \text{ bits} \qquad (3.42)$$
A code that averages 2 bits per race can be built with short encodings for more probable horses, and longer encodings for less probable horses. For example, we could encode the most likely horse with the code 0, and the remaining horses as 10, then 110, 1110, 111100, 111101, 111110, and 111111.
What if the horses are equally likely? We saw above that if we used an equal-length binary code for the horse numbers, each horse took 3 bits to code, so the average was 3. Is the entropy the same? In this case each horse would have a probability of 1/8. The entropy of the choice of horses is then
$$H(X) = -\sum_{i=1}^{8} \tfrac{1}{8}\log\tfrac{1}{8} = -\log\tfrac{1}{8} = 3 \text{ bits} \qquad (3.43)$$
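Both entropies are easy to verify numerically; the sketch below just implements Eq. 3.41 directly on the two horse-race distributions.

```python
import math

def entropy(probs):
    """H(X) = -sum p(x) log2 p(x), in bits (Eq. 3.41)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

skewed = [1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]
uniform = [1/8] * 8
print(entropy(skewed))    # 2.0 bits (Eq. 3.42)
print(entropy(uniform))   # 3.0 bits (Eq. 3.43)
```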
Until now we have been computing the entropy of a single variable. But most of what we will use entropy for involves sequences. For a grammar, for example, we will be computing the entropy of some sequence of words W = {w_1, w_2, ..., w_n}. One way to do this is to have a variable that ranges over sequences of words. For example we can compute the entropy of a random variable that ranges over all finite sequences of words of length n in some language L as follows:
$$H(w_1, w_2, \ldots, w_n) = -\sum_{w_{1:n} \in L} p(w_{1:n}) \log p(w_{1:n}) \qquad (3.44)$$
We could define the entropy rate (we could also think of this as the per-word entropy) as the entropy of this sequence divided by the number of words:
$$\frac{1}{n} H(w_{1:n}) = -\frac{1}{n} \sum_{w_{1:n} \in L} p(w_{1:n}) \log p(w_{1:n}) \qquad (3.45)$$
But to measure the true entropy of a language, we need to consider sequences of infinite length. If we think of a language as a stochastic process L that produces a sequence of words, and allow W to represent the sequence of words w 1 , . . . , w n , then L's entropy rate H(L) is defined as
$$H(L) = \lim_{n\to\infty} \frac{1}{n} H(w_1, w_2, \ldots, w_n) = -\lim_{n\to\infty} \frac{1}{n} \sum_{W \in L} p(w_1, \ldots, w_n) \log p(w_1, \ldots, w_n) \qquad (3.46)$$
The Shannon-McMillan-Breiman theorem (Algoet and Cover 1988, Cover and Thomas 1991) states that if the language is regular in certain ways (to be exact, if it is both stationary and ergodic),
$$H(L) = \lim_{n\to\infty} -\frac{1}{n} \log p(w_1 w_2 \ldots w_n) \qquad (3.47)$$
That is, we can take a single sequence that is long enough instead of summing over all possible sequences. The intuition of the Shannon-McMillan-Breiman theorem is that a long-enough sequence of words will contain in it many other shorter sequences and that each of these shorter sequences will reoccur in the longer sequence according to their probabilities.
A stochastic process is said to be stationary if the probabilities it assigns to a sequence are invariant with respect to shifts in the time index. In other words, the probability distribution for words at time t is the same as the probability distribution at time t + 1. Markov models, and hence n-grams, are stationary. For example, in a bigram, P_i is dependent only on P_{i-1}. So if we shift our time index by x, P_{i+x} is still dependent on P_{i+x-1}. But natural language is not stationary, since as we show in Chapter 12, the probability of upcoming words can be dependent on events that were arbitrarily distant and time dependent. Thus, our statistical models only give an approximation to the correct distributions and entropies of natural language.

To summarize, by making some incorrect but convenient simplifying assumptions, we can compute the entropy of some stochastic process by taking a very long sample of the output and computing its average log probability.

Now we are ready to introduce cross-entropy. The cross-entropy is useful when we don't know the actual probability distribution p that generated some data. It allows us to use some m, which is a model of p (i.e., an approximation to p). The cross-entropy of m on p is defined by
$$H(p, m) = \lim_{n\to\infty} -\frac{1}{n} \sum_{W \in L} p(w_1, \ldots, w_n) \log m(w_1, \ldots, w_n) \qquad (3.48)$$
That is, we draw sequences according to the probability distribution p, but sum the log of their probabilities according to m.
Again, following the Shannon-McMillan-Breiman theorem, for a stationary ergodic process:
$$H(p, m) = \lim_{n\to\infty} -\frac{1}{n} \log m(w_1 w_2 \ldots w_n) \qquad (3.49)$$
This means that, as for entropy, we can estimate the cross-entropy of a model m on some distribution p by taking a single sequence that is long enough instead of summing over all possible sequences.
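A sketch of this single-sample estimate: the cross-entropy of a model on a test sequence is just its average negative log probability per token. The placeholder model below, which assigns every word the same probability, is invented purely to show the computation; a real model_logprob would condition on the history.

```python
import math

def cross_entropy_estimate(model_logprob, test_tokens):
    """Approximate H(p, m) (Eq. 3.49) on a single long sample: the average
    negative log2 probability per token under the model m."""
    log_prob = 0.0
    for i in range(len(test_tokens)):
        log_prob += model_logprob(test_tokens[i], test_tokens[:i])
    return -log_prob / len(test_tokens)

# Deliberately dumb placeholder 'model': every word gets probability 1/1000.
uniform_logprob = lambda w, history: math.log2(1 / 1000)
print(cross_entropy_estimate(uniform_logprob, "the cat sat on the mat".split()))
# -> log2(1000), about 9.97 bits per word
```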
What makes the cross-entropy useful is that the cross-entropy H(p, m) is an upper bound on the entropy H(p). For any model m:
$$H(p) \leq H(p, m) \qquad (3.50)$$
This means that we can use some simplified model m to help estimate the true entropy of a sequence of symbols drawn according to probability p. The more accurate m is, the closer the cross-entropy H(p, m) will be to the true entropy H(p). Thus, the difference between H(p, m) and H(p) is a measure of how accurate a model is. Between two models m 1 and m 2 , the more accurate model will be the one with the lower cross-entropy. (The cross-entropy can never be lower than the true entropy, so a model cannot err by underestimating the true entropy.)
We are finally ready to see the relation between perplexity and cross-entropy as we saw it in Eq. 3.49. Cross-entropy is defined in the limit as the length of the observed word sequence goes to infinity. We will need an approximation to cross-entropy, relying on a (sufficiently long) sequence of fixed length. This approximation to the cross-entropy of a model
M = P(w_i | w_{i-N+1:i-1}) on a sequence of words W is:

$$H(W) = -\frac{1}{N} \log P(w_1 w_2 \ldots w_N) \qquad (3.51)$$
The perplexity of a model P on a sequence of words W is now formally defined as 2 raised to the power of this cross-entropy:
$$\text{Perplexity}(W) = 2^{H(W)} = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 \ldots w_N)}} = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i|w_1 \ldots w_{i-1})}} \qquad (3.52)$$
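A small sketch tying Eqs. 3.51 and 3.52 together: given per-token model probabilities, perplexity is 2 raised to the average negative log2 probability, which equals the geometric mean of the inverse probabilities. The probabilities below are made up.

```python
import math

def perplexity(token_probs):
    """Perplexity = 2**H(W): geometric mean of inverse token probabilities (Eq. 3.52)."""
    n = len(token_probs)
    cross_entropy = -sum(math.log2(p) for p in token_probs) / n   # Eq. 3.51
    return 2 ** cross_entropy

# Per-token probabilities P(w_i | w_1..w_{i-1}) as some model assigned them (invented):
print(perplexity([0.1, 0.2, 0.05, 0.1]))   # 10.0, the geometric mean of 10, 5, 20, 10
```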
3.9 Summary
This chapter introduced language modeling and the n-gram, one of the most widely used tools in language processing.
• Language models offer a way to assign a probability to a sentence or other sequence of words, and to predict a word from preceding words.
• n-grams are Markov models that estimate words from a fixed window of previous words. n-gram probabilities can be estimated by counting in a corpus and normalizing (the maximum likelihood estimate).
• n-gram language models are evaluated extrinsically in some task, or intrinsically using perplexity.
• The perplexity of a test set according to a language model is the geometric mean of the inverse test set probability computed by the model.
• Smoothing algorithms provide a more sophisticated way to estimate the probability of n-grams. Commonly used smoothing algorithms for n-grams rely on lower-order n-gram counts through backoff or interpolation.
• Both backoff and interpolation require discounting to create a probability distribution.
• Kneser-Ney smoothing makes use of the probability of a word being a novel continuation. The interpolated Kneser-Ney smoothing algorithm mixes a discounted probability with a lower-order continuation probability.
3.10 Bibliographical and Historical Notes
The underlying mathematics of the n-gram was first proposed by Markov (1913), who used what are now called Markov chains (bigrams and trigrams) to predict whether an upcoming letter in Pushkin's Eugene Onegin would be a vowel or a consonant. Markov classified 20,000 letters as V or C and computed the bigram and trigram probability that a given letter would be a vowel given the previous one or two letters. Shannon (1948) applied n-grams to compute approximations to English word sequences. Based on Shannon's work, Markov models were commonly used in engineering, linguistic, and psychological work on modeling word sequences by the 1950s. In a series of extremely influential papers starting with Chomsky (1956) and including Chomsky (1957) and Miller and Chomsky (1963), Noam Chomsky argued that "finite-state Markov processes", while a possibly useful engineering heuristic, were incapable of being a complete cognitive model of human grammatical knowledge. These arguments led many linguists and computational linguists to ignore work in statistical modeling for decades. The resurgence of n-gram models came from Jelinek and colleagues at the IBM Thomas J. Watson Research Center, who were influenced by Shannon, and Baker at CMU, who was influenced by the work of Baum and colleagues. Independently these two labs successfully used n-grams in their speech recognition systems (Baker 1975b, Jelinek 1976, Baker 1975a, Bahl et al. 1983, Jelinek 1990).
Add-one smoothing derives from Laplace's 1812 law of succession and was first applied as an engineering solution to the zero frequency problem by Jeffreys (1948), based on an earlier add-k suggestion by Johnson (1932). Problems with the add-one algorithm are summarized in Gale and Church (1994).
A wide variety of different language modeling and smoothing techniques were proposed in the 80s and 90s, including Good-Turing discounting (first applied to n-gram smoothing at IBM by Katz; Nádas 1984, Church and Gale 1991), Witten-Bell discounting (Witten and Bell, 1991), and varieties of class-based n-gram models that used information about word classes.
Starting in the late 1990s, Chen and Goodman performed a number of carefully controlled experiments comparing different discounting algorithms, cache models, class-based models, and other language model parameters (Chen and Goodman 1999, Goodman 2006, inter alia). They showed the advantages of Modified Interpolated Kneser-Ney, which became the standard baseline for n-gram language modeling, especially because they showed that caches and class-based models provided only minor additional improvement. These papers are recommended for any reader with further interest in n-gram language modeling. SRILM (Stolcke, 2002) and KenLM (Heafield 2011, Heafield et al. 2013) are publicly available toolkits for building n-gram language models.
Modern language modeling is more commonly done with neural network language models, which solve the major problems with n-grams: the number of parameters increases exponentially as the n-gram order increases, and n-grams have no way to generalize from training to test set. Neural language models instead project words into a continuous space in which words with similar contexts have similar representations. We'll introduce both feedforward language models (Bengio et al. 2006, Schwenk 2007) in Chapter 7, and recurrent language models (Mikolov, 2012) in Chapter 9.
4 Naive Bayes and Sentiment Classification
Classification lies at the heart of both human and machine intelligence. Deciding what letter, word, or image has been presented to our senses, recognizing faces or voices, sorting mail, assigning grades to homeworks; these are all examples of assigning a category to an input. The potential challenges of this task are highlighted by the fabulist Jorge Luis Borges (1964), who imagined classifying animals into:
(a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel's hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance.
Many language processing tasks involve classification, although luckily our classes are much easier to define than those of Borges. In this chapter we introduce the naive Bayes algorithm and apply it to text categorization, the task of assigning a label or category to an entire text or document.
We focus on one common text categorization task, sentiment analysis, the extraction of sentiment, the positive or negative orientation that a writer expresses toward some object. A review of a movie, book, or product on the web expresses the author's sentiment toward the product, while an editorial or political text expresses sentiment toward a candidate or political action. Extracting consumer or public sentiment is thus relevant for fields from marketing to politics. The simplest version of sentiment analysis is a binary classification task, and the words of the review provide excellent cues. Consider, for example, the following phrases extracted from positive and negative reviews of movies and restaurants. Words like great, richly, awesome, and pathetic, and awful and ridiculously are very informative cues:

+ ...zany characters and richly applied satire, and some great plot twists
− It was pathetic. The worst part about it was the boxing scenes...
+ ...awesome caramel sauce and sweet toasty almonds. I love this place!
− ...awful pizza and ridiculously overpriced...

Spam detection is another important commercial application, the binary classification task of assigning an email to one of the two classes spam or not-spam. Many lexical and other features can be used to perform this classification. For example you might quite reasonably be suspicious of an email containing phrases like "online pharmaceutical" or "WITHOUT ANY COST" or "Dear Winner".
Another thing we might want to know about a text is the language it's written in. Texts on social media, for example, can be in any number of languages and we'll need to apply different processing. The task of language id is thus the first step in most language processing pipelines. Related text classification tasks like authorship attribution (determining a text's author) are also relevant to the digital humanities, social sciences, and forensic linguistics.
Finally, one of the oldest tasks in text classification is assigning a library subject category or topic label to a text. Deciding whether a research paper concerns epidemiology or instead, perhaps, embryology, is an important component of information retrieval. Various sets of subject categories exist, such as the MeSH (Medical Subject Headings) thesaurus. In fact, as we will see, subject category classification is the task for which the naive Bayes algorithm was invented in 1961.
Classification is essential for tasks below the level of the document as well. We've already seen period disambiguation (deciding if a period is the end of a sentence or part of a word), and word tokenization (deciding if a character should be a word boundary). Even language modeling can be viewed as classification: each word can be thought of as a class, and so predicting the next word is classifying the context-so-far into a class for each next word. A part-of-speech tagger (Chapter 8) classifies each occurrence of a word in a sentence as, e.g., a noun or a verb.
The goal of classification is to take a single observation, extract some useful features, and thereby classify the observation into one of a set of discrete classes. One method for classifying text is to use handwritten rules. There are many areas of language processing where handwritten rule-based classifiers constitute a state-of-the-art system, or at least part of it.
Rules can be fragile, however, as situations or data change over time, and for some tasks humans aren't necessarily good at coming up with the rules. Most cases of classification in language processing are instead done via supervised machine learning, and this will be the subject of the remainder of this chapter. In supervised learning, we have a data set of input observations, each associated with some correct output (a 'supervision signal'). The goal of the algorithm is to learn how to map from a new observation to a correct output. Formally, the task of supervised classification is to take an input x and a fixed set of output classes Y = {y_1, y_2, ..., y_M} and return a predicted class y ∈ Y. For text classification, we'll sometimes talk about c (for "class") instead of y as our output variable, and d (for "document") instead of x as our input variable. In the supervised situation we have a training set of N documents that have each been hand-labeled with a class:
(d_1, c_1), ..., (d_N, c_N).
Our goal is to learn a classifier that is capable of mapping from a new document d to its correct class c ∈ C. A probabilistic classifier additionally will tell us the probability of the observation being in the class. This full distribution over the classes can be useful information for downstream decisions; avoiding making discrete decisions early on can be useful when combining systems.
Many kinds of machine learning algorithms are used to build classifiers. This chapter introduces naive Bayes; the following one introduces logistic regression. These exemplify two ways of doing classification. Generative classifiers like naive Bayes build a model of how a class could generate some input data. Given an observation, they return the class most likely to have generated the observation. Discriminative classifiers like logistic regression instead learn what features from the input are most useful to discriminate between the different possible classes. While discriminative systems are often more accurate and hence more commonly used, generative classifiers still have a role.
4.1 Naive Bayes Classifiers
In this section we introduce the multinomial naive Bayes classifier, so called because it is a Bayesian classifier that makes a simplifying (naive) assumption about how the features interact. The intuition of the classifier is shown in Fig. 4.1. We represent a text document as if it were a bag-of-words, that is, an unordered set of words with their position ignored, keeping only their frequency in the document. In the example in the figure, instead of representing the word order in all the phrases like "I love this movie" and "I would recommend it", we simply note that the word I occurred 5 times in the entire excerpt, the word it 6 times, the words love, recommend, and movie once, and so on.
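As a tiny illustration of the representation (using an invented excerpt, not the actual text or counts of Fig. 4.1), a bag of words can be built with a counter over tokens, discarding word order entirely:

```python
from collections import Counter

# Toy excerpt; only each word's frequency is kept, never its position.
excerpt = "i love this movie it is sweet but satirical i would recommend it"
bag = Counter(excerpt.split())
print(bag.most_common(3))   # e.g. [('i', 2), ('it', 2), ('love', 1)]
```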