7 Neural Networks and Neural Language Models

Neural networks share much of the same mathematics as logistic regression. But neural networks are a more powerful classifier than logistic regression, and indeed a minimal neural network (technically one with a single 'hidden layer') can be shown to learn any function.

Neural net classifiers are different from logistic regression in another way. With logistic regression, we applied the regression classifier to many different tasks by developing many rich kinds of feature templates based on domain knowledge. When working with neural networks, it is more common to avoid most uses of rich hand-derived features, instead building neural networks that take raw words as inputs and learn to induce features as part of the process of learning to classify. We saw examples of this kind of representation learning for embeddings in Chapter 6. Nets that are very deep are particularly good at representation learning. For that reason deep neural nets are the right tool for large-scale problems that offer sufficient data to learn features automatically.

In this chapter we'll introduce feedforward networks as classifiers, and also apply them to the simple task of language modeling: assigning probabilities to word sequences and predicting upcoming words. In subsequent chapters we'll introduce many other aspects of neural models, such as recurrent neural networks and the Transformer (Chapter 9), contextual embeddings like BERT (Chapter 11), and encoder-decoder models and attention (Chapter 10).
7.1 Units

The building block of a neural network is a single computational unit. A unit takes a set of real-valued numbers as input, performs some computation on them, and produces an output.

At its heart, a neural unit takes a weighted sum of its inputs, with one additional term in the sum called a bias term. Given a set of inputs x_1, ..., x_n, a unit has a set of corresponding weights w_1, ..., w_n and a bias b, so the weighted sum z can be represented as:

z = b + Σ_i w_i x_i    (7.1)

Often it's more convenient to express this weighted sum using vector notation; recall from linear algebra that a vector is, at heart, just a list or array of numbers. Thus we'll talk about z in terms of a weight vector w, a scalar bias b, and an input vector x, and we'll replace the sum with the convenient dot product:

z = w · x + b    (7.2)

As defined in Eq. 7.2, z is just a real-valued number. Finally, instead of using z, a linear function of x, as the output, neural units apply a non-linear function f to z. We will refer to the output of this function as the activation value for the unit, a. Since we are just modeling a single unit, the activation for the node is in fact the final output of the network, which we'll generally call y. So the value y is defined as:
y = a = f(z)

We'll discuss three popular non-linear functions f() below (the sigmoid, the tanh, and the rectified linear unit or ReLU), but it's pedagogically convenient to start with the sigmoid function since we saw it in Chapter 5:

y = σ(z) = 1 / (1 + e^(−z))    (7.3)
The sigmoid (shown in Fig. 7.1) has a number of advantages; it maps the output into the range [0, 1], which is useful in squashing outliers toward 0 or 1. And it's differentiable, which as we saw in Section 5.8 will be handy for learning. Substituting Eq. 7.2 into Eq. 7.3 gives us the output of a neural unit:

y = σ(w · x + b) = 1 / (1 + exp(−(w · x + b)))    (7.4)

Fig. 7.2 shows a final schematic of a basic neural unit. In this example the unit takes 3 input values x_1, x_2, and x_3, and computes a weighted sum, multiplying each value by a weight (w_1, w_2, and w_3, respectively), adds them to a bias term b, and then passes the resulting sum through a sigmoid function to result in a number between 0 and 1.
Figure 7.2: A neural unit, taking 3 inputs x_1, x_2, and x_3 (and a bias b that we represent as a weight for an input clamped at +1) and producing an output y. We include some convenient intermediate variables: the output of the summation, z, and the output of the sigmoid, a. In this case the output of the unit y is the same as a, but in deeper networks we'll reserve y to mean the final output of the entire network, leaving a as the activation of an individual node.
Let's walk through an example just to get an intuition. Let's suppose we have a unit with the following weight vector and bias:

w = [0.2, 0.3, 0.9]
b = 0.5

What would this unit do with the following input vector?

x = [0.5, 0.6, 0.1]

The resulting output y would be:

y = σ(w · x + b) = 1 / (1 + e^(−(w · x + b))) = 1 / (1 + e^(−(.5*.2 + .6*.3 + .1*.9 + .5))) = 1 / (1 + e^(−0.87)) = .70
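To make this concrete, here is a minimal sketch in Python (using numpy; the chapter itself doesn't prescribe any particular library, so that choice is an assumption) that reproduces the computation above:

import numpy as np

def sigmoid(z):
    # Logistic sigmoid, Eq. 7.3
    return 1 / (1 + np.exp(-z))

w = np.array([0.2, 0.3, 0.9])   # weight vector
b = 0.5                         # bias
x = np.array([0.5, 0.6, 0.1])   # input vector

z = np.dot(w, x) + b            # weighted sum, Eq. 7.2 -> 0.87
y = sigmoid(z)                  # activation, Eq. 7.3  -> 0.7047...
print(z, y)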
In practice, the sigmoid is not commonly used as an activation function. A function that is very similar but almost always better is the tanh function shown in Fig. 7.3a; tanh is a variant of the sigmoid that ranges from −1 to +1:

y = (e^z − e^(−z)) / (e^z + e^(−z))    (7.5)

The simplest activation function, and perhaps the most commonly used, is the rectified linear unit, also called the ReLU, shown in Fig. 7.3b. It's just the same as z when z is positive, and 0 otherwise:

y = max(z, 0)    (7.6)

These activation functions have different properties that make them useful for different language applications or network architectures. For example, the tanh function has the nice properties of being smoothly differentiable and mapping outlier values toward the mean. The rectifier function, on the other hand, has nice properties that result from it being very close to linear. In the sigmoid or tanh functions, very high values of z result in values of y that are saturated, i.e., extremely close to 1, and have derivatives very close to 0. Zero derivatives cause problems for learning, because as we'll see in Section 7.6, we'll train networks by propagating an error signal backwards, multiplying gradients (partial derivatives) from each layer of the network; gradients that are almost 0 cause the error signal to get smaller and smaller until it is too small to be used for training, a problem called the vanishing gradient problem. Rectifiers don't have this problem, since the derivative of ReLU for high values of z is 1 rather than very close to 0.
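Here is a small illustrative sketch (again in Python with numpy, an assumption on our part) of the three activation functions and of the saturation behavior just described; the specific test values are arbitrary:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def tanh(z):
    return np.tanh(z)            # Eq. 7.5

def relu(z):
    return np.maximum(z, 0)      # Eq. 7.6

z = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(sigmoid(z))                # saturates near 0 and 1 for large |z|
print(tanh(z))                   # saturates near -1 and +1
print(relu(z))                   # stays linear for positive z

# Derivatives illustrate the vanishing-gradient point:
print(sigmoid(z) * (1 - sigmoid(z)))  # nearly 0 when |z| = 10
print((z > 0).astype(float))          # ReLU derivative is 1 for any z > 0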
7.2 The XOR Problem

Early in the history of neural networks it was realized that the power of neural networks, as with the real neurons that inspired them, comes from combining these units into larger networks. One of the most clever demonstrations of the need for multi-layer networks was the proof by Minsky and Papert (1969) that a single neural unit cannot compute some very simple functions of its input. Consider the task of computing elementary logical functions of two inputs, like AND, OR, and XOR. As a reminder, here are the truth tables for those functions:

x1  x2 | AND | OR | XOR
 0   0 |  0  |  0 |  0
 0   1 |  0  |  1 |  1
 1   0 |  0  |  1 |  1
 1   1 |  1  |  1 |  0
This example was first shown for the perceptron, which is a very simple neural unit that has a binary output and does not have a non-linear activation function. The output y of a perceptron is 0 or 1, and is computed as follows (using the same weight w, input x, and bias b as in Eq. 7.2):

y = 0 if w · x + b ≤ 0;  y = 1 if w · x + b > 0    (7.7)

It's very easy to build a perceptron that can compute the logical AND and OR functions of its binary inputs; Fig. 7.4 shows the necessary weights. It turns out, however, that it's not possible to build a perceptron to compute logical XOR! (It's worth spending a moment to give it a try!)
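Before reading on, it may help to see a perceptron in code. The sketch below (Python/numpy; the particular weight and bias values are one possible choice, not necessarily the ones shown in Fig. 7.4) implements Eq. 7.7 and verifies AND and OR on all four binary inputs:

import numpy as np

def perceptron(w, b, x):
    # Binary threshold unit, Eq. 7.7
    return 1 if np.dot(w, x) + b > 0 else 0

# One possible choice of weights and biases for AND and OR:
AND = dict(w=np.array([1.0, 1.0]), b=-1.0)
OR  = dict(w=np.array([1.0, 1.0]), b=0.0)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array([x1, x2])
    print(x1, x2,
          perceptron(AND['w'], AND['b'], x),   # AND column of the truth table
          perceptron(OR['w'], OR['b'], x))     # OR column of the truth table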
The intuition behind this important result relies on understanding that a perceptron is a linear classifier. For a two-dimensional input x_1 and x_2, the perceptron equation w_1 x_1 + w_2 x_2 + b = 0 is the equation of a line. (We can see this by putting it in the standard linear format:

x_2 = (−w_1/w_2) x_1 + (−b/w_2).)

This line acts as a decision boundary in two-dimensional space: the output 0 is assigned to all inputs lying on one side of the line, and the output 1 to all input points lying on the other side of the line. If we had more than 2 inputs, the decision boundary becomes a hyperplane instead of a line, but the idea is the same, separating the space into two categories. Fig. 7.5 shows the possible logical inputs (00, 01, 10, and 11) and the line drawn by one possible set of parameters for an AND and an OR classifier. Notice that there is simply no way to draw a line that separates the positive cases of XOR (01 and 10) from the negative cases (00 and 11). We say that XOR is not a linearly separable function. Of course we could draw a boundary with a curve, or some other function, but not a single line.
7.2.1 The solution: neural networks

While the XOR function cannot be calculated by a single perceptron, it can be calculated by a layered network of units. Let's see an example of how to do this from Goodfellow et al. (2016) that computes XOR using two layers of ReLU-based units. Fig. 7.6 shows a figure with the input being processed by two layers of neural units. The middle layer (called h) has two units, and the output layer (called y) has one unit. A set of weights and biases are shown for each ReLU that correctly computes the XOR function.

Let's walk through what happens with the input x = [0, 0]. If we multiply each input value by the appropriate weight, sum, and then add the bias b, we get the vector [0, −1], and we then apply the rectified linear transformation to give the output of the h layer as [0, 0]. Now we once again multiply by the weights, sum, and add the bias (0 in this case), resulting in the value 0. The reader should work through the computation of the remaining 3 possible input pairs to see that the resulting y values are 1 for the inputs [0, 1] and [1, 0], and 0 for [1, 1].

Figure 7.5: The functions AND, OR, and XOR, represented with input x_1 on the x-axis and input x_2 on the y-axis. Filled circles represent perceptron outputs of 1, and white circles perceptron outputs of 0. There is no way to draw a line that correctly separates the two categories for XOR. Figure styled after Russell and Norvig (2002).
Figure 7.6: XOR solution after Goodfellow et al. (2016). There are three ReLU units, in two layers; we've called them h_1, h_2 (h for "hidden layer") and y_1. As before, the numbers on the arrows represent the weights w for each unit, and we represent the bias b as a weight on a unit clamped to +1, with the bias weights/units in gray.
It's also instructive to look at the intermediate results, the outputs of the two hidden nodes h_1 and h_2. We showed in the previous paragraph that the h vector for the input x = [0, 0] was [0, 0]. Fig. 7.7b shows the values of the h layer for all 4 inputs. Notice that the hidden representations of the two input points x = [0, 1] and x = [1, 0] (the two cases with XOR output = 1) are merged to the single point h = [1, 0]. The merger makes it easy to linearly separate the positive and negative cases of XOR. In other words, we can view the hidden layer of the network as forming a representation for the input.
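The following sketch (Python/numpy) runs the network of Fig. 7.6 on all four inputs, using the weights and biases from the walk-through above, and prints both the hidden representation h and the output y; the variable names are ours, not the book's:

import numpy as np

def relu(z):
    return np.maximum(z, 0)

# Weights and biases from the XOR solution of Goodfellow et al. (2016)
W = np.array([[1.0, 1.0],     # weights into h1
              [1.0, 1.0]])    # weights into h2
b = np.array([0.0, -1.0])     # biases for h1, h2
U = np.array([1.0, -2.0])     # weights from h to y1
c = 0.0                       # bias for y1

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(x, dtype=float)
    h = relu(W @ x + b)       # hidden layer
    y = relu(U @ h + c)       # output ReLU unit y1
    print(x, h, y)
# [0,0] -> h=[0,0], y=0;  [0,1] and [1,0] -> h=[1,0], y=1;  [1,1] -> h=[2,1], y=0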
In this example we just stipulated the weights in Fig. 7.6. But for real examples the weights for neural networks are learned automatically using the error backpropagation algorithm to be introduced in Section 7.6. That means the hidden layers will learn to form useful representations. This intuition, that neural networks can automatically learn useful representations of the input, is one of their key advantages, and one that we will return to again and again in later chapters.

Note that the solution to the XOR problem requires a network of units with non-linear activation functions. A network made up of simple linear (perceptron) units cannot solve the XOR problem. This is because a network formed by many layers of purely linear units can always be reduced to (i.e., shown to be computationally identical to) a single layer of linear units with appropriate weights, and we've already shown (visually, in Fig. 7.5) that a single unit cannot solve the XOR problem. We'll return to this question on page 137.
7.3 Feedforward Neural Networks

Let's now walk through a slightly more formal presentation of the simplest kind of neural network, the feedforward network. A feedforward network is a multilayer network in which the units are connected with no cycles; the outputs from units in each layer are passed to units in the next higher layer, and no outputs are passed back to lower layers. (In Chapter 9 we'll introduce networks with cycles, called recurrent neural networks.) For historical reasons multilayer networks, especially feedforward networks, are sometimes called multi-layer perceptrons (or MLPs); this is a technical misnomer, since the units in modern multilayer networks aren't perceptrons (perceptrons are purely linear, but modern networks are made up of units with non-linearities like sigmoids), but at some point the name stuck. Simple feedforward networks have three kinds of nodes: input units, hidden units, and output units. Fig. 7.8 shows a picture.

The input layer x is a vector of simple scalar values just as we saw in Fig. 7.2. The core of the neural network is the hidden layer h formed of hidden units h_i, each of which is a neural unit as described in Section 7.1, taking a weighted sum of its inputs and then applying a non-linearity. In the standard architecture, each layer is fully-connected, meaning that each unit in each layer takes as input the outputs from all the units in the previous layer, and there is a link between every pair of units from two adjacent layers. Thus each hidden unit sums over all the input units.

Recall that a single hidden unit has as parameters a weight vector and a bias. We represent the parameters for the entire hidden layer by combining the weight vector and bias for each unit i into a single weight matrix W and a single bias vector b for the whole layer (see Fig. 7.8). Each element W_ji of the weight matrix W represents the weight of the connection from the ith input unit x_i to the jth hidden unit h_j.

Figure 7.8: A simple 2-layer feedforward network, with one hidden layer, one output layer, and one input layer (the input layer is usually not counted when enumerating layers).
The advantage of using a single matrix W for the weights of the entire layer is that now the hidden layer computation for a feedforward network can be done very efficiently with simple matrix operations. In fact, the computation has only three steps: multiplying the weight matrix by the input vector x, adding the bias vector b, and applying the activation function g (such as the sigmoid, tanh, or ReLU activation function defined above).

The output of the hidden layer, the vector h, is thus the following (for this example we'll use the sigmoid function σ as our activation function):

h = σ(Wx + b)    (7.8)

Notice that we're applying the σ function here to a vector, while in Eq. 7.3 it was applied to a scalar. We're thus allowing σ(·), and indeed any activation function g(·), to apply to a vector element-wise, so

g([z_1, z_2, z_3]) = [g(z_1), g(z_2), g(z_3)].
Let's introduce some constants to represent the dimensionalities of these vectors and matrices. We'll refer to the input layer as layer 0 of the network, and have n_0 represent the number of inputs, so x is a vector of real numbers of dimension n_0, or more formally x ∈ R^{n_0}, a column vector of dimensionality [n_0, 1]. Let's call the hidden layer layer 1 and the output layer layer 2. The hidden layer has dimensionality n_1, so h ∈ R^{n_1} and also b ∈ R^{n_1} (since each hidden unit can take a different bias value). And the weight matrix W has dimensionality W ∈ R^{n_1 × n_0}, i.e. [n_1, n_0].

Take a moment to convince yourself that the matrix multiplication in Eq. 7.8 will compute the value of each h_j as σ(Σ_{i=1}^{n_0} W_ji x_i + b_j).
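As a sanity check, here is a short sketch (Python/numpy, with arbitrary random values standing in for real parameters) showing that the matrix form of Eq. 7.8 computes exactly that per-unit sum:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n0, n1 = 3, 4                        # input and hidden dimensionalities (illustrative)
rng = np.random.default_rng(0)
W = rng.normal(size=(n1, n0))        # W in R^{n1 x n0}
b = rng.normal(size=n1)              # b in R^{n1}
x = rng.normal(size=n0)              # x in R^{n0}

h = sigmoid(W @ x + b)               # Eq. 7.8: all hidden units at once

# The matrix form agrees with the per-unit sum sigma(sum_i W_ji x_i + b_j):
h_check = np.array([sigmoid(sum(W[j, i] * x[i] for i in range(n0)) + b[j])
                    for j in range(n1)])
assert np.allclose(h, h_check)
print(h)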
As we saw in Section 7.2, the resulting value h (for hidden but also for hypothesis) forms a representation of the input. The role of the output layer is to take this new representation h and compute a final output. This output could be a real-valued number, but in many cases the goal of the network is to make some sort of classification decision, and so we will focus on the case of classification.

If we are doing a binary task like sentiment classification, we might have a single output node, and its scalar value y is the probability of positive versus negative sentiment. If we are doing multinomial classification, such as assigning a part-of-speech tag, we might have one output node for each potential part of speech, whose output value is the probability of that part of speech, and the values of all the output nodes must sum to one. The output layer is thus a vector y that gives a probability distribution across the output nodes.

Let's see how this happens. Like the hidden layer, the output layer has a weight matrix (let's call it U), but some models don't include a bias vector b in the output layer, so we'll simplify by eliminating the bias vector in this example. The weight matrix is multiplied by its input vector (h) to produce the intermediate output z:

z = Uh

There are n_2 output nodes, so z ∈ R^{n_2}, the weight matrix U has dimensionality U ∈ R^{n_2 × n_1}, and element U_ij is the weight from unit j in the hidden layer to unit i in the output layer.
However, z can't be the output of the classifier, since it's a vector of real-valued numbers, while what we need for classification is a vector of probabilities. There is a convenient function for normalizing a vector of real values, by which we mean converting it to a vector that encodes a probability distribution (all the numbers lie between 0 and 1 and sum to 1): the softmax function that we saw on page 91 of Chapter 5. For a vector z of dimensionality d, the softmax is defined as:

softmax(z_i) = exp(z_i) / Σ_{j=1}^{d} exp(z_j),  1 ≤ i ≤ d    (7.9)
Thus, for example, given a vector of real-valued scores z, the softmax function will normalize it to a probability distribution.

You may recall that softmax was exactly what is used to create a probability distribution from a vector of real-valued numbers (computed from summing weights times features) in the multinomial version of logistic regression in Chapter 5. That means we can think of a neural network classifier with one hidden layer as building a vector h which is a hidden layer representation of the input, and then running standard logistic regression on the features that the network develops in h. By contrast, in Chapter 5 the features were mainly designed by hand via feature templates. So a neural network is like logistic regression, but (a) with many layers, since a deep neural network is like layer after layer of logistic regression classifiers, and (b) rather than forming the features by feature templates, the prior layers of the network induce the feature representations themselves.
Here are the final equations for a feedforward network with a single hidden layer, which takes an input vector x, outputs a probability distribution y, and is parameterized by weight matrices W and U and a bias vector b:

h = σ(Wx + b)
z = Uh
y = softmax(z)    (7.12)
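A minimal sketch of Eq. 7.12 in Python/numpy follows; the layer sizes and parameter values are arbitrary, and subtracting the maximum inside the softmax is a standard numerical-stability trick rather than part of the equations above:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    # Eq. 7.9; subtracting max(z) keeps the exponentials from overflowing
    e = np.exp(z - np.max(z))
    return e / e.sum()

def two_layer_net(x, W, b, U):
    # Eq. 7.12: h = sigmoid(Wx + b), z = Uh, y = softmax(z)
    h = sigmoid(W @ x + b)
    z = U @ h
    return softmax(z)

rng = np.random.default_rng(0)
n0, n1, n2 = 3, 4, 2                 # input, hidden, and output sizes (illustrative)
W = rng.normal(size=(n1, n0))
b = rng.normal(size=n1)
U = rng.normal(size=(n2, n1))
x = rng.normal(size=n0)

y = two_layer_net(x, W, b, U)
print(y, y.sum())                    # a probability distribution; sums to 1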
We'll call this network a 2-layer network (we traditionally don't count the input layer when numbering layers, but do count the output layer). So by this terminology logistic regression is a 1-layer network.
7.3.1 More details on feedforward networks

Let's now set up some notation to make it easier to talk about deeper networks of depth more than 2. We'll use superscripts in square brackets to mean layer numbers, starting at 0 for the input layer. So W^[1] will mean the weight matrix for the (first) hidden layer, and b^[1] will mean the bias vector for the (first) hidden layer. n_j will mean the number of units at layer j. We'll use g(·) to stand for the activation function, which will tend to be ReLU or tanh for intermediate layers and softmax for output layers. We'll use a^[i] to mean the output from layer i, and z^[i] to mean the combination of weights and biases W^[i] a^[i−1] + b^[i]. The 0th layer is for inputs, so we'll refer to the inputs x more generally as a^[0].

Thus we can re-represent our 2-layer net from Eq. 7.12 as follows:
z^[1] = W^[1] a^[0] + b^[1]
a^[1] = g^[1](z^[1])
z^[2] = W^[2] a^[1] + b^[2]
a^[2] = g^[2](z^[2])
y = a^[2]    (7.13)
Note that with this notation, the equations for the computation done at each layer are the same. The algorithm for computing the forward step in an n-layer feedforward network, given the input vector a^[0], is thus simply:
for i in 1..n
    z^[i] = W^[i] a^[i−1] + b^[i]
    a^[i] = g^[i](z^[i])
y = a^[n]

The activation functions g(·) are generally different at the final layer. Thus g^[2] might be softmax for multinomial classification or sigmoid for binary classification, while ReLU or tanh might be the activation function g(·) at the internal layers.
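The same forward-pass loop can be written directly in code. The sketch below (Python/numpy; the architecture and random parameters are purely illustrative) mirrors the pseudocode above:

import numpy as np

def relu(z):
    return np.maximum(z, 0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def forward(a0, params, activations):
    # Forward pass for an n-layer feedforward net.
    # params: list of (W, b) pairs for layers 1..n
    # activations: list of activation functions g[1..n]
    a = a0
    for (W, b), g in zip(params, activations):
        z = W @ a + b        # z[i] = W[i] a[i-1] + b[i]
        a = g(z)             # a[i] = g[i](z[i])
    return a                 # y = a[n]

rng = np.random.default_rng(0)
sizes = [3, 4, 4, 2]         # n0..n3, an arbitrary illustrative architecture
params = [(rng.normal(size=(m, n)), rng.normal(size=m))
          for n, m in zip(sizes[:-1], sizes[1:])]
activations = [relu, relu, softmax]   # ReLU inside, softmax at the output

y = forward(rng.normal(size=sizes[0]), params, activations)
print(y, y.sum())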
More on the need for non-linear activation functions. We mentioned in Section 7.2 that one of the reasons we use non-linear activation functions for each layer in a neural network is that if we did not, the resulting network would be exactly equivalent to a single-layer network. Now that we have the notation for multilayer networks, we can see that intuition in more detail. Imagine the first two layers of such a network of purely linear layers:
z^[1] = W^[1] x + b^[1]
z^[2] = W^[2] z^[1] + b^[2]

We can rewrite the function that the network is computing as:

z^[2] = W^[2] z^[1] + b^[2]
      = W^[2] (W^[1] x + b^[1]) + b^[2]
      = W^[2] W^[1] x + W^[2] b^[1] + b^[2]
      = W′ x + b′    (7.14)
This generalizes to any number of layers. So without non-linear activation functions, a multilayer network is just a notational variant of a single-layer network with a different set of weights, and we lose all the representational power of multilayer networks as we discussed in Section 7.2.
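A quick numerical check of this collapse, with arbitrary random weights (the names W_prime and b_prime are ours, standing for W′ and b′ in Eq. 7.14):

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(2, 4)); b2 = rng.normal(size=2)
x  = rng.normal(size=3)

# Two purely linear layers...
z2 = W2 @ (W1 @ x + b1) + b2

# ...collapse to a single linear layer, as in Eq. 7.14:
W_prime = W2 @ W1
b_prime = W2 @ b1 + b2
assert np.allclose(z2, W_prime @ x + b_prime)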
Replacing the bias unit. In describing networks, we will often use a slightly simplified notation that represents exactly the same function without referring to an explicit bias node b. Instead, we add a dummy node a_0 to each layer whose value will always be 1. Thus layer 0, the input layer, will have a dummy node a_0^[0] = 1, layer 1 will have a_0^[1] = 1, and so on. This dummy node still has an associated weight, and that weight represents the bias value b. For example, instead of an equation like

h = σ(Wx + b)    (7.15)

we'll use:

h = σ(Wx)    (7.16)
But now instead of our vector x having n_0 values, x = x_1, ..., x_{n_0}, it will have n_0 + 1 values, with a new dummy value x_0 = 1: x = x_0, x_1, ..., x_{n_0}. And instead of computing each h_j as follows:
h_j = σ(Σ_{i=1}^{n_0} W_ji x_i + b_j),    (7.17)

we'll instead use:

h_j = σ(Σ_{i=0}^{n_0} W_ji x_i),    (7.18)
where the value W_j0 replaces what had been b_j. Fig. 7.9 shows a visualization.

Figure 7.9: Replacing the bias node (shown in a) with x_0 (b).
We'll continue showing the bias as b when we go over the learning algorithm in Section 7.6, but then we'll switch to this simplified notation without explicit bias terms for the rest of the book.
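The following sketch (Python/numpy, with arbitrary random parameters) confirms that folding the bias into column 0 of an augmented weight matrix, with a dummy input x_0 = 1, gives the same hidden layer as the explicit-bias form:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
n0, n1 = 3, 4
W = rng.normal(size=(n1, n0))
b = rng.normal(size=n1)
x = rng.normal(size=n0)

# Explicit bias (Eq. 7.15 / 7.17): h = sigmoid(Wx + b)
h_explicit = sigmoid(W @ x + b)

# Folded bias (Eq. 7.16 / 7.18): prepend x0 = 1 and make b column 0 of W
x_aug = np.concatenate(([1.0], x))     # x = [x0, x1, ..., x_{n0}]
W_aug = np.hstack([b[:, None], W])     # W_j0 plays the role of b_j
h_folded = sigmoid(W_aug @ x_aug)

assert np.allclose(h_explicit, h_folded)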
7.4 Feedforward networks for NLP: Classification

Let's see how to apply feedforward networks to NLP tasks! In this section we'll look at classification tasks like sentiment analysis; in the next section we'll introduce neural language modeling.
Let's begin with a simple two-layer sentiment classifier. You might imagine taking our logistic regression classifier of Chapter 5, which corresponds to a 1-layer network, and just adding a hidden layer. The input element x_i could be scalar features like those in Fig. 5.2, e.g., x_1 = count(words ∈ doc), x_2 = count(positive lexicon words ∈ doc), x_3 = 1 if "no" ∈ doc, and so on. And the output layer y could have two nodes (one each for positive and negative), or three nodes (positive, negative, neutral), in which case y_1 would be the estimated probability of positive sentiment, y_2 the probability of negative, and y_3 the probability of neutral. The resulting equations would be just what we saw above for a two-layer network (as sketched in Fig. 7.10):
x = vector of hand-designed features
h = σ(Wx + b)
z = Uh
y = softmax(z)    (7.19)
As we mentioned earlier, adding this hidden layer to our logistic regression classifier allows the network to represent the non-linear interactions between features. This alone might give us a better sentiment classifier.
Most neural NLP applications do something different, however. Instead of using hand-built human-engineered features as the input to our classifier, we draw on deep learning's ability to learn features from the data by representing words as word2vec or GloVe embeddings (Chapter 6). For a text with n input words/tokens w_1, ..., w_n, the input vector will be the concatenated embeddings of the n words: [e_{w_1}; ...; e_{w_n}]. If we use the semicolon ';' to mean concatenation of vectors, the equations for our sentiment classifier will be (as sketched in Fig. 7.11):

x = [e_{w_1}; e_{w_2}; ...; e_{w_n}]
h = σ(Wx + b)
z = Uh
y = softmax(z)

The idea of using word2vec or GloVe embeddings as our input representation, and more generally the idea of relying on another algorithm to have already learned an embedding representation for our input words, is called pretraining. Using pretrained embedding representations, whether simple static word embeddings like word2vec or the more powerful contextual embeddings we'll introduce in Chapter 11, is one of the central ideas of deep learning. (It's also possible, however, to train the word embeddings as part of an NLP task; we'll talk about how to do this in Section 7.7 in the context of the neural language modeling task.)
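Here is a minimal sketch of such a classifier in Python/numpy. The tiny vocabulary, the random embedding matrix standing in for pretrained word2vec or GloVe vectors, and the untrained random weights are all assumptions for illustration only; a real system would load pretrained embeddings and learn W, b, and U as described in Section 7.6:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

rng = np.random.default_rng(0)
d = 50                                   # embedding dimensionality (illustrative)
vocab = {"this": 0, "movie": 1, "was": 2, "great": 3}
E = rng.normal(size=(len(vocab), d))     # stand-in for a pretrained embedding matrix

tokens = ["this", "movie", "was", "great"]
x = np.concatenate([E[vocab[w]] for w in tokens])   # x = [e_w1; ...; e_wn]

n0, n1, n2 = len(tokens) * d, 16, 3      # 3 output classes: positive, negative, neutral
W = rng.normal(size=(n1, n0)) * 0.01
b = np.zeros(n1)
U = rng.normal(size=(n2, n1)) * 0.01

h = sigmoid(W @ x + b)
y = softmax(U @ h)                       # untrained weights, so roughly uniform
print(y)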
7.5 Feedforward Neural Language Modeling

As our second application of feedforward networks, let's consider language modeling: predicting upcoming words from prior word context. Neural language modeling is an important NLP task in itself, and it plays a role in many important algorithms for tasks like machine translation, summarization, speech recognition, grammar correction, and dialogue. We'll describe simple feedforward neural language models, first introduced by Bengio et al. (2003). While modern neural language models are based on more powerful architectures like the recurrent nets or transformer networks to be introduced in Chapter 9, the feedforward language model introduces many of the important concepts of neural language modeling.

Neural language models have many advantages over the n-gram language models of Chapter 3. Compared to n-gram models, neural language models can handle much longer histories, can generalize better over contexts of similar words, and are more accurate at word prediction. On the other hand, neural net language models are much more complex, slower to train, and less interpretable than n-gram models, so for many (especially smaller) tasks an n-gram language model is still the right tool.
A feedforward neural LM is a feedforward network that takes as input at time t a representation of some number of previous words (w_{t−1}, w_{t−2}, etc.) and outputs a probability distribution over possible next words. Thus, like the n-gram LM, the feedforward neural LM approximates the probability of a word given the entire prior context P(w_t | w_{1:t−1}) by approximating based on the N previous words:

P(w_t | w_1, ..., w_{t−1}) ≈ P(w_t | w_{t−N+1}, ..., w_{t−1})
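Although the chapter's detailed equations for the feedforward LM come later, the idea described so far can already be sketched in code. Everything below (vocabulary size, dimensions, random parameters, the ReLU hidden layer, the function name) is an illustrative assumption rather than the book's specification:

import numpy as np

def relu(z):
    return np.maximum(z, 0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

rng = np.random.default_rng(0)
V, d, N, dh = 1000, 64, 3, 128       # vocab size, embedding dim, context size N, hidden dim

E = rng.normal(size=(V, d)) * 0.01   # embedding matrix (learned or pretrained in practice)
W = rng.normal(size=(dh, N * d)) * 0.01
b = np.zeros(dh)
U = rng.normal(size=(V, dh)) * 0.01

def next_word_distribution(context_ids):
    # Approximates P(w_t | w_{t-N}, ..., w_{t-1}) from a window of N previous word ids
    x = np.concatenate([E[i] for i in context_ids])   # concatenated context embeddings
    h = relu(W @ x + b)                               # hidden layer
    return softmax(U @ h)                             # distribution over the vocabulary

p = next_word_distribution([17, 4, 231])              # arbitrary word ids
print(p.shape, p.sum())                               # (1000,) 1.0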